Everyone’s obsessed with making AI reason more deeply – training models to solve complex mathematics and master intricate proofs. But in this race for artificial intelligence, we’ve forgotten something fundamental about human intelligence: how we actually think and work with others.
Deep reasoning is important, but it’s only half the story. What if, instead of treating AI as tools to be prompted, we could engage with them as naturally as we do with brilliant colleagues – each bringing a unique perspective to the conversation? This isn’t just about making AI think deeper – it’s about unlocking its breadth of knowledge through better questions.
Asking good questions is harder than finding answers. Try this mental exercise: give yourself two minutes to write down ten meaningful questions about different topics. Hard, isn’t it? The difficulty lies in knowing where to even begin. With regular problems, we at least have a starting point – a puzzle to solve, a goal to reach. But with questions, we’re creating the map before we know the territory. It’s not about finding the right path; it’s about imagining what paths might exist in the first place.
And this is where LLMs shine: they’re incredible brainstorming partners. Not just because they can process vast amounts of information, but because they can consider countless angles that might never occur to us. Humans are limited by our individual experiences and cognitive biases. LLMs face far fewer of these constraints: trained on the breadth of human writing, they can draw connections between seemingly unrelated concepts and suggest possibilities we might never have considered.
The challenge isn’t that LLMs lack knowledge – they have plenty. The real problem is that we don’t know how to extract it effectively. We initially thought prompt engineering was the magic key, but that turned out to be too simplistic. It works, but it feels mechanical and constrained. We don’t think about conversation frameworks or prompt strategies when chatting with friends. We just… talk.
What we need is AI that can match this natural fluidity of human conversation. Imagine an AI system that could seamlessly shift between different types of expertise, letting you talk with a whole range of experts: discuss quantum physics with a physicist like Albert Einstein, then transition to Renaissance art with an art historian like Giorgio Vasari. This isn’t merely about role-playing – it’s about accessing diverse perspectives, specialized knowledge, and different ways of thinking about problems.
I’ve started calling this concept “persona-driven discovery,” and I believe it could revolutionize how we learn and solve problems by acting as a catalyst for serendipity.
We’ve all had those magical moments in libraries where we stumble upon exactly the book we needed but weren’t looking for. These AI systems could create those moments deliberately, suggesting unexpected perspectives and prompting us to explore unfamiliar territories. It’s like having a brilliant friend who knows when to push you out of your intellectual comfort zone.
All of this points toward a future where AI tools aren’t just answering our questions but actively participating in our thinking process. They could help us prototype ideas faster, facilitate group brainstorming sessions, and create personalized learning experiences that adapt to our individual ways of thinking.
The real breakthrough will come when we stop thinking about these systems as tools and start thinking about them as thought partners. This shift isn’t just semantic – it’s fundamental to how we might solve problems in the future. Instead of asking an AI to complete a task, we might engage it in a genuine dialogue that helps us see our challenges from new angles.
The building blocks are already there: we have models that can process and generate human-like text, we have systems that can maintain context in conversations, and we’re developing better ways to keep AI knowledge current and relevant.
There are still unsolved problems. When we make AI systems more specialized (like training them to be history experts), they often lose their broader capabilities. It’s the same trade-off you get with human experts – deep knowledge in one area often comes at the cost of breadth. The trick will be creating systems that can maintain both depth and breadth, switching between different modes of thinking without losing their fundamental capabilities.
This is what role-aware AI systems could offer – not just rigid question-and-answer sessions, but fluid conversations where different perspectives emerge organically as needed. Each AI “participant” would bring their unique expertise and viewpoint while staying current with the latest developments in their field. They would build on each other’s insights, challenge assumptions, and help you see problems from angles you might never have considered on your own. The key isn’t just having access to different types of knowledge, but having them work together in a way that mirrors the natural give-and-take of human conversation.
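To make the idea concrete, here is a minimal sketch of that orchestration in Python. Everything in it is hypothetical – the `Persona` class, the `roundtable` loop, and the `respond` callback are illustrative names, not any real product’s API; in practice `respond` would wrap a call to a language model, and here it’s a stub.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Persona:
    """One 'participant': a name plus the framing that shapes its answers."""
    name: str
    framing: str  # e.g. a system prompt describing the persona's expertise

def roundtable(question: str,
               personas: List[Persona],
               respond: Callable[[str, str, str], str]) -> List[Tuple[str, str]]:
    """One pass of the discussion: each persona answers the question while
    seeing the turns that came before, so it can build on or challenge them."""
    transcript: List[Tuple[str, str]] = []
    for persona in personas:
        prior = "\n".join(f"{name}: {reply}" for name, reply in transcript)
        reply = respond(persona.framing, prior, question)
        transcript.append((persona.name, reply))
    return transcript

# Stub responder standing in for a real model call.
def stub_respond(framing: str, prior: str, question: str) -> str:
    note = " (building on the turns above)" if prior else ""
    return f"A {framing} take on {question!r}{note}"

transcript = roundtable(
    "Why do some ideas feel obvious only in hindsight?",
    [Persona("Physicist", "quantum physics"),
     Persona("Art historian", "Renaissance art")],
    stub_respond,
)
```

The design choice worth noting is that each persona receives the running transcript, not just the question – that’s what turns isolated answers into the give-and-take described above.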
The potential impact of this shift could be profound. Just as the internet changed how we access information, these AI thought partners could change how we process and use that information. They could help us break out of our mental ruts, see connections we might have missed, and approach problems from angles we might never have considered.
This is the future I’m excited about – not one where AI replaces human thinking, but one where it enhances and expands it in ways we’re just beginning to imagine.
BTW, this essay was written after a long brainstorming session with the “Paul Graham” Hachi. You can check it out here: https://go.hachizone.ai/pg-think-wider
Please message me at jing AT hachizone.ai if you have any feedback on this essay or Hachi in general.
