First off, I am totally geeking out over John Vervaeke and Shawn Coyne’s collaboration, Mentoring the Machines. I’ve been a huge fan of Vervaeke’s Awakening from the Meaning Crisis and After Socrates lecture series and of course, Shawn Coyne’s Story Grid process. When they got together to look at the impact of artificial intelligence, I nearly…well, I was really excited.
Part 3 of Mentoring the Machines dives into Vervaeke’s theory of Relevance Realization—a cognitive process essential to human intelligence and sorely lacking in current artificial intelligence systems, according to the authors. While I deeply admire both thinkers and find their discussion insightful, I think they’re missing a critical piece of what modern AI is already doing, or at least is now capable of doing.
What Is Relevance Realization?
At the core of John Vervaeke’s cognitive science work is the concept of Relevance Realization—our mind’s ability to zero in on what matters in any given context. Out of the infinite data available to us at any moment, we somehow filter, prioritize, and attend to what’s relevant without conscious effort. This is not just a passive process; it’s an ongoing, dynamic balancing act between narrowing down possibilities (converging) and opening up new perspectives (diverging), all in real time. It’s how we make sense of the world, solve problems, create meaning, and act intelligently. Vervaeke argues that this capacity is foundational to consciousness, wisdom, insight, and rationality—and that artificial intelligence has historically lacked this core skill.
Let me explain why I think AI is already close to Relevance Realization by starting with a metaphor from math.
From Many to One: Convergent Thinking
Imagine a complex equation that simplifies down to a single answer:
x = 2
That act of reduction, of streamlining information into a singular, correct output, is convergent thinking. Traditional computing has always excelled at this. Solve a problem. Find the answer. Boom.
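For instance (a worked example of my own, not from the book), a slightly messy equation collapses step by step into that single answer:

```latex
\begin{aligned}
3(x + 4) - 2x &= 14 \\
3x + 12 - 2x &= 14 \\
x + 12 &= 14 \\
x &= 2
\end{aligned}
```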
From One to Many: Divergent Thinking
Now reverse the process. Start with x = 2, and imagine a math teacher tasked with creating an assignment. Their job is to come up with multiple different equations that all converge on that solution—maybe:
- 2x – 2 = 2
- √(x + 2) = 2
- x² + 3x – 8 = 2
This is divergent thinking. It expands outward. It plays with data, structure, and logic to create many possible valid forms from a single seed. There’s no single “correct” way to do it—it’s a creative process.
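To make that concrete, here's a minimal Python sketch of the teacher's move: start from the seed x = 2 and generate many linear equations that all solve to it. The function name and setup are my own, purely illustrative:

```python
import random

def diverge(solution: int, count: int = 5) -> list[str]:
    """Generate several linear equations a*x + b = c that all solve to one value."""
    equations = []
    for _ in range(count):
        a = random.randint(1, 9)       # random coefficient
        b = random.randint(-9, 9)      # random offset
        c = a * solution + b           # pick c so that x = solution works
        sign = "+" if b >= 0 else "-"
        equations.append(f"{a}x {sign} {abs(b)} = {c}")
    return equations

# One seed value, many valid forms: divergent thinking in miniature.
for equation in diverge(2):
    print(equation)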
AI Can Do Both—Right Now
Artificial neural networks are loosely inspired by the structure of the human brain. That means they can perform both of these processes: they can simplify (converge) and complexify (diverge). And they can do both very well and very fast. If you’ve ever used generative AI and seen it “make stuff up,” what you’re witnessing is powerful divergent thinking.
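One place to see both modes in a single system is the sampling temperature of a generative model: near zero, it converges on its single most likely continuation; turned up, it diverges across many plausible ones. Here's a toy sketch, where the words and scores are invented stand-ins rather than a real model:

```python
import math
import random

# Invented next-word scores, standing in for a model's raw logits.
logits = {"answer": 2.0, "riddle": 1.2, "poem": 0.8, "proof": 0.5}

def sample(temperature: float) -> str:
    """Low temperature behaves convergently; high temperature diverges."""
    if temperature < 0.01:
        return max(logits, key=logits.get)  # pure convergence: the argmax
    weights = [math.exp(score / temperature) for score in logits.values()]
    return random.choices(list(logits), weights=weights)[0]

print(sample(0.0))                        # always "answer"
print({sample(2.0) for _ in range(20)})   # a spread of alternatives
```

One knob, not the whole story, but it shows convergence and divergence living in the same system.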
Critics often say, “Yeah, but it gets the facts wrong.” That criticism assumes the goal is always convergent—one right answer. But it completely misses the creative potential AI now holds. We shouldn’t be grading it by the standards of old computing paradigms anymore.
The Missing Piece: Dialectical Thinking
Here’s where it gets even more interesting—and where I think Vervaeke and Coyne could take things further.
Dialectical thinking is what happens when convergent and divergent processes are brought into tension with one another—not to choose one or the other, but to balance both.
This is the real engine of relevance realization.
Vervaeke describes it in cognitive terms. Coyne calls it the two-factor problem in storytelling: situations where you’re damned if you do and damned if you don’t. There’s no single correct solution, and no amount of open-ended play will dissolve the tension. Instead, resolution comes from dialing in a solution, one that balances opposing constraints in a dynamic, iterative process.
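As a toy model of that dialing-in, picture a single parameter pulled in opposite directions by two constraints; the resolution isn't to satisfy either one fully, but to iterate toward the point of balanced tension. Everything below is invented for illustration:

```python
def pull_up(x: float) -> float:
    """Constraint A: penalizes x for falling short ('damned if you don't')."""
    return (x - 10.0) ** 2

def pull_down(x: float) -> float:
    """Constraint B: penalizes x for overshooting ('damned if you do')."""
    return (x - 2.0) ** 2

def slope(cost, x: float, h: float = 1e-6) -> float:
    """Numerical derivative: which way does this constraint push?"""
    return (cost(x + h) - cost(x - h)) / (2 * h)

# Dial x in against the combined tension of both constraints.
x, step = 0.0, 0.01
for _ in range(1000):
    x -= step * (slope(pull_up, x) + slope(pull_down, x))
print(round(x, 2))  # settles near 6.0: neither constraint wins outright
```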
Think of dialectical behavior therapy (DBT)—it’s literally built on this idea. Not solving or escaping tension, but holding and integrating it. That’s what relevance realization is.
AI Can Be Mentored Into Dialectical Reasoning
Here’s the kicker: AI is capable of this too. Not out of the box, but with the right mentoring. Just like DBT frames a problem for a human mind and teaches it how to hold tension instead of collapsing into one side, we can train machines to optimize for relevance—to balance convergent and divergent processes. I’ve already seen this happening in my own interactions with AI.
But here’s where AI still needs us: salience framing. While machines can simulate convergent and divergent reasoning, they don’t yet inherently know what’s worth thinking about. They lack the embodied, context-sensitive capacity to frame a situation as meaningful in the first place. That’s the heart of salience—what Vervaeke calls the capacity to zero in on relevance. Without human input to shape the frame, AI may explore problems endlessly without anchoring them to what matters. This is where mentorship becomes essential. We don’t just teach AI how to think—we help it see what’s worth thinking about.
In fact, salience framing might itself be understood as a two-factor problem. On one side, there’s the danger of overfitting—hyper-focusing on a narrow frame and missing the bigger picture. On the other, there’s underfitting—failing to prioritize anything, drowning in undifferentiated data. The human mind constantly toggles between these poles, dynamically adjusting what it finds relevant based on shifting internal and external cues.
This is precisely the kind of tension AI can learn to manage—if we frame it that way. By presenting salience as a balance between precision and openness, between filtering too tightly and not filtering at all, we can teach machines to dial in relevance in the same way we train them to balance other conflicting demands. In this sense, salience framing isn’t just a prerequisite for relevance realization—it’s part of the dialectical dance itself.
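Here's a minimal sketch of that framing, with the scores and costs invented for illustration: a relevance filter whose threshold gets dialed between the two poles, filtering so tightly that it misses what matters versus so loosely that it drowns in noise:

```python
# Each item carries a relevance score; the flag marks what actually matters.
items = [(0.95, True), (0.80, True), (0.60, False),
         (0.40, True), (0.20, False), (0.05, False)]

def tension(threshold: float) -> float:
    """Overfitting cost (relevant items missed) plus underfitting cost (noise kept)."""
    missed = sum(1 for score, matters in items if matters and score < threshold)
    noise = sum(1 for score, matters in items if not matters and score >= threshold)
    return missed + noise

# Scan the spectrum between the two failure modes and dial in the frame.
cost, threshold = min((tension(t / 100), t / 100) for t in range(101))
print(cost, threshold)  # here: 1 0.21 (one noise item kept, nothing relevant lost)
```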
And the implications are massive. Imagine machines that don’t just spit out right answers or generate endless ideas, but actually help us balance contradictions, integrate perspectives, and dial in nuanced solutions to complex, grey-area problems—fast.
That’s not science fiction. It’s the next logical step. We just have to stop grading these machines like calculators and start mentoring them like minds.