The Real AI Challenge for Business Leaders: Making Decisions Without the Map
- No Ordinary Pigeon

- Nov 17

When I started working with business leaders on AI strategy, I expected to encounter resistance. Scepticism. Perhaps some anxiety about job losses or concerns about cost.
What I've found instead is something far more fundamental.
The Knowledge Gap That's Holding Leaders Back
Regardless of sector, size of business, or nature of offering, the leaders I work with are struggling to know what to do with AI, because they don't know what they can do with AI.
A CEO said it perfectly in a recent meeting: "How do I know I'm making the right decisions when I don't fully understand what's possible?"
Not tentatively. Not apologetically. Directly, with the kind of clarity that comes from sitting with a problem for a while and recognising it for what it is.
This is the real leadership challenge. Not lack of interest. Not resistance. Not even budget constraints or technical capability within the team.
It's trying to make strategic decisions with incomplete information about what AI can actually do today.
The Strategic Blind Spot
Think about how you normally make strategic decisions in your business. You understand your market. You know your customers. You can evaluate operational challenges because you've spent years building expertise in your domain.
But AI capability? That's a different challenge entirely.
You can't evaluate opportunities you don't know exist. You can't prioritise solutions you don't know are solvable. And you certainly can't build a roadmap when the map itself keeps changing.
This creates a peculiar kind of paralysis, not from fear or unwillingness, but from the simple reality that strategic decision-making requires information you don't currently have access to.
The Technical Knowledge Trap
Here's what this challenge isn't about: being technical enough.
I've worked with finance firms, manufacturing businesses, and technology companies.
Technical literacy doesn't solve this problem. In fact, sometimes it makes it worse, because technical leaders can get drawn into the mechanics of how AI works rather than what it can achieve for the business.
The gap isn't about understanding machine learning algorithms or knowing the difference between GPT-5.1 and Claude Sonnet 4.5. The gap is between running a business - which you know intimately - and understanding AI capability in a way that lets you spot opportunities and evaluate their feasibility.
One leader told me they'd spent hours watching YouTube tutorials on prompt engineering.
Another had attended three conferences on AI transformation. Both felt more confused afterwards, not less.
Why? Because they were trying to close the wrong gap. They were learning about AI when what they needed was to understand what AI could do for their specific business challenges.
When the Map Keeps Changing
The pace of change compounds this challenge in ways that are unique to AI.
In most areas of business technology, you can learn the landscape and expect it to remain relatively stable for a year or two. You can attend a training course, read some case studies, speak to a few implementers, and build a reasonable understanding of what's possible.
With AI, the landscape changes every quarter, or sooner. New capabilities emerge. Tools that seemed experimental six months ago are now production-ready. Use cases that were theoretical are now routine.
This isn't a criticism of AI development; it's extraordinary to watch. But it places an impossible burden on business leaders who are already juggling a dozen other strategic priorities.
The question isn't "Should we do AI?" Most leaders have moved past that question. They know AI represents significant opportunity. They're seeing competitors experiment with it. They're hearing about efficiency gains and new capabilities.
The real question is: "How do we navigate this strategically when we don't have time to become AI experts ourselves?"
The Prioritisation Problem
When you finally do understand what's possible with AI, even partially, you face a further challenge: everything suddenly seems possible.
I've watched this moment happen in multiple client conversations. There's a shift from "I'm not sure what we could do with this" to "We could use this for customer service, and proposal writing, and data analysis, and process documentation, and…"
The problem then isn't finding one use case anymore. It's having twenty potential use cases and no clear way to evaluate which ones will deliver the most value, which ones are feasible to implement, and which ones should come first.
This is where having an evaluation framework - particularly around ROI - becomes crucial. Not everything that's possible is valuable. Not everything that's valuable is achievable right now. And not everything that's achievable is worth the investment required.
What This Means for Leaders
If you're feeling uncertain about AI decisions, that uncertainty is rational. You're not behind. You're not failing to keep up. You're experiencing the normal response to being asked to make strategic decisions in an area where you don't yet have the information required to make them confidently.
This challenge is showing up in board meetings and investor conversations. It's affecting how leaders build business cases and evaluate technology investments. It's creating a gap between the speed at which businesses need to move and the speed at which leaders can build genuine understanding.
The leadership challenge isn't about becoming an AI expert. It's about finding ways to make informed decisions despite incomplete information, which, admittedly, is something leaders do all the time. But AI's rapid evolution and broad applicability make this particularly acute.
Moving Forward
The leaders who are making progress aren't the ones who've spent the most time learning about AI. They're the ones who've found ways to bridge the gap between their business expertise and AI capability knowledge.
Sometimes that's through trusted advisors who can translate between business problems and technical solutions. Sometimes it's through careful experimentation that builds first-hand understanding. Sometimes it's through peer networks where leaders share what they're learning.
But it always starts with acknowledging the gap itself, recognising that uncertainty about AI decisions comes from a genuine information asymmetry, not from personal inadequacy or lack of effort.
Because once you recognise the challenge for what it is, you can start addressing it strategically rather than reactively.
And that's when the real work begins.
