I have sat in a lot of rooms where the conversation starts with "which AI tool should we use?" before anyone has asked what problem they are actually trying to solve.

A developer wants to experiment. A C-suite leader has read something on LinkedIn. The board is asking questions. And suddenly the whole organisation is in motion, evaluating vendors, setting up proofs of concept, budgeting for GPU compute, before anyone has stopped to ask the most basic question.

Why are we doing this?


The prescription without a diagnosis

My favourite analogy for this is a doctor who prescribes medicine before asking what hurts. You walk in with a headache and they hand you something for a stomach ache. Technically it is medicine. Technically it might even be a good product. But it is solving the wrong problem entirely.

That is what most AI initiatives look like from the inside.

Simon Sinek talks about starting with why. Most organisations starting an AI programme start with how. The LLM is the how. The deployment architecture is the how. The vendor selection is the how. The why, what problem we are actually solving and what outcome we are chasing, is assumed rather than defined.

Assumptions at the start of a transformation programme are just delayed failures.


AI might not even be the answer

This is the thing nobody says out loud in vendor meetings.

Sometimes the problem you have does not need AI. It needs a better process. Or a simple automation script. Or someone to actually fix the broken workflow that has been held together with spreadsheets and goodwill for six years.

AI has become the answer people reach for before they understand the question. And the cost of that reflex is real. A proof of concept feels cheap until you start thinking about what production actually looks like. Token costs, inference replicas, data partitioning, hosting at enterprise scale, none of that is free. Microsoft, Anthropic and OpenAI are not running charities. The economics that work in a POC can break badly at enterprise scale if you have not done the maths up front.

The technical choices, which LLM to use, how to deploy it, how to secure it, how to fine-tune it, all of that you can learn. There is no shortage of guides, courses, and documentation. It is genuinely learnable.

But clarity about the business use case? That is on you. Nobody can hand you that from the internet. It requires someone who understands the business deeply enough to ask uncomfortable questions before the build starts.


The two technical problems that will kill you anyway

Even when the use case is clear, most AI projects run into the same two walls.

The first is broken process. AI does not fix a bad process. It accelerates it, which usually means it makes the problems worse faster. If the underlying workflow is unclear, inconsistent, or owned by nobody, the AI layer will expose every crack that was previously hidden by human judgment and manual intervention.

The second is bad data. This one is not new. It has been the graveyard of data warehouse projects, analytics initiatives, and CRM rollouts for thirty years. AI is no different. A beautiful model trained on dirty, incomplete, or poorly governed data will give you confident, fast, wrong answers. And confident wrong answers at scale are more dangerous than slow right ones.

Both of these problems are fixable. But only if you plan to fix them. If you assume the data is good enough, or that the process will sort itself out after the AI is live, the project will fail. The POC will look like a success. The production rollout will not.


What actually needs to happen first

Before the vendor evaluation. Before the architecture design. Before the proof of concept, someone needs to sit with the business and work through these questions honestly:

  • Why are we doing this? What decision are we trying to make better, or what task are we trying to do faster? Why does it matter to the organisation right now?
  • Is AI actually the right tool? Would a simpler automation solve this? Would fixing the underlying process solve this? Would a basic rule-based system be good enough?
  • What does the data look like? Who owns it? Is it clean, complete, and governed well enough to trust at scale?
  • What will it actually cost in production? A POC running a few hundred prompts a day feels cheap. But token costs, inference replicas, data partitioning, and model hosting at enterprise scale add up fast. Model that out before you commit, not after (see the sketch after this list).
  • What does success look like in six months? Not in a demo. In production, with real users, real edge cases, and real data.
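
To make that cost question concrete, here is a minimal back-of-envelope sketch in Python. Every number in it is an assumption I have made up for illustration, the traffic volumes, the token counts, and the per-token prices alike. Treat it as a template to fill in with your own provider's rate card and your own usage estimates, not as a quote.

    # Back-of-envelope monthly cost model for an LLM workload.
    # Every figure here is an illustrative assumption, not real vendor pricing.

    AVG_INPUT_TOKENS = 1_500      # prompt plus retrieved context, per request
    AVG_OUTPUT_TOKENS = 400       # typical response length, per request

    # Hypothetical prices in USD per million tokens; check your provider's rate card.
    PRICE_PER_M_INPUT = 3.00
    PRICE_PER_M_OUTPUT = 15.00

    def monthly_token_cost(requests_per_day: int) -> float:
        """Token spend over a 30-day month at the assumed request profile."""
        daily_input = requests_per_day * AVG_INPUT_TOKENS / 1e6 * PRICE_PER_M_INPUT
        daily_output = requests_per_day * AVG_OUTPUT_TOKENS / 1e6 * PRICE_PER_M_OUTPUT
        return (daily_input + daily_output) * 30

    poc = monthly_token_cost(300)        # a pilot team poking at it
    prod = monthly_token_cost(50_000)    # assumed enterprise-wide traffic

    print(f"POC:        ${poc:,.0f}/month")
    print(f"Production: ${prod:,.0f}/month ({prod / poc:.0f}x the POC)")

Even with toy numbers the shape is clear: token spend scales linearly with traffic, so a POC that costs pocket money becomes a five-figure monthly line item in production. And this sketch covers tokens only, before inference replicas, hosting, and data infrastructure enter the picture.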

These are not technical questions. They are business questions with technology implications. And that gap, between what the business needs and what the technology can do, is exactly where things go wrong.


This is where Enterprise Architects make the difference

Most AI projects sit in a dangerous middle ground. The technical team is excited and moving fast. The business stakeholders are nodding along without fully understanding the implications. And nobody is asking the hard questions that sit across both worlds.

That is the space an experienced EA occupies.

I have spent two decades sitting between over-eager technical teams and less tech-savvy business leaders. Not slowing things down, but making sure the thing being built is actually the right thing. That is a different skill from engineering and a different skill from strategy. It is the ability to translate, challenge, and connect the dots before the organisation has spent six months heading in the wrong direction.

In an AI engagement, that means:

  • Challenging the use case before the architecture starts, because fixing the why costs nothing, fixing a live system costs a lot
  • Running a realistic cost model before the board approves the budget, not after the first invoice arrives
  • Identifying the data and process gaps that will kill the project in production even if the POC looks great
  • Keeping the technical team grounded in business outcomes while keeping the business team honest about what AI can and cannot do
  • Designing for auditability, compliance, and governance from day one, not as an afterthought when the regulators ask questions

The technology is learnable. The judgment about when to use it, how to size it, and what to fix first, that takes experience you cannot shortcut.


Subbu Lakshmanan

Enterprise Architect and AI advisor based in Melbourne, working with organisations across Australia and globally. 20+ years helping enterprises make better technology decisions in complex environments. Connect on LinkedIn