
The AI Readiness Checklist: 12 Questions Before You Build


There’s a pattern we see with organizations starting their AI journey: a leadership team gets excited about AI, greenlights a project, and six months later the initiative is stalled—not because the technology didn’t work, but because foundational questions were never asked.

These twelve questions aren’t a formality. They’re the difference between a project that ships and one that becomes a cautionary tale in next year’s strategy review.

The Business Case Questions

1. What specific business problem are you solving?

“We want to use AI” is not a problem statement. “Our support team spends 60% of their time answering the same 50 questions” is. The more specific the problem, the easier it is to build, measure, and declare success. If you can’t describe the problem without mentioning AI, you’re starting from the wrong end.

2. How are you solving this problem today?

Understanding the current process—its costs, its failure modes, its workarounds—is essential. You need a baseline to measure against, and you need to understand the implicit requirements that the current process handles. The manual process might be slow, but it also might handle edge cases and exceptions in ways you’ll need to replicate.

3. What does success look like in numbers?

Define your metrics before you build. Is success a 50% reduction in processing time? A 30% decrease in error rate? A measurable improvement in customer satisfaction? Vague goals like “improve efficiency” make it impossible to know if you’ve succeeded and make it easy for scope to expand indefinitely.

4. Who owns this initiative and has authority to ship it?

AI projects that lack clear ownership stall at decision points. Someone needs to have the authority to make trade-offs, approve releases, and resolve disagreements between stakeholders. If the answer is “a committee,” be prepared for slow progress.

The Data Questions

5. Where is the data you need, and can you access it?

AI runs on data. Where does yours live? Is it in a database, a document repository, spreadsheets, or people’s heads? Can your technical team access it programmatically, or does it require manual exports and approvals? Data access issues are the single most common blocker we see—not because the data doesn’t exist, but because getting to it is harder than expected.

6. How clean is that data?

Be honest. If your customer records have duplicate entries, your product documentation is three versions behind, or your knowledge base has gaps that everyone works around, those problems will transfer directly to your AI. The model will learn from what you feed it—including the mistakes. You don’t need perfect data, but you need to know where the problems are.
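To make “know where the problems are” concrete, a quick audit is often enough. The sketch below is one rough way to do it, assuming tabular records in a CSV; the column names (email, last_updated) and the staleness cutoff are hypothetical stand-ins for whatever your data actually contains.

```python
# A rough data-quality audit sketch (pandas; column names are hypothetical).
# The goal is a snapshot of where the problems are, not a perfect dataset.
import pandas as pd

def audit_customer_records(path: str) -> dict:
    df = pd.read_csv(path)
    return {
        "rows": len(df),
        # Duplicate entries: the same email appearing more than once.
        "duplicate_emails": int(df["email"].duplicated().sum()),
        # Gaps: the fields everyone "works around" show up as missing values.
        "missing_by_column": df.isna().sum().to_dict(),
        # Staleness: records not touched since an (illustrative) cutoff date.
        "stale_records": int((pd.to_datetime(df["last_updated"]) < "2023-01-01").sum()),
    }

if __name__ == "__main__":
    print(audit_customer_records("customer_records.csv"))
```

Even a report this crude tells you whether duplicates, gaps, or stale records are the bigger risk, and that shapes what you fix before the model ever sees the data.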

7. Are there privacy or compliance constraints on this data?

Personal information, health records, financial data, and proprietary business information all carry restrictions. Know what you’re working with before you start building. Retrofitting privacy controls into a deployed system is significantly more expensive and risky than building them in from the start.

The Organization Questions

8. Does your team have the skills to build and maintain this?

Be realistic about your team’s current capabilities. Building an AI system requires different skills than maintaining one. You might bring in external help for the build, but someone internal needs to understand, monitor, and adjust the system after launch. If nobody on your team can troubleshoot the AI when it misbehaves, you have a sustainability problem.

9. Who will be affected by this change, and are they on board?

AI changes how people work. The support team whose tickets are being automated, the analysts whose reports are being generated, the managers whose approval workflows are being streamlined—these people need to be involved early. Resistance from affected teams is a leading cause of AI project failure, and it’s almost always preventable with early engagement.

10. What’s your governance plan?

Who approves model deployments? Who monitors for quality issues? What happens when the AI makes a mistake? How do you handle bias or fairness concerns? Governance doesn’t have to be bureaucratic, but it does need to be defined. Organizations that figure this out after a public incident spend far more than those that plan ahead.

The Execution Questions

11. What’s your timeline and budget, realistically?

A focused AI project typically takes 4-12 weeks for an initial deployment, depending on complexity. If someone is promising results in two weeks or planning an 18-month roadmap before any deployment, recalibrate. Budget should account for ongoing costs—API usage, monitoring, maintenance—not just the initial build.
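For the ongoing-cost side, a back-of-the-envelope estimate is usually enough to keep the budget honest. Every figure below is an illustrative placeholder, not a benchmark; substitute your own volumes, prices, and rates.

```python
# A back-of-the-envelope ongoing-cost estimate.
# All figures are illustrative placeholders; replace them with your own.

MONTHLY_REQUESTS = 50_000        # expected requests per month
TOKENS_PER_REQUEST = 2_000       # prompt + response, rough average
PRICE_PER_1K_TOKENS = 0.01       # USD, depends on the model you choose

MONITORING_AND_HOSTING = 300     # USD/month for logging, dashboards, infra
MAINTENANCE_HOURS = 10           # hours/month of someone's time after launch
HOURLY_RATE = 120                # USD/hour

api_cost = MONTHLY_REQUESTS * TOKENS_PER_REQUEST / 1_000 * PRICE_PER_1K_TOKENS
people_cost = MAINTENANCE_HOURS * HOURLY_RATE
total_monthly = api_cost + MONITORING_AND_HOSTING + people_cost

print(f"API usage:        ${api_cost:,.0f}/month")
print(f"Monitoring/infra: ${MONITORING_AND_HOSTING:,.0f}/month")
print(f"Maintenance:      ${people_cost:,.0f}/month")
print(f"Total ongoing:    ${total_monthly:,.0f}/month")
```

The point of the exercise isn’t the exact total; it’s noticing that the system keeps costing money every month after the build is “done.”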

12. How will you handle the AI being wrong?

Because it will be wrong sometimes. Every AI system produces errors, hallucinations, or unexpected outputs. The question isn’t whether this will happen but how you’ll catch it and what happens when you do. Build error handling, human escalation paths, and quality monitoring into your plan from the beginning.
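In practice, this can start as something as simple as a confidence gate with a human fallback. The sketch below is illustrative only, assuming your model call returns a confidence score; run_model, looks_valid, and escalate_to_human are hypothetical stand-ins for whatever your system actually calls.

```python
# A minimal escalation sketch: if the model's answer is low-confidence or
# fails a basic validation check, route it to a human instead of shipping it.
# run_model, looks_valid, and escalate_to_human are hypothetical stand-ins.
import logging

CONFIDENCE_THRESHOLD = 0.8
logger = logging.getLogger("ai_quality")

def answer_with_escalation(question: str) -> str:
    result = run_model(question)  # assumed to return .text and .confidence

    if result.confidence < CONFIDENCE_THRESHOLD or not looks_valid(result.text):
        # Log every escalation so quality monitoring can spot patterns
        # in what the AI gets wrong.
        logger.warning("Escalating low-confidence answer: %s", question)
        return escalate_to_human(question, draft=result.text)

    return result.text
```

The details will differ, but the shape rarely does: a threshold, a check, a path to a person, and a log that someone actually reviews.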

Using This Checklist

You don’t need perfect answers to all twelve questions before starting. But you need honest answers. A “we don’t know yet” is far better than an assumption that turns out to be wrong three months into development.

The questions you can’t answer are often the most valuable ones to identify. They point to the work that needs to happen before the technical build begins: stakeholder alignment, data preparation, governance planning. That groundwork will do more to determine whether the project succeeds than any technical decision.

If most of these questions have clear answers and the problem is specific and well understood, you’re probably ready to build. If several are unanswered, or the answers reveal significant gaps, an AI readiness assessment can help you close those gaps efficiently before committing to implementation.

The goal isn’t to create barriers to getting started. It’s to make sure that when you do start, you’re building something that will actually ship, work, and deliver value.

