Strong Data: The Cornerstone of Effective AI
- Matthias Leybold

- Apr 2
Updated: Apr 10
Many leadership teams today are frustrated by the gap between AI’s potential and its actual bottom-line impact. In my experience, this gap rarely comes down to the AI itself. It is the result of unresolved Data Debt.
Returning to the 2022 Enterprise Data Strategy (EDS) foundations is how organisations move beyond experimentation and into execution. When those foundations are strong, they create a high-fidelity environment. One where agents can make complex, autonomous decisions with a level of nuance comparable to your best people.
In 2022, when I co-authored “One Data Strategy to Rule Them All” at PwC, the focus was on the volume and promise of “Big Data.” Our argument was simple: without a unified Enterprise Data Strategy, organisations would remain stuck in data swamps. Rich in information, but poor in execution.
Four years later, the context has changed. The question is no longer how humans extract insight from data. It is how machines act on it.
As I explore in my recent work, “One Agent to Rule Them All,” we are entering the era of the autonomous enterprise. But the underlying constraint has not changed.
An agent is only as reliable as the data it depends on. No cohesive data strategy, no scalable agent.
The EDS framework: A 2022 foundation for a 2026 reality
The original whitepaper outlined five dimensions of a mature data organisation. At the time, they were strategic ambitions. Today, they are baseline requirements for anyone serious about agentic AI.
1. Data Governance: Defining the Boundaries of Autonomy
2022: Governance meant defining access. Roles, responsibilities, catalogues. The goal was trust.
2026: The scope has expanded. You are no longer governing people alone. You are governing non-human actors.
What matters now
If fiduciary limits for agents are unclear, governance is performative. A modern Enterprise Data Strategy needs explicit agentic permissioning. Clear boundaries within which an agent can act, and just as importantly, where it cannot.
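To make "agentic permissioning" concrete, here is a minimal sketch. Every name and limit in it is illustrative, not a prescription: the point is that the boundary is an explicit, machine-checkable policy, with anything outside it escalated to a named human owner.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    """Illustrative permission boundary for one autonomous agent."""
    allowed_actions: frozenset  # actions the agent may take on its own
    spend_limit_eur: float      # fiduciary ceiling per transaction
    escalate_to: str            # human owner for out-of-bounds requests

def authorise(policy: AgentPolicy, action: str, amount_eur: float = 0.0) -> str:
    """Return 'allow' if the action sits inside the policy, else 'escalate'."""
    if action in policy.allowed_actions and amount_eur <= policy.spend_limit_eur:
        return "allow"
    return "escalate"

# Hypothetical procurement agent: may reorder and request quotes, nothing more.
procurement_agent = AgentPolicy(
    allowed_actions=frozenset({"reorder_stock", "request_quote"}),
    spend_limit_eur=5_000.0,
    escalate_to="head_of_procurement",
)

print(authorise(procurement_agent, "reorder_stock", 1_200.0))   # allow
print(authorise(procurement_agent, "approve_contract", 500.0))  # escalate
```

The design choice that matters is the default: anything not explicitly allowed escalates, rather than anything not explicitly forbidden being allowed.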
2. Data Architecture & Quality: Engineering for Meaning
2022: Architecture was about moving away from fragmented, accidental IT landscapes.
2026: The shift is from clean data to usable meaning.
What matters now
Agents do not operate on rows and tables. They operate on relationships and context. Without a semantic layer that connects data across the enterprise, reasoning breaks down. What matters is not cleanliness alone, but whether the data can be understood.
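A semantic layer can be sketched in a few lines. The entities and relations below are hypothetical, and real implementations use graph databases or ontology tooling, but the principle is the same: typed relationships let an agent answer a cross-domain question that no single clean table holds.

```python
# Hypothetical, minimal semantic layer: entities plus typed relationships,
# so an agent reasons over connections rather than raw rows.
semantic_layer = {
    ("customer:4711", "placed", "order:998"),
    ("order:998", "contains", "product:SKU-42"),
    ("product:SKU-42", "supplied_by", "supplier:acme"),
}

def related(entity: str, relation: str) -> set:
    """All entities reachable from `entity` via `relation`."""
    return {obj for subj, rel, obj in semantic_layer
            if subj == entity and rel == relation}

# "Which supplier is behind this customer's order?" spans three domains:
order = related("customer:4711", "placed").pop()
product = related(order, "contains").pop()
print(related(product, "supplied_by"))  # {'supplier:acme'}
```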
3. Data Security & Privacy: From Control to Traceability
2022: The focus was classification, categorisation, and regulatory compliance.
2026: The focus is traceability. Every decision needs a lineage.
What matters now
Under frameworks like the EU AI Act, it is no longer enough to control access. You need to explain outcomes. Organisations that can trace every autonomous action back to its source will move faster in regulated environments.
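What "every decision needs a lineage" means in practice can be shown with a minimal sketch. The agent and source names are invented, and a production system would write to a durable, append-only audit store, but the shape is the point: each autonomous action carries pointers to the data and model version that produced it.

```python
import time
import uuid

lineage_log: list = []  # illustrative in-memory store; in practice a durable audit table

def record_decision(agent_id: str, action: str, sources: list, model_version: str) -> dict:
    """Log one autonomous action with the data and model behind it,
    so the outcome can later be explained and defended."""
    entry = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent": agent_id,
        "action": action,
        "data_sources": sources,        # provenance: which datasets fed the decision
        "model_version": model_version,
    }
    lineage_log.append(entry)
    return entry

# Hypothetical pricing agent applying a discount:
entry = record_decision(
    "pricing-agent", "discount_applied",
    ["crm.customers@2026-01-15", "sales.orders@live"], "v3.2",
)
print(entry["data_sources"])
```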
4. Data Processes & Tools: From Storage to Context
2022: Structured operating models and disciplined use cases were the priority.
2026: Static storage is no longer sufficient. Context has to be live.
What matters now
Agents act in the present. If the data they rely on is stale, the decision will be too. The shift is toward real-time context injection. Systems that feed agents with what is true now, not what was true yesterday.
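One way to sketch real-time context injection is as a freshness contract: the agent prefers a live read, and a cached value is acceptable only within an explicit staleness budget. The budget and field names below are purely illustrative.

```python
import time

MAX_STALENESS_S = 60.0  # illustrative freshness budget for this decision type

def inject_context(fetch_live, cached: dict) -> dict:
    """Prefer a live read; fall back to the cache only while it is still fresh.
    `cached` carries an `as_of` timestamp alongside its payload."""
    try:
        return fetch_live()
    except ConnectionError:
        if time.time() - cached["as_of"] <= MAX_STALENESS_S:
            return cached
        raise RuntimeError("context too stale for an autonomous decision")

# Live source available: the agent acts on what is true now, not yesterday's cache.
print(inject_context(lambda: {"stock": 12}, {"as_of": 0, "stock": 9}))  # {'stock': 12}
```

The notable choice is the failure mode: when both the live source and a fresh cache are unavailable, the agent refuses to decide rather than deciding on stale data.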
5. Culture & People: Governing a Hybrid Workforce
2022: The ambition was data literacy across the organisation.
2026: The challenge is accountability.
What matters now
Leadership is no longer just enabling data-driven decisions. It is overseeing a workforce that includes autonomous systems.
That changes the role of the Board. It is no longer support. It is governance.
What this means for leaders in 2026
If these shifts hold, most organisations are not facing an AI capability gap. They are facing a readiness problem.
The question is no longer where AI can be applied. It is whether the organisation is structured to let it act.
Are your data foundations strong enough for autonomous decisions?
Do your agents operate within clear risk boundaries at scale?
Can you explain and defend their decisions?
In an agentic enterprise, failure rarely sits in the model. It sits in the foundation.
Many organisations are still optimising for pilots. Others are investing in more powerful models while their data landscape remains fragmented and inconsistent. The pattern is familiar. Early success, limited scale, and frustration once complexity sets in.
There is also a broader structural shift underway. As Matt Labovich recently put it, agentic AI does not scale through prompts. It scales through a harness.
A structured environment where AI operates within the organisation’s methods, standards, and governance. That is the missing layer in many organisations today. AI is being added on top, rather than built in.
This shift is now reaching the boardroom. For Non-Executive Boards in particular, the question is no longer whether AI is being used, but whether it is being governed in a way that enables the business to move forward with confidence.
What does oversight look like when decisions are made autonomously? Where does accountability sit when outcomes are generated by agents rather than individuals? And how do you exercise fiduciary duty over systems that evolve over time?
But governance alone is not enough.
The role of the NEB is not only to control risk, but to challenge management, to ask the harder questions, and to actively encourage the responsible use of AI in driving business transformation.
In practice, this creates a dual mandate. Provide assurance where needed, while also creating the conditions in which AI can be deployed with intent and at scale. Many boards are still developing the language, frameworks, and confidence to do both.
This is where a balance becomes critical.
On one side, caution. The need for clear guardrails, traceability, and control. On the other, courage. The willingness to move beyond pilots and embed AI into how the business actually operates.
Most organisations today lean too far in one direction. Either experimenting without structure, or hesitating because the structure is not yet in place.
The competitive edge lies in managing both, deliberately.
Which brings us back to data. The harness only works if the underlying data is coherent, connected, and usable. Without that, there is nothing for the agent to anchor to, and nothing for the organisation to scale.
Most organisations are not missing better models. They are missing the environment in which those models can operate.
Conclusion
At Aspireon, I work with leaders to address that gap. Not by focusing on the agentic rooftop, but by strengthening the foundations that make it viable in the first place.
Because in the end, AI will not differentiate your business. Your Data Strategy, and your ability to execute on it, will.
And increasingly, this comes down to balance.
The caution to build the right foundations, governance, and controls.
The courage to move beyond pilots and let AI operate where it creates real value.
Those who manage both will define the competitive edge in 2026.