AI as the primary engine for progress

A year of inflection.


By Dr Shaun Conway

AI has become the new Rorschach test, challenging us to say what we see in the picture that is emerging.

Some people see an existential threat. Others see a gold rush. I see something more basic and more demanding:

AI is becoming the primary engine for progress.

Not because it is magic, or conscious, or destined. But because it is the first technology that can work with us at the level of intentions, decisions, and complex systems – not just at the level of data storage or transaction processing.

This past year, across the IXO ecosystem and in building Qi as an intelligent human–AI cooperating system, we have started to see this shift become very real.

Progress is not just more output

When I say “progress”, I don’t mean more GDP, more apps, or more content.

Progress, for me, means:

  • More people able to act with agency and good information
  • More resources flowing to what actually works
  • More feedback loops that reward integrity, not spin
  • More systems that regenerate the foundations they depend on: ecosystems, communities, health

The Internet of Impacts that we’ve been building at IXO for over a decade only makes sense if these forms of progress are measurable and improvable.

AI is finally powerful enough to help us do that – if we use it in the right way.

From “AI tools” to a cooperating system

Most people still think of AI as a tool you “use”: you ask a model a question, it answers. You generate a slide deck. You automate a support workflow.

That’s useful, but it’s not transformative.

Our vision for Qi is different: we are building a human–AI intelligent cooperating system.


In Qi, AI isn’t a single assistant. It’s a network of specialised agents working alongside people, each with:

  • A defined role and scope
  • Access to specific data and tools
  • Clear rules for what they are allowed to do
  • Verifiable accountability for the actions they take

You can think of it as a digital immune system for getting important work done in the real world:

monitoring agents that detect anomalies, diagnostic agents that investigate, remediation agents that act, and coordination agents that keep everyone aligned.
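The pattern above can be sketched in a few lines. This is a minimal, illustrative model, not Qi's actual API: each agent carries a role, a bounded scope, explicit permissions, and an append-only action log that makes it accountable.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str                 # e.g. "monitoring", "diagnostic", "remediation"
    scope: list               # data and tools this agent may touch
    permissions: set          # actions it is allowed to take
    action_log: list = field(default_factory=list)

    def act(self, action: str, target: str) -> bool:
        """Refuse anything outside role or scope; log every attempt."""
        if action not in self.permissions or target not in self.scope:
            self.action_log.append(("denied", action, target))
            return False
        self.action_log.append(("done", action, target))
        return True

monitor = Agent(role="monitoring",
                scope=["stove-telemetry"],
                permissions={"read", "flag-anomaly"})

assert monitor.act("read", "stove-telemetry")           # within scope: allowed
assert not monitor.act("remediate", "stove-telemetry")  # outside role: refused
```

The point of the sketch is that accountability is structural: the agent cannot act outside its declared role, and every attempt, allowed or denied, leaves a record.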

This year, a lot of our team's energy has gone into making this architecture real – not as a slide, but as trust infrastructure that can be used in climate finance, pathogen genomics, creating economic opportunities for young people, and beyond.

Our progress this year

I’ll share a few examples, without pretending they are finished. They are scaffolding for something much larger, but the patterns are emerging.

Intelligence for clean cooking and climate

In Zambia, we are working with partners to replace charcoal with modern energy cooking.

On the surface, this looks simple: stoves, pellets, households.

Underneath, there is real complexity as these supply chains and networks scale:

  • Time-series data from millions of devices
  • Probabilistic models of stove usage and emissions reductions
  • Financial flows from investors to stove providers to households
  • Article 6.2 governance and ITMO integrity requirements
  • Social outcomes around health, nutrition, and youth development
  • Benefit transfers back to households, through digital vouchers

This year, we’ve advanced from seeing these as separate problems to treating them as one coherent system:

  • Agentic Oracles that evaluate digital claims about stove usage and outcome performance
  • dMRV models that continuously update a P-value of “confidence of impact” for each device
  • Smart account structures that can route value without exposing users to regulatory risk
  • Governance patterns that keep ECS and other partners insulated from the crypto plumbing, while benefiting from its assurances

AI is the engine that makes this tractable. It learns from the streams of sensor data, links them to causal models of impact, and helps determine not only “what happened” but “how confident we are that this helped”.

Without AI, this system would collapse under its own complexity. With well-governed intelligent flows, it becomes controllable, manageable and improvable.
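The continuous updating of per-device confidence can be illustrated with a minimal Bayesian sketch. The real dMRV models are far richer; a Beta-Bernoulli update is just the simplest way to show how confidence moves as telemetry arrives. All numbers here are illustrative.

```python
# Minimal Bayesian sketch: a per-device "confidence of impact" score
# that updates as usage observations stream in. Beta(alpha, beta) prior,
# Bernoulli observations (True = stove used as intended).

def update_confidence(alpha: float, beta: float, observations: list):
    for used in observations:
        if used:
            alpha += 1
        else:
            beta += 1
    confidence = alpha / (alpha + beta)   # posterior mean
    return alpha, beta, confidence

# Start from an uninformative prior, then fold in a day of telemetry.
alpha, beta, conf = update_confidence(1.0, 1.0, [True, True, False, True])
print(round(conf, 2))  # → 0.67 after 3 of 4 positive observations
```

Each new observation nudges the posterior, so confidence is never a one-off verdict but a living estimate that strengthens or weakens with the evidence.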

Revolutionizing Clean Cooking for 2.8 Billion People
Discover innovative solutions for scaling access to modern energy cookstoves, transforming lives, and promoting health while combating climate change.

Pathogen genomics and trustworthy health intelligence

With PathGen, our partners in public health are facing a different kind of complexity:

  • Genomic sequences
  • Hospital and lab data
  • Cross-border data-sharing constraints
  • Sovereignty and trust issues between countries
  • High-stakes policy decisions that must be explainable

This year, we advanced PathGen from idea to a functioning proof-of-concept platform where:

  • Intelligence is federated, not centralised
  • Authorisations are formally modelled and verifiable
  • AI models can be audited and attested using cryptographic proofs
  • IXO’s infrastructure acts as an assurance and governance layer around the core analytics

Again, AI is not just another module. It is the engine that turns raw, distributed data into actionable, explainable guidance – without requiring countries to give up sovereignty.
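One auditability primitive mentioned above, attesting to the exact model artifact that produced a result, can be sketched by content-addressing the model. The field names and IDs below are illustrative; the real attestation format belongs to the IXO infrastructure.

```python
import hashlib

# Sketch: bind a result to the exact model bytes that produced it,
# so any later substitution or tampering is detectable.

def attest_model(model_bytes: bytes, model_id: str) -> dict:
    digest = hashlib.sha256(model_bytes).hexdigest()
    return {"model_id": model_id, "sha256": digest}

def verify_model(model_bytes: bytes, attestation: dict) -> bool:
    return hashlib.sha256(model_bytes).hexdigest() == attestation["sha256"]

weights = b"...serialized model weights..."
att = attest_model(weights, "pathgen-classifier-v1")  # hypothetical model ID

assert verify_model(weights, att)             # untouched artifact verifies
assert not verify_model(weights + b"x", att)  # any change is detectable
```

In production this hash would itself be signed and anchored, but even the bare digest shows the principle: the audit trail points at one specific, immutable artifact.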

PathGen Public Preview
How IXO is helping to build Asia’s Sovereign Intelligence Layer for Infectious Disease Control

Read about the PathGen preview held on 1 December in Singapore, attended by government and regional partners, and covered by Channel News Asia.

Tokenised rights and real-world assets

On the financial side, we’ve deepened our design work around:

  • Tokenised rights to Outcome Units (such as Carbon Credits and ITMOs)
  • Physical infrastructure assets, and commodities
  • Stable value instruments for climate-linked financing

Here, AI’s role is less visible but no less critical:

  • Modelling risk and scenarios for different collateral structures
  • Stress-testing mechanisms for over-collateralisation and redemption
  • Analysing market behaviour across chains and venues
  • Supporting decision-making for treasury, governance, and liquidity management

We are not deploying black-box algorithms to “optimise yield”. We are using AI and formal reasoning tools to understand the behaviour of complex financial structures before they touch the real economy, and to keep them within bounds that are defensible to regulators and investors.
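Stress-testing an over-collateralisation mechanism can be as simple as shocking the collateral value many times and counting breaches of the minimum ratio. The parameters below are illustrative, not a real IXO instrument.

```python
import random

# Toy Monte Carlo stress test: apply Gaussian price shocks to collateral
# and estimate the probability of falling below the liquidation floor.

def breach_probability(collateral: float, debt: float, min_ratio: float,
                       shock_sd: float, trials: int = 10_000) -> float:
    random.seed(42)  # reproducible runs
    breaches = 0
    for _ in range(trials):
        shocked = collateral * max(0.0, 1.0 + random.gauss(0.0, shock_sd))
        if shocked / debt < min_ratio:
            breaches += 1
    return breaches / trials

# 150% collateralised, 120% liquidation floor, 15% price volatility.
p = breach_probability(collateral=150.0, debt=100.0,
                       min_ratio=1.2, shock_sd=0.15)
print(f"breach probability ≈ {p:.1%}")
```

Even a toy like this makes the trade-off concrete: tightening the floor or raising volatility moves the breach probability in ways that can be shown to regulators and investors before any capital is at risk.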

Governance, assurance, and real accountability

Progress this year hasn’t only been technical. It has also been about governance, assurance, and discipline:

  • Achieving ISO-27001 certification of our information security and privacy systems
  • Designing assurance wrappers for Oracles, so third-party AI services can be trusted without IXO having to “own” their risks
  • Clarifying how digital rights, claims, and credentials interact across jurisdictions

Here, AI again acts as an engine, not a toy. It helps us:

  • Track, analyse, and document risks and controls
  • Formalise policies into machine-readable rules
  • Simulate the impacts of different governance choices
  • Provide traceability of decisions over time
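"Formalise policies into machine-readable rules" can be sketched by expressing each policy as a predicate over a proposed action, so it can be checked, simulated, and traced automatically. The rules and field names here are invented for illustration.

```python
# Sketch: governance policies as named, machine-checkable predicates.
# Every evaluation returns a trace, so decisions are explainable later.

POLICIES = [
    ("max-single-transfer", lambda a: a["amount"] <= 10_000),
    ("approved-jurisdiction", lambda a: a["jurisdiction"] in {"ZM", "SG"}),
]

def evaluate(action: dict):
    """Return (allowed, trace); the trace records every rule's verdict."""
    trace = [(name, rule(action)) for name, rule in POLICIES]
    return all(ok for _, ok in trace), trace

ok, trace = evaluate({"amount": 5_000, "jurisdiction": "ZM"})
assert ok
ok, trace = evaluate({"amount": 50_000, "jurisdiction": "ZM"})
assert not ok and trace[0] == ("max-single-transfer", False)
```

Because every decision carries its trace, the same structure that enforces a policy also documents why an action was allowed or refused.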

The theme across all of this:

AI enables us to operate more complex, more accountable systems than would be possible with human capacity alone.

Why I see AI as the primary engine for progress

There are three reasons I am comfortable making this claim:

  1. AI can operate at the level of intentions and outcomes

Traditional IT systems are good at recording what happened. They are bad at understanding what we were trying to achieve.

In the IXO ecosystem, everything starts with intents and claims:

  • An intent to fund a stove that reduces emissions
  • An intent to share genomic intelligence across borders for outbreak control
  • An intent to employ young people in meaningful work
  • An intent to issue or retire a unit of impact

AI can live in that space between intent and outcome:

  • Interpreting messy, real-world contexts
  • Deciding actions that move us closer to the intended outcome
  • Updating beliefs as new data arrives
  • Providing explanations that humans can interrogate

This is where real progress happens.

  2. AI can navigate complexity that humans can’t hold in their heads

Our problems are now systemic by default: climate, health, finance, governance.

You cannot reason properly about:

  • Millions of sensor streams
  • Multi-jurisdictional regulations
  • Causal graphs of interventions and side-effects
  • Market dynamics across multiple chains

without some form of machine intelligence that can work reliably with uncertainty, probabilities, and continuous feedback.

The point is not that AI replaces human judgement. It’s that human judgement without AI is now, in many domains, structurally under-powered.

  3. AI can be made accountable, if we design for this

There is a common story that AI is inherently untrustworthy because it “hallucinates”.

What I’ve learned this year is that hallucinations are not the core problem. Unaccountable systems are.

Your Agent Did What!?
If agents are going to act for us, their decisions must be visible, justified, and verifiable.

The IXO trust stack and Qi intelligent cooperating system address this head-on:

  • Every significant agent action can be turned into a verifiable claim
  • Authorisations can be expressed in formal, machine-checkable logic
  • Outcomes can be tied to causal models, not just correlations
  • Governance can be encoded as executable rules, not just policies sitting in PDFs

If we design AI systems to be observable, constrained, and auditable, they can become more accountable than many human-only systems we rely on today.
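"Every significant agent action can be turned into a verifiable claim" can be sketched as: serialise the action, sign it, and let anyone holding the key verify it later. In production this would use asymmetric signatures and decentralised identifiers; an HMAC keeps the sketch self-contained, and all names below are illustrative.

```python
import hmac, hashlib, json

KEY = b"agent-signing-key"  # illustrative shared secret

def make_claim(agent_id: str, action: str, outcome: str) -> dict:
    """Serialise an agent action deterministically and sign it."""
    payload = json.dumps({"agent": agent_id, "action": action,
                          "outcome": outcome}, sort_keys=True)
    sig = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_claim(claim: dict) -> bool:
    expected = hmac.new(KEY, claim["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])

claim = make_claim("oracle-7", "evaluate-stove-usage", "verified")
assert verify_claim(claim)
claim["payload"] = claim["payload"].replace("verified", "rejected")
assert not verify_claim(claim)  # tampered claims fail verification
```

The design choice worth noting: the claim is the unit of accountability. Nothing about the agent needs to be trusted up front; its record either verifies or it doesn't.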

Where this takes us to next

There is much to be done. In some ways it still feels like we are only just getting started; at the same time, we are seeing strong traction and growing potential for IXO's software across many more use cases.

Qi as a cooperating system is not finished. The Internet of Impacts is not “done”. Our models, protocols, and governance structures will keep evolving.

But this year has convinced me of one thing:

If we want climate action that isn’t performative, public health systems that aren’t reactive, and economic systems that don’t eat their own foundations, we will need AI to be the primary engine that drives, coordinates, and tests our progress.

Not the only engine. But the primary one.

Questions I’m sitting with, going into 2026

  • Where are we still treating AI as a convenient assistant, instead of giving it the harder role of testing our assumptions and surfacing uncomfortable truths?
  • In which parts of our systems are human bottlenecks still causing avoidable harm or delay – and how could accountable AI agents relieve that pressure without eroding responsibility?
  • If AI is becoming the primary engine for progress, what progress are we actually pointing it at – and who gets to decide what “progress” means?

If we don’t answer those questions with care, someone else will answer them for us. And their definition of progress may not look anything like the future we’re trying to build.

Define progress with Qi

Qi is for anyone building or funding something that involves people and AI working together:
- Founders use it to plan and prove progress.
- Investors use it to track outcomes with evidence.
- Developers use it to orchestrate agent workflows that are verifiable and secure.

Early Access
