Intelligence to Do and Learn
The release of OpenAI's Operator is yet another significant milestone in the evolution of AI agents. As an autonomous tool capable of performing complex web-based tasks through its own browser interface, Operator signals a shift from passive AI assistants to proactive digital collaborators that can take on knowledge work and actions in our everyday human domains. Operator introduces new possibilities for automation, accessibility, and efficiency. However, its launch also raises critical questions about safety, accountability, and the balance between autonomy and control.
Against this backdrop, I want to explore the trajectory of AI agents and their growing autonomy, and to offer some thoughts on how frameworks such as our IXO Intelligent Oracles and Cognitive Digital Twin systems could shape a future in which agents capable of knowing, learning, and acting take on a growing role in solving real-world problems.
Knowing What We Know
The concept of knowledge-based agents, as popularised by Stuart Russell and Peter Norvig in Artificial Intelligence: A Modern Approach (2021), once offered a tidy paradigm: agents perceive their environment, make decisions, and act on those decisions in a neat, procedural loop. That framework worked well for relatively static conditions where rules could be laid out, updated occasionally, and traced by human programmers. But as AI’s scope expands into ever-messier real-world domains—where conditions shift, data is unstructured, and the demands on intelligence grow—modern agents can no longer rely solely on rigid, handcrafted knowledge bases. Instead, they must learn on the fly, discovering patterns in massive data streams, acting on what they find, and adapting their responses to new or unpredictable scenarios.
This takes us into a space of not knowing what we don't know.
Learning from What We Don't Know
Earlier AI systems relied heavily on explicit taxonomies and rules—great for traceability, but unwieldy in the face of rapid change. Each new circumstance demanded updated rules, requiring human experts to manually encode knowledge. The next generation of AI agents, by contrast, will employ learned representations. Neural networks ingest huge volumes of data, adjusting millions of internal parameters to pick up on subtle correlations and patterns—ones that might escape even the most meticulous human rule-maker. Though these representations can be opaque, they allow AI agents to adapt to shifting realities, often without direct human intervention. That adaptability is becoming a key advantage, fuelling breakthroughs in language modelling, robotics, and complex decision-making.
Reflecting the Real World
As learned representations supplant static rule sets, new approaches to gathering and interpreting real-world data have emerged—epitomised by Intelligent Oracles built using the IXO framework, which we have been developing over a number of years, guided by the intuition that we would come to live in a world fuelled by verifiable data and optimised by AI (check out our 2018 explainer video below).
The AI Oracles we are now building integrate diverse sources of biological, environmental, and socioeconomic data—ranging from pathogen genomics to climate metrics and claims about the actions of organisations and people—and feed this information into sophisticated Causal AI analyses and state-change action models. The role of an oracle here is both to provide timely, validated data to cognitive digital twin systems and to support decision-making guided by data-driven predictions about the probable outcomes of different decision paths. Cognitive Digital Twins (CDTs) in the IXO Spatial Web are virtual counterparts that mirror real-world systems, in which we can apply verified data and AI-driven logic to predict, plan, and act.
Where a traditional digital twin might model the maintenance schedule of a wind turbine, a cognitive digital twin can reason about what actions should be taken when parts are wearing out, automate the scheduling of repairs, and even learn from the data generated by other turbines to refine its predictions. IXO Oracles deliver constant streams of fresh, high-fidelity data that keep these cognitive digital twins perpetually up to date, so that strategies can be optimised in real time. Whether our goal is to achieve disease control using pathogen genomics, or to design intelligent climate financing mechanisms, the fusion of oracles and cognitive digital twins opens the door to near-autonomous problem-solving at scale.
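As a hedged illustration of that pattern, the sketch below models a hypothetical turbine twin that ingests verified readings from an oracle feed, adjusts its wear threshold using data from other turbines, and recommends when to schedule a repair. The class names, fields, and threshold values are invented for this example and are not part of the IXO SDK.

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class SensorReading:
    turbine_id: str
    vibration_mm_s: float  # bearing vibration velocity reported by the oracle
    verified: bool         # whether the oracle attested (verified) this reading


@dataclass
class TurbineTwin:
    turbine_id: str
    history: list = field(default_factory=list)
    wear_threshold: float = 7.1  # mm/s, an illustrative alarm level

    def ingest(self, reading: SensorReading) -> None:
        # Only verified oracle data is allowed to update the twin's state.
        if reading.verified and reading.turbine_id == self.turbine_id:
            self.history.append(reading.vibration_mm_s)

    def learn_from_fleet(self, fleet_histories: list) -> None:
        # Toy "fleet learning": tighten the wear threshold toward the average
        # vibration seen across other turbines' histories.
        fleet_means = [mean(h) for h in fleet_histories if h]
        if fleet_means:
            self.wear_threshold = min(self.wear_threshold, mean(fleet_means))

    def recommend_action(self) -> str:
        recent = self.history[-10:]
        if recent and mean(recent) > self.wear_threshold:
            return "schedule_repair"
        return "continue_monitoring"


twin = TurbineTwin("WT-042")
twin.ingest(SensorReading("WT-042", vibration_mm_s=8.3, verified=True))
print(twin.recommend_action())  # -> "schedule_repair"
```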
Making Sense and Responding Correctly
Behind the success of both cognitive digital twins and intelligent oracles lies a fundamental shift in how AI systems operate. Agents once functioned as sophisticated command-line helpers, good at answering questions and completing specific tasks but not inherently self-directed. Now, as AI gains more autonomy, the line between responding and acting is blurring.
Autonomous AI is both an opportunity and a threat.
Agentic AI systems can accept a high-level goal and independently break it down into sub-tasks—executing them, passing results along, and combining outcomes into a coherent solution. Foundation Capital spent much of 2024 exploring such systems, arguing that a System of Agents might replace decades of enterprise software with service-as-software: smaller, interconnected services that tackle specific tasks and hand off results to each other. Tech leaders at companies like Salesforce, HubSpot, and Microsoft have recently echoed this view, predicting that agentic systems will be the new apps in an AI-powered world.
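To make that loop concrete, here is a minimal sketch assuming a toy planner, worker, and combiner. The function names and the hard-coded decomposition are stand-ins for what an LLM-driven system would generate, not any vendor's actual API.

```python
# Minimal sketch of the agentic pattern described above: a planner splits a
# high-level goal into sub-tasks, workers execute them, and the results are
# combined into a single output. All names here are illustrative stand-ins.

def plan(goal: str) -> list:
    # In a real system an LLM-based planner would produce this decomposition.
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def execute(task: str) -> str:
    # Stand-in for a specialised worker agent or service-as-software call.
    return f"completed ({task})"

def combine(results: list) -> str:
    return "\n".join(results)

def run_agent(goal: str) -> str:
    sub_tasks = plan(goal)                      # break the goal down
    results = [execute(t) for t in sub_tasks]   # execute and pass results along
    return combine(results)                     # merge into a coherent solution

print(run_agent("prepare a quarterly vendor-spend summary"))
```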
Getting It Wrong
With increased autonomy, the question becomes: how do we prevent AI from making serious mistakes? Agents are not only self-directed but increasingly self-building, capable of generating new tools on demand. This unlocks tremendous opportunities but also introduces serious risks. The situation parallels a familiar workplace scenario: bringing in an intern. You want the intern to tackle meaningful tasks, but you don’t want them making high-stakes errors. Similarly, if we hand over critical business processes or personal finances to AI agents without guardrails, even a small oversight could escalate into costly errors or reputational damage.
At Foundation Capital’s AI Unconference last November, Yohei Nakajima, creator of BabyAGI and Pippin the Unicorn, identified four levels of autonomy for such agents, offering a framework for understanding how—and how much—AI should self-build (see the sketch after this list):
- Level 0: Basic tools with fixed capabilities. Most AI operates here, relying on a predefined function library without the ability to expand it.
- Level 1: Request-based self-building. The agent can generate new tools only when the user explicitly asks for them.
- Level 2: Need-based self-building. The agent automatically identifies gaps in its toolkit and creates new functions to address them, without waiting for a user request.
- Level 3: Anticipatory building. The agent predicts future needs and proactively evolves its own architecture or algorithms, building tools before users even realise they’re required.
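The following sketch contrasts Level 1 and Level 2 behaviour under stated assumptions: the agent class, its tool registry, and the build_tool stand-in are hypothetical, and real self-building would involve generating and vetting actual code rather than registering a placeholder function.

```python
from typing import Callable, Dict, Optional


class SelfBuildingAgent:
    def __init__(self) -> None:
        # Level 0 starting point: a fixed library of predefined tools.
        self.tools: Dict[str, Callable[[str], str]] = {
            "summarise": lambda text: text[:100],
        }

    def build_tool(self, name: str) -> None:
        # Stand-in for the agent generating new tool code for itself.
        self.tools[name] = lambda arg, _name=name: f"[{_name}] handled: {arg}"

    def handle_level_1(self, task: str, requested_tool: Optional[str]) -> str:
        # Level 1: build a new tool only when the user explicitly asks for it.
        if requested_tool and requested_tool not in self.tools:
            self.build_tool(requested_tool)
        return self.tools[requested_tool or "summarise"](task)

    def handle_level_2(self, task: str, required_tool: str) -> str:
        # Level 2: the agent itself detects the capability gap and fills it.
        if required_tool not in self.tools:
            self.build_tool(required_tool)
        return self.tools[required_tool](task)


agent = SelfBuildingAgent()
print(agent.handle_level_2("schedule turbine repair", required_tool="scheduler"))
```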
Learning To Trust
Self-building, especially at Levels 2 and 3, can yield remarkable efficiency gains but also raises the spectre of rogue behaviour. An agent tasked with increasing efficiency might overreach, creating invasive monitoring tools or excluding key vendors in ways that harm trust and partnerships. A personal assistant agent granted direct access to a user’s credit card could unwittingly fall prey to manipulation by travel sites or malicious third parties.
Nakajima’s proposed solution centres on incremental trust-building. Self-building agents should start with low-risk tasks, proving their reliability over time. As they demonstrate sound judgment, they can graduate to more complex domains—still under careful oversight. This system parallels how human interns evolve into full-time employees: by repeatedly showing they can handle responsibility without stepping outside the rules or harming the organisation.
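A minimal sketch of that graduation logic, assuming invented risk tiers and a simple streak-based rule rather than any specific product's mechanism, might look like this:

```python
RISK_TIERS = {"low": 0, "medium": 1, "high": 2}


class SupervisedAgent:
    def __init__(self) -> None:
        self.consecutive_successes = 0
        self.trust_level = 0  # starts out permitted to do only low-risk work

    def allowed(self, task_risk: str) -> bool:
        return RISK_TIERS[task_risk] <= self.trust_level

    def record_outcome(self, success: bool) -> None:
        # A streak of good outcomes earns a higher tier; any failure resets it,
        # much like an intern earning (or losing) responsibility.
        self.consecutive_successes = self.consecutive_successes + 1 if success else 0
        self.trust_level = min(2, self.consecutive_successes // 10)


agent = SupervisedAgent()
print(agent.allowed("high"))   # False: high-risk work is gated until trust is earned
for _ in range(20):
    agent.record_outcome(success=True)
print(agent.allowed("high"))   # True after a sustained track record
```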
Slowly Doing More
Enterprise software is already moving toward an agentic future. Salesforce, for instance, has deployed Einstein GPT agents to more than 150,000 companies, and Microsoft’s Copilot agents have reached over a billion Windows users. As more businesses shift from third-party AI solutions to building their own internal AI stacks, getting self-building capabilities right becomes critical. The long-term benefits—richer functionality, adaptability, and continuous learning—must be weighed against the risks of unauthorised or unintended agent actions.
Safety measures and user feedback loops play a crucial role here. Spending caps, multi-factor authentication for transactions, and guidelines for acceptable vendor choices can help AI agents avoid mistakes.
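As one hedged example of how such guardrails could be expressed in code, the policy check below assumes invented spending limits, an illustrative vendor allow-list, and an mfa_verified flag supplied by the surrounding system.

```python
from dataclasses import dataclass


@dataclass
class SpendPolicy:
    per_transaction_cap: float = 500.0
    approved_vendors: frozenset = frozenset({"acme-travel", "cloudhost"})
    require_mfa_above: float = 100.0


def authorise(policy: SpendPolicy, vendor: str, amount: float, mfa_verified: bool) -> bool:
    if amount > policy.per_transaction_cap:
        return False  # spending cap exceeded
    if vendor not in policy.approved_vendors:
        return False  # vendor is outside the acceptable-vendor guidelines
    if amount > policy.require_mfa_above and not mfa_verified:
        return False  # multi-factor authentication required for larger spends
    return True


print(authorise(SpendPolicy(), "acme-travel", 250.0, mfa_verified=True))   # True
print(authorise(SpendPolicy(), "unknown-site", 80.0, mfa_verified=False))  # False
```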
We believe that embedding agentic AI systems in Web3 architectures with software-mediated governance, accountability, and cryptographic verification mechanisms has the potential to make AI both useful and safe.
Frequent feedback from human supervisors steers agents away from suboptimal decisions. Like an intern who gradually takes on more responsibilities, an agent with training wheels can learn the nuances of corporate culture, ethical practice, and risk management.
Making an Impact
In this evolving landscape, we believe that the IXO framework for implementing Intelligent Oracles within Cognitive Digital Twin systems provides exciting potential to use data-driven AI for tackling real-world challenges, such as disease control, financing climate impact mitigation, engaging youth as a workforce in green economies, and so much more—in ways that allow us to continuously learn and adapt, with AI, to create positive impacts on society, nature, and the changing world. As more powerful foundation models enable ever more capable agentic systems to emerge, we can expect organisations to increasingly rely on self-building AI agents that will learn to manage intricate tasks with minimal oversight. The ultimate challenge is forging the right balance between autonomy and safety, ensuring these agents remain a net positive force for humanity.
Beyond What We Know Now
Can AI agents thrive on human knowledge? In one sense, yes—they still depend on knowledge, just represented in more fluid, learned ways rather than as explicit rule sets. But the coming generations of AI agents will also be more than that: self-directed, self-building, and capable of forging connections across broad data streams. We are seeing the potential of AI that is not just about retrieving facts or following static procedures, but about anticipating needs, collaborating across networks, and evolving in real time.
This new era of Agentic AI demands as much careful governance as it does technological innovation, ensuring that AI remains a responsible intern—one that grows into a seasoned professional without running wild in the process. We are exploring how to build and employ AI agents that deliver real-world impacts, using the IXO framework and infrastructure for autonomous, governed, intelligent AI Oracles and Cognitive Digital Twin systems.