How We Decide
In the autumn of 2004 I found myself in a small lecture room in the basement of the London School of Hygiene & Tropical Medicine, grappling with a deceptively simple question: what does “making a decision” actually mean?
The course—Human Judgement and Decision-making, led by Professor Jack Dowie—wasn’t about memorising statistical techniques. It was a guided confrontation with our cognitive wiring. We used influence diagrams instead of spreadsheets and probability trade-offs instead of gut feel, and we spoke the language of preference, utility and regret rather than the language of policy statements and of “evidence-based” public health decisions grounded in randomised controlled trials, which was becoming the mantra of the frequentist majority in that era of Cochrane doctrines.
My classroom experience, layered on years of designing and implementing complex health systems interventions, triggered a fundamental pivot in my worldview. Decisions were no longer endpoints; they were systems that could be designed, audited and, crucially, improved. So I decided to make this the focus of my doctoral research. But at the time, the digital tooling available for this stopped at static decision trees and Monte Carlo simulations running on a desktop. The vision of real-time, data-rich decision support—let alone decentralised autonomous decision engines—was still out of reach.
Fast-forward Twenty Years: What Changed?
- The SUBSTRATE — Web3 and Permissionless Compute
The first enabler was the emergence of blockchain and smart-contract platforms. For the first time we could guarantee that a piece of logic—from a budget allocation algorithm to a climate-credit oracle—would execute exactly as specified and leave an immutable audit trail. Decision policies could be encoded as executable public artefacts rather than PDF guidelines buried on an intranet. Human values could be crowd-sourced through tokenised voting and signalling mechanisms.
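To make that concrete, here is a minimal sketch (in Python, standing in for an actual smart contract) of a decision policy as an executable public artefact: a deterministic allocation function whose every execution is hashed into an append-only log. The function, names, and log below are hypothetical illustrations, not IXO code.

```python
# A decision policy as an executable artefact: deterministic logic plus an
# append-only record of what ran against which inputs.
import hashlib
import json

def allocate_budget(total: float, scores: dict[str, float]) -> dict[str, float]:
    """Deterministic policy: split the budget in proportion to impact scores."""
    weight_sum = sum(scores.values())
    return {name: total * s / weight_sum for name, s in scores.items()}

audit_log: list[str] = []  # stands in for an immutable on-chain record

def execute_policy(total: float, scores: dict[str, float]) -> dict[str, float]:
    decision = allocate_budget(total, scores)
    record = json.dumps({"inputs": {"total": total, "scores": scores},
                         "outputs": decision}, sort_keys=True)
    audit_log.append(hashlib.sha256(record.encode()).hexdigest())
    return decision

print(execute_policy(100_000, {"clinic_a": 2.0, "clinic_b": 1.0, "clinic_c": 1.0}))
print(audit_log)  # the fingerprint a verifier would check against the chain
```

Because the policy is a pure function and the log records a hash of inputs and outputs, anyone can re-run the logic and confirm the published decision, which is the property the PDF-on-an-intranet era could never offer.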
- The CATALYST — Modern AI
Transformer models, vector databases, and frameworks for streaming inference unlocked pattern recognition at a scale that my 2005 PhD prototype could only gesture towards. What once required manual feature engineering now flows from self-supervised representation learning. The upshot: decision modules can ingest unstructured evidence—remote-sensing pixels, lab notes, social sentiment—without months of data-wrangling gatekeeping the analysis.
- The COMPASS — Causal Reasoning
Statisticians long warned against mistaking correlation for cause, but Judea Pearl provided a calculus—do-calculus—for doing something about it. Instead of asking “What predicts X?” we can formalise “What action would change X, and by how much?”. It was immediately obvious to me that causal models could turn Theories of Change, currently implemented with narrative tools such as Logframes, into explicit models that include counterfactuals and support causal discovery of features and relationships hidden in data. This field has been maturing just as Web3 is being used to provide transparent data provenance, and as AI has become remarkably good at delivering scalable function approximation. Suddenly an end-to-end pipeline that estimates causal effects, encodes them in smart contracts and triggers automated incentive flows became plausible.
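A toy simulation makes the “predicts” versus “would change” distinction concrete. In the sketch below (all variables and numbers invented), a hidden common cause Z drives both X and Y, so the observational estimate P(Y | X=1) overstates what actually setting X to 1 would achieve:

```python
# A toy structural causal model illustrating observing vs. intervening.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
z = rng.binomial(1, 0.5, n)                   # hidden common cause
x = rng.binomial(1, 0.2 + 0.6 * z)            # X is driven partly by Z
y = rng.binomial(1, 0.1 + 0.2 * x + 0.5 * z)  # Y depends on both X and Z

# "What predicts Y?" -- observational conditioning, confounded by Z
print("P(Y=1 | X=1)     =", y[x == 1].mean())     # ~0.70

# "What action would change Y?" -- simulate do(X=1) by severing X from Z
x_do = np.ones(n, dtype=int)
y_do = rng.binomial(1, 0.1 + 0.2 * x_do + 0.5 * z)
print("P(Y=1 | do(X=1)) =", y_do.mean())          # ~0.55, the true causal effect
```

The gap between the two numbers is exactly the confounding that narrative Theories of Change leave implicit and that explicit causal models surface.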
Context Matters
One of Jack Dowie’s mantras, as he introduced us to Bayesian inference, was that “no probability exists in a vacuum”.
A prior reflects everything we already know—or believe—about a question before seeing the latest evidence; the posterior is how that belief shifts once the new data arrive.
Get the prior wrong, and even perfect data analysis will mislead you. Get it right, and you can make reliable decisions with surprisingly little data because your context is doing half the work.
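In symbols, with θ the hypothesis and D the new data: P(θ | D) = P(D | θ) · P(θ) / P(D); the posterior is just the prior reweighted by how well each hypothesis explains the evidence.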
A practical illustration
Imagine two laboratories evaluating the same new sequencing device:
- Lab A has a long track-record of working with Illumina machines, so its prior says any unfamiliar platform carries a 30 % chance of teething problems.
- Lab B is green-field and vendor-agnostic; its prior is closer to “all devices start equal.”
Both labs run identical three-week pilots and observe a single failure on the new platform. Bayes updates Lab A’s sceptical prior slightly—failure probability nudges to 32 %. Lab B’s neutral prior jumps to 14 %. Same evidence, two rationally different conclusions.
Context—expressed as priors—determines how heavily the new evidence bends the decision path.
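For readers who want the mechanics, here is a minimal Beta-Binomial sketch of the two labs’ updates. The hyperparameters and the assumed 12-run pilot are illustrative, so the exact posteriors differ slightly from the figures above; what matters is the asymmetry, in that the same single failure barely moves a strong prior but substantially reshapes a weak one.

```python
# Beta-Binomial updating: prior strength determines how far one failure moves us.

def posterior_mean(alpha: float, beta: float, failures: int, runs: int) -> float:
    """Beta(alpha, beta) prior + Binomial evidence -> posterior mean failure rate."""
    return (alpha + failures) / (alpha + beta + runs)

runs, failures = 12, 1  # assumed pilot: 12 runs, one failure

# Lab A: the equivalent of 100 prior runs, centred on a 30% problem rate
lab_a = posterior_mean(alpha=30, beta=70, failures=failures, runs=runs)

# Lab B: uniform prior, "all devices start equal"
lab_b = posterior_mean(alpha=1, beta=1, failures=failures, runs=runs)

print(f"Lab A posterior: {lab_a:.1%}")  # ~27.7%, barely moved from 30%
print(f"Lab B posterior: {lab_b:.1%}")  # ~14.3%, pulled strongly by one pilot
```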
Priors at planetary scale
Google’s early PageRank treated the link graph as its only evidence of relevance and served the same ranking to everyone—an implicit, context-free prior. Over time Google learned that who links, how often, and in what language are signals that should weight that prior; more radically still, who is searching matters too. Today, when you type “mercury,” Google quietly checks your geolocation, search history and even device modality. If you are a Singapore-based engineer who yesterday queried “thermometer calibration,” the prior probability for “liquid metal element” shoots up, while the planet and Roman god recede.
The ranking you see is the posterior of billions of micro-Bayesian updates, executed in milliseconds.
That contextualisation has turned a static index into a near-prescient oracle—and, in doing so, it has concentrated extraordinary gate-keeping power in a single algorithm.
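A miniature, entirely hypothetical version of that contextual reranking, with invented numbers: a global prior over senses of “mercury”, reweighted by how likely yesterday’s query “thermometer calibration” would be under each sense.

```python
# One micro-Bayesian update: prior x context-likelihood -> posterior ranking.
priors = {"element": 0.2, "planet": 0.5, "roman_god": 0.3}

# Likelihood of the query "thermometer calibration" under each sense
likelihood = {"element": 0.9, "planet": 0.08, "roman_god": 0.02}

evidence = sum(priors[s] * likelihood[s] for s in priors)
posterior = {s: priors[s] * likelihood[s] / evidence for s in priors}

for sense, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{sense:10s} {p:.2f}")   # element now dominates the ranking
```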
Social-media feeds moved down the same Bayesian slope:
Early Facebook showed posts chronologically—effectively a flat prior over friends’ updates. Once engagement metrics became the objective, every click, pause and share was folded back as Bayesian evidence, refining a personal prior about what you are likely to find irresistible. The result is a feed that anticipates your curiosity before you are conscious of it—sometimes for good (surfacing a long-lost friend’s milestone), sometimes amplifying outrage because that, statistically, keeps you scrolling.
The prior keeps tightening; the posterior keeps locking in.
Context-aware ranking systems haven’t just organised information—they have reorganised attention, public discourse, and, by extension, democratic processes.
Why this matters in the work we are doing
Our decision oracles must therefore expose their priors—whether they come from domain expertise, historical datasets, or governance policy.
A travel-restriction model that starts with a prior of “low outbreak → no ban” will behave very differently from one that encodes a precautionary principle.
By making priors explicit and adjustable, we ensure that context is a transparent design choice, not an invisible bias.
Bayesian-inspired mechanisms then become features—not hidden determinants—of how we decide.
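What might an exposed prior look like in practice? One possible shape, sketched below with a hypothetical schema and invented values, is a published, machine-readable declaration that names the parameter, the distribution, and, critically, where the prior came from:

```python
# A hypothetical machine-readable prior declaration for a decision oracle,
# loosely modelled on the travel-restriction example above.
from dataclasses import dataclass, asdict
import json

@dataclass
class PriorDeclaration:
    parameter: str          # the quantity the prior is about
    distribution: str       # family, e.g. "beta"
    hyperparameters: dict   # e.g. {"alpha": 2, "beta": 38}
    source: str             # domain expertise, historical data, or policy
    rationale: str          # human-readable justification

precautionary = PriorDeclaration(
    parameter="outbreak_probability",
    distribution="beta",
    hyperparameters={"alpha": 2, "beta": 38},   # mean 5%, deliberately cautious
    source="governance_policy",
    rationale="Precautionary principle adopted by governance vote.",
)

# Publishing the prior makes it a reviewable, adjustable design choice
print(json.dumps(asdict(precautionary), indent=2))
```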
The IXO Journey: Operationalising Decision Science
When we created IXO—the Internet of Impact—the ambition was straightforward: embed rigorous decision logic inside the fabric of economic transactions to optimise the outcomes of impact-generating systems. Climate-credit issuance, youth-skills funding, pathogen-genomics data exchange—all are, at heart, sequences of decisions under uncertainty, with competing objectives and asymmetric information.
We borrowed three design axioms straight from my LSHTM notebook:
- Frame every material choice explicitly. Enumerate options, clarify objectives, expose trade-offs.
- Couple judgement with evidence through modelling. Use probabilities and utilities, not hunches and hierarchies.
- Record and review. A decision without an audit trail is an opinion; with a trail of verifiable claims and independently certified outcomes, it becomes a learning asset.
From Decision Trees to Agentic Oracles
IXO’s agentic oracles are micro-services that consume cryptographically signed claims, run causal or probabilistic inference, and issue verifiable attestations. Under the hood they blend methods such as multi-criteria decision analysis (MCDA)—something Professor Dowie would recognise—with modern deep-learning components that infer criterion scores in real time. The final aggregation remains interpretable; the evidence extraction is data-hungry AI.
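As a sketch of the interpretable aggregation step, here is classic MCDA weighted scoring; in a live oracle the criterion scores would be inferred by AI components from signed claims, but the criteria, weights, and scores below are hypothetical:

```python
# MCDA weighted-sum aggregation with an explainable per-criterion breakdown.
weights = {"throughput": 0.4, "error_rate": 0.35, "total_cost": 0.25}

# Scores normalised to [0, 1], higher is better (e.g. a low error rate scores high)
options = {
    "sequencer_x": {"throughput": 0.8, "error_rate": 0.6, "total_cost": 0.5},
    "sequencer_y": {"throughput": 0.6, "error_rate": 0.9, "total_cost": 0.7},
}

for name, scores in options.items():
    contributions = {c: weights[c] * scores[c] for c in weights}
    total = sum(contributions.values())
    # The per-criterion breakdown is what keeps the aggregation explainable
    print(name, round(total, 3), contributions)
```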
Why “Do-Why” Matters
Vanilla machine-learning pipelines optimise prediction accuracy. Impact programmes need counterfactual confidence: would emissions have dropped if the intervention hadn’t happened? Pearl’s framework, and libraries like doWhy, allow us to encode assumptions transparently, compute causal effects, and stress-test their sensitivity. Embedding those causal graphs in on-chain oracles gives us an executable “why” and auditable actions, along with the numerical “how much”.
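A minimal sketch of that workflow with the doWhy library, on synthetic data with hypothetical variable names, runs roughly as follows: declare the assumed graph, identify the estimand, estimate the effect, then stress-test it with a refuter.

```python
# The doWhy workflow on synthetic data: assumptions in, audited effect out.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 1000
region = rng.binomial(1, 0.5, n)                     # confounder
intervention = rng.binomial(1, 0.3 + 0.4 * region)   # treatment, confounded by region
emissions = 10 - 2.0 * intervention + 1.5 * region + rng.normal(0, 1, n)

df = pd.DataFrame({"region": region,
                   "intervention": intervention,
                   "emissions": emissions})

model = CausalModel(
    data=df,
    treatment="intervention",
    outcome="emissions",
    common_causes=["region"],    # the assumption set, encoded explicitly
)
estimand = model.identify_effect()   # apply identification rules
estimate = model.estimate_effect(estimand,
                                 method_name="backdoor.linear_regression")
print(estimate.value)                # ~ -2.0, the true simulated effect

# Stress-test the assumptions: a placebo treatment should show no effect
refutation = model.refute_estimate(estimand, estimate,
                                   method_name="placebo_treatment_refuter")
print(refutation)
```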
Three Illustrative Decisions
| Decision Context | Old World (2003) | IXO Approach (2025) |
|---|---|---|
| Choosing a sequencer for a genomic lab | Spreadsheet scoring matrix, vendor brochures, expert panel debate | MCDA with causal priors: AI predicts throughput, error rates and total cost; causal model estimates impact on diagnostic turnaround; weighted sum published on-chain with explainable breakdown |
| Imposing travel restrictions during an outbreak | Manual risk matrices, political negotiation | Real-time risk classifier fed by genomic surveillance and mobility data; causal module simulates counterfactual spread under restriction vs. no action; policy token triggers only if net-benefit probability exceeds threshold |
| Routing users to the right oracle service | Keyword search & menu browsing | Embedding-based recommender ranks services; explanation layer surfaces which intent tokens matched which oracle capabilities; user feedback loops into causal bandit to avoid popularity bias |
Each outcome is not merely a recommendation but a cryptographically signed decision artefact: inputs, reasoning graph, confidence intervals, and distributional impact all preserved for post-hoc evaluation.
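One hypothetical shape for such an artefact is sketched below; SHA-256 hashing stands in for the full signing step, which in a deployed oracle would use the oracle’s private key and anchor the digest on-chain.

```python
# A hypothetical decision-artefact schema: inputs, reasoning graph reference,
# uncertainty, and a content digest ready for signing.
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass
class DecisionArtifact:
    inputs: dict                 # signed claims the oracle consumed
    reasoning_graph: str         # reference to the causal/decision model used
    confidence_interval: tuple   # uncertainty around the recommendation
    recommendation: str

artifact = DecisionArtifact(
    inputs={"claims": ["claim:genomics:123", "claim:mobility:456"]},
    reasoning_graph="model:travel_restriction:v3",
    confidence_interval=(0.62, 0.81),
    recommendation="restrict",
)

payload = json.dumps(asdict(artifact), sort_keys=True).encode()
digest = hashlib.sha256(payload).hexdigest()
print(digest)   # the fingerprint that would be signed and preserved on-chain
```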
The Road Ahead: Design Principles for Next-Gen Decision Systems
- Make assumptions visible. Whether statistical priors or ethical guardrails, encode them in machine-readable form and publish.
- Fuse correlation and causation. Use deep nets to spot patterns; use causal graphs to test interventions.
- Keep humans in the loop at the right junctions. Automation excels at consistency; humans excel at value judgements when objectives conflict.
- Reward learning, not just outcomes. Every decision artefact is data. Feed the misses back into model refinement as eagerly as the hits.
- Govern by protocol. Decisions that allocate resources or restrict freedoms should execute via transparent, tamper-proof logic, not by opaque committee decree.
Closing the Loop
Looking back, that LSHTM course did more than introduce decision analysis; it implanted a conviction that better choices are engineered, not accidental. Two decades later, IXO’s architecture—rooted in Web3 transparency, AI prediction, and causal inference—feels like the practical answer to the questions we debated in a basement classroom during 2004.
We still gather uncertain evidence, weigh competing objectives and live with the consequences. The difference is that today our decisions can be explicit, data-rich, algorithmically consistent and publicly auditable. In short, we finally have the tooling to do why—to act on causes rather than chase correlations—and to make every consequential choice a learning opportunity.
This is how we can decide to decide now, and how we will decide better tomorrow.
Footnote:
Professor Jack Dowie is a prominent academic in health impact analysis and decision-making. He served as Professor of Health Impact Analysis at the London School of Hygiene and Tropical Medicine (LSHTM) and is now Professor Emeritus. His work focuses on decision technologies, multi-criteria decision analysis, and preference-sensitive approaches in healthcare. His classroom was a Socratic arena where clinical guidelines, public policies, and personal choices were all dismantled into objectives, uncertainties, and preferences, then rebuilt under the uncompromising lens of Bayes and expected utility.
Key Contributions
- Decision Analysis: Jack Dowie has extensively explored decision analysis, particularly its application in clinical settings. He advocates for integrating evidence-based, cost-effective, and preference-driven methods into medical decision-making.
- Teaching: He taught modules on professional judgment and decision-making, including courses at LSHTM and the Open University. His courses often challenged participants to reconsider intuitive resistance to analytical approaches.
- Publications: Dowie authored significant works like Professional Judgment: A Reader in Clinical Decision Making and numerous articles on health economics, clinical decision support, and patient-centered care.
Philosophy
Dowie emphasises the balance between intuition and analytical reasoning in decision-making. He built on frameworks such as Kenneth Hammond’s “Cognitive Continuum” to assess the interplay between rapid intuitive processes and slower analytical ones. He also highlights the importance of processing emotions into value judgments for effective decision analysis.
Influence
Jack Dowie’s work remains influential in shaping how healthcare decisions are analysed and implemented globally. For me personally, attending his 2004 course detonated two assumptions:
- Technology is neutral. Dowie showed that every model encodes values; hiding them behind equations is camouflage, not objectivity.
- Institutions decide, people implement. He flipped the script: people decide; institutions should clarify and honour those decisions—or be redesigned.
Those insights recalibrated my trajectory: they led me to PhD research on decisioning systems, to the eventual birth of IXO, and to a two-decade obsession with making agents’ choices explicit, auditable, and aligned with real human stakes. Dowie didn’t just teach me how to model decisions; he taught me why transparent decision-making is itself a moral act.