Updating mental models of risk ⊗ An AI tool for learning critical thinking ⊗ What machines don’t know
No.375 — Future Risks Report ⊗ AI for scientific discovery is a social problem ⊗ Mega batteries are unlocking an energy revolution ⊗ Codex Atlanticus

Updating mental models of risk
In this piece on Issues, the authors mention that a “well-documented tendency of humans is to notice and focus on immediate, visible dangers rather than long-term or abstract ones.” This sidetracked me because I was reminded of other shortcomings hindering our understanding of the current human predicament. Edward O. Wilson said that “we [humans] have Paleolithic emotions, medieval institutions and godlike technology.” We also have exponential growth bias, whereby we “intuitively underestimate exponential growth.” And people underestimate the income of the top 1%, to my mind because they don’t quite grasp the difference between owning a few million dollars and what a billion dollars is. Finally, as I mentioned in No.372, LLMs are probabilistic, and we seem to evaluate them poorly or hold misaligned expectations because we are used to machines being deterministic.
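Numbers make these biases easier to feel. Here's a quick back-of-the-envelope sketch, mine and not from any of the linked pieces, of the million/billion gap and of how linear intuition undershoots compounding:

```python
# My own illustration (not from the Issues piece): the gap between a
# million and a billion, and how linear intuition undershoots compounding.

SECONDS_PER_DAY = 60 * 60 * 24

print(f"A million seconds is about {1e6 / SECONDS_PER_DAY:.1f} days")            # ~11.6 days
print(f"A billion seconds is about {1e9 / SECONDS_PER_DAY / 365.25:.1f} years")  # ~31.7 years

# Exponential growth bias: ten years of 3% annual growth is not +30%.
print(f"Ten years at 3% compounding: +{(1.03 ** 10 - 1) * 100:.1f}%")  # ~+34.4%
```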
Getting back to the risk piece, the authors argue that disasters are no longer discrete events but interconnected crises that demand a fundamental shift in how we understand and manage risk. They use recent catastrophes—the 2025 Los Angeles wildfires and Hurricane Helene’s devastation of Appalachia—to illustrate how hazards now cascade and compound, overwhelming traditional response systems. As I mention above, they contend that our mental models remain stuck in an outdated framework that treats disasters as isolated incidents, leading to reactive policies rather than proactive prevention.
The authors propose reconceptualizing risk through four components: hazard (including slow-moving threats like disease vectors and invasive species), vulnerability (recognizing that wealth can create new fragilities through technological dependencies), exposure (extending beyond physical proximity to include supply chain connections), and response (acknowledging how misinformation and reactive governance undermine effective action). While they point to successful models from Japan and the Netherlands, the authors acknowledge that local resilience—though necessary—cannot substitute for coordinated, large-scale governance. Communities must organize to protect themselves in the short term, but this local action should ultimately build the political pressure needed to demand institutions capable of addressing planetary-scale threats.
Look back → All the way back in No.126, I featured Jamais Cascio’s Facing the Age of Chaos, detailing his BANI (Brittle, Anxious, Nonlinear, and Incomprehensible) framework. Related and still worth a read.
Research shows that neighborhoods with stronger social ties and more opportunities for connection—what some scholars call social infrastructure—have better disaster outcomes. Even if citizens distrust distant politicians, communities can still cooperate internally to strengthen risk education, create warning systems, maintain emergency stockpiles, and pursue avenues to limit vulnerability. […]
A fire, for instance, can damage power infrastructure, crippling water treatment plants and leading to a lack of potable water. Similarly, widespread economic immiseration can erode public trust and fuel social discontent, escalating into political strife. These are not merely sequential events but intricate feedback loops where systemic failures in one domain exacerbate adverse outcomes in another, creating reinforcing cycles of disruption. […]
Even as the need for integrated, future-oriented resilience systems becomes clearer, the United States is moving in the opposite direction. Recent policy shifts move the burden of foresight and preparedness onto states, cities, tribal governments, and counties. In these jurisdictions, trust in public institutions may still be largely intact, and decisive adaptive leadership is still possible. […]
In sum, the notion that wealth and technology invariably reduce vulnerability is flawed. Wealth and technology can be used to harden societies against risk, but when complex systems break down, the very advantages enabled by wealth and technology can become failure points.
“Ambitious, thoughtful, constructive, and dissimilar to most others. I get a lot of value from Sentiers.”
If this resonates, please consider becoming a member—it keeps this work independent.
An AI tool for learning critical thinking
Oh I really love this post and project by Vaughn Tan. It’s a fantastic example of starting from what LLMs can do and completely reinventing how we (in this case students) interact with them. He built a tool called CONFIDENCE INTERVAL that inverts the typical AI interaction model. Instead of generating content for students, it guides them through a structured process of developing their own arguments through what he calls “iterative scaffolding.”
The tool treats AI as a “Socratic mirror” that reflects back what students write in reframed ways, helping them spot weaknesses in their logic and evidence. Students maintain control over all value judgments—what Tan calls “meaningmaking,” the distinctly human work of deciding what matters and why. The AI handles information organization and systematic prompting, but students do the actual thinking. Initial testing with first-year undergraduates showed dramatic results: after a single two-hour session, students transformed vague proposals into sharp, well-justified arguments. The tool addresses a practical problem in education—how to teach critical thinking at scale when expert instruction is scarce—by encoding effective pedagogical approaches into a system students can use independently. Tan explains that this represents a fundamentally different approach to educational AI, one that preserves human agency while leveraging machine capabilities for appropriate support tasks. The tool is now ready for beta testing and you can sign up to try it out.
The result is a slowly burning educational catastrophe. Students develop dependency on these AI tools without developing the judgement needed to use them well. They lose practice making the decisions that prepare them for leadership roles where human reasoning about what matters most cannot be delegated to machines. […]
My criticism of AI tools is, in fact, more a criticism of the superficial approach to thinking about the interaction logics that are designed into these tools. If we understand the difference between what humans must do (meaningmaking) and what machines can do better than humans, we can design this understanding into AI tools that help humans learn how to do meaningmaking better. […]
This matters because the critical thinking skills students develop — making value judgements, surfacing assumptions, understanding audiences, evaluating evidence — are precisely what humans need to thrive in an AI-saturated world. As AI handles more routine work, sophisticated human meaningmaking becomes the rate-limiting resource for innovation, strategy, and social progress.
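To make Tan's inversion concrete, here is a minimal sketch of what an interaction loop along these lines could look like. To be clear, this is a generic illustration under my own assumptions, not Tan's actual CONFIDENCE INTERVAL implementation; ask_llm is a hypothetical stand-in for any chat-completion call.

```python
# Generic sketch of an "inverted" AI interaction loop: the model reflects
# and questions, the student writes. NOT Tan's actual implementation;
# `ask_llm` is a hypothetical stand-in for any chat-completion function.

REFLECT_PROMPT = """You are a Socratic mirror. Do not write arguments or
content for the student. Restate their draft in your own words, then ask
up to three questions that expose gaps in logic or evidence. All value
judgements stay with the student."""

def scaffolding_round(ask_llm, student_draft: str) -> str:
    """One iteration: the model reframes and probes the draft; the student,
    not the model, produces the next revision."""
    return ask_llm(system=REFLECT_PROMPT, user=student_draft)

# Usage: the student revises after each round, so the argument stays
# theirs; the model only organizes and prompts.
# feedback = scaffolding_round(ask_llm, "Cities should ban cars because...")
```

The point of the structure is that content never flows from machine to student; only reframings and questions do.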
What machines don’t know
In this one, Eryk Salvaggio explains how large language models operate by placing words into shifting numerical vector spaces, producing plausible text without genuine understanding or imagination. He really tries to unfold every phrase and expression to make each concept and distinction clear, which makes the piece a bit hard to parse. For my taste anyway, as I had to reread some passages a couple of times. But the exercise is worth the time; it’s a useful brick for a clearer understanding of LLMs’ “thinking” vs our thinking.
Unlike human language, which is driven by the articulation of personal experience and meaning, LLM language is governed by rules about where words can be placed. The imagination and creativity in language come from social activation and human interpretation, not from the models themselves. While LLMs are sophisticated next-token predictors, they do not truly understand or reflect on the language they produce.
The decision to equate human thought with complex machine slotting has significant social implications. It presupposes that human expression is only and without exception the automation of grammar, that words always and without exception determine, for themselves, when they will appear. The mind becomes a vast mathematical vector space through which words assert themselves rather than a personal library through which words are, sometimes, found. […]
It navigates not through conscious reflection of where it ought to be, but as a result of following a structure that shifts around it. Language is “slotted in,” rather than “produced.” And it is humans who do all the work. […]
Human language is motivated by the articulation of thought; machine language is crafted through structure.
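For readers who want the bare mechanics behind “slotted in,” here is a toy sketch of next-token prediction. The numbers are made up; a real model computes its scores from the whole context, but the selection step really is this mechanical:

```python
import numpy as np

# Toy sketch of next-token prediction: map each vocabulary word to a
# score (logit), turn scores into probabilities, sample one word.
# The logits here are random stand-ins for what a real model computes.

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat", "on"]
logits = rng.normal(size=len(vocab))

# Softmax: scores -> probability distribution over the vocabulary.
probs = np.exp(logits) / np.exp(logits).sum()
next_token = rng.choice(vocab, p=probs)

print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

There is no reflection anywhere in that loop, which is Salvaggio's point: the imagination we read into the output is ours.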
Futures, Fictions & Fabulations
- Future Risks Report. “The Future Risks Report explores the risks we may face in the future. This report is based on an annual survey asking 3,600 experts from 57 countries and a representative sample of 23,000 individuals from 18 countries to rank their top 10 risks, based on their potential impact on society over the next five to ten years.”
- The Future of Sport and AI 2025. “AI is no longer on the sidelines, it’s transforming how sport is played, managed, and experienced. Discover how to navigate this shift and stay competitive in an increasingly AI-driven arena.” Which reminded me of Winning Formula by Near Future Laboratory and Changeist/Scott Smith in 2014.
- Anticipation Conference 2026. In Milan, Italy. “Anticipation 2026 is an interdisciplinary conference for rethinking how ideas about futures operate within conditions of uncertainty, indeterminacy, and unknowing. Bringing together researchers, designers, philosophers, policy makers, and practitioners, the conference opens space for exploring how futures are shaped through aesthetics, ethics, epistemologies and material practices.”
Algorithms, Automations & Augmentations
- Pay-per-output? AI firms blindsided by beefed up robots.txt instructions. “Leading Internet companies and publishers—including Reddit, Yahoo, Quora, Medium, The Daily Beast, Fastly, and more—think there may finally be a solution to end AI crawlers hammering websites to scrape content without permission or compensation. Announced Wednesday morning, the ‘Really Simple Licensing’ (RSL) standard evolves robots.txt instructions by adding an automated licensing layer that’s designed to block bots that don’t fairly compensate creators for content.” A rough sketch of the general idea follows this list.
- Across the US, activists are organizing to oppose data centers. “As this boom accelerates, public scrutiny is intensifying. Communities across the country are raising questions about environmental impacts, energy demands, and the broader social and economic consequences of this rapid buildout.”
- AI for scientific discovery is a social problem. “Artificial intelligence promises to accelerate scientific discovery, yet its benefits remain unevenly distributed. While technical obstacles such as scarce data, fragmented standards, and unequal access to computation are significant, we argue that the primary barriers are social and institutional.”
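On the RSL item above: I haven't gone through the spec, so here is a purely illustrative sketch of the general robots.txt-plus-licensing-pointer idea, not actual RSL syntax. The License: directive, URL, and parser are all hypothetical.

```python
# Purely illustrative, NOT actual RSL syntax: the idea of layering a
# machine-readable licensing pointer onto robots.txt. The "License:"
# directive and URL below are hypothetical stand-ins.

ROBOTS_TXT = """\
User-agent: *
Allow: /

# Hypothetical pointer: crawlers fetch the terms and compensate accordingly.
License: https://example.com/license.xml
"""

def licensing_url(robots_txt: str) -> str | None:
    """Extract the (hypothetical) License directive, if present."""
    for line in robots_txt.splitlines():
        if line.lower().startswith("license:"):
            return line.split(":", 1)[1].strip()
    return None

print(licensing_url(ROBOTS_TXT))  # https://example.com/license.xml
```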
Built, Biosphere & Breakthroughs
- How mega batteries are unlocking an energy revolution. Scrolling story by FT, with lots of visuals. “As cheap, plentiful solar power floods the grid in the middle of the day, hundreds of battery installations bank the energy and discharge it in the evening when people return home from work and demand — as well as prices — spike. This has shored up the grid, extended the state’s use of renewable energy and reduced its reliance on fossil fuels.”
- A ‘Secret Weapon’ for fighting climate change comes surging back. “Capturing carbon 35 times faster than the Amazon, seagrasses have faced centuries of decline. Now restoration projects across North America are seeing their meadows quadruple in size.”
- The Amazon’s trees might be more resilient to climate change than we thought. “A team of nearly 100 researchers monitored and analyzed tree sizes across 188 permanent plots between 1971 and 2015. They found the average tree size—including both small and large trees—increased by 3.3 percent every decade. The researchers attribute the growth to rising carbon dioxide concentrations linked to Earth’s warming climate, suggesting the trees have some level of resilience to climate change, at least in the short term.”
Asides
- Codex Atlanticus. “The largest existing collection of original drawings and text by Leonardo da Vinci, presented at the Biblioteca Ambrosiana in Milan.”
- The movies that defined Gen X. “Ferris Bueller was who we wanted to be—skip school, clown on authority, hijack a parade. But the same Matthew Broderick shows up in WarGames, almost nuking the planet from his bedroom. That contradiction was us: cocky on the surface, terrified underneath that the adults in charge didn’t have the answers. In fact, they might just drive us straight into nuclear winter.”
- How The Studio created a convincing Frank Lloyd Wright building. “Rather than design in the style of Wright, the production team aimed to create a building that was a realistic design from the architect, with Wright name checked as the architect several times in the show.”