Writing with probabilistic machines ⊗ What is the right atomic unit for knowledge?

No.381 — Large language mistake ⊗ The world lost the climate gamble ⊗ 2026 Trend File ⊗ Imaginaries of Artificial Intelligence ⊗ A dam removal and a river’s rebirth ⊗ Lo—TEK Water

Writing with probabilistic machines. Created with Midjourney.

If, post Black Friday and pre Cyber Monday, you feel the urge to spend money on stuff but want to keep it low carbon and low crap, may I humbly yet totally self-servingly suggest supporting your favourite newsletters? From today to December 8th, you get 30% off the first year when signing up as a supporting member or friend.

Due to some Ghost Pro limitations, I need to create a separate promo URL per price tier and period (monthly or yearly), so I'm only offering the rebate on the two yearly plans. Sorry about that.

Also on the membership beat: should I restart the Sentiers Discord server? Link sharing, discussions, and potentially some community calls, something like AMAs on specific topics or reading clubs around article selections. Hit reply if this speaks to you.


Writing with probabilistic machines

Fabien Girardin reflects on how writing has always been a tool for thinking, and on how the struggle to shape ideas remains deeply personal despite new LLM-assisted articulation. While generative LLM tools help clarify and structure thoughts quickly, they risk promoting shallow, average ideas and bypassing the slow, difficult intellectual work needed for authentic and original thinking. (You should click through to see the graphics he uses to visualise the LLM-augmented process.)

The piece is a great example of how “we” are still in the process of figuring out the most appropriate use of LLMs. What can they replace, what shouldn’t be replaced, what can be enhanced, which new rituals might one establish? If you spend the same amount of time writing with LLMs as you would by yourself, and you do it correctly, then shouldn’t you have a better (or more) output? More questions, more challenges, more research done quicker, more time to ponder.

To counterbalance the shallowness and averaging mentioned above, Girardin also experiments with social writing practices like tertulias—regular group discussions that foster deeper reflection, challenge ideas, and nurture maturity beyond what AI alone can offer (“also called cénacle in France or salon in the English-speaking world”). He emphasises that true thinking demands active engagement, time, and vulnerability, highlighting the ongoing challenge of preserving meaningful, slow thinking in an era of rapid AI-driven writing.

I’ll daydream something here. I don’t hold out much hope of it spreading, or even being possible at a societal level, but extrapolating from Fabien’s experiences and my own, and going back to the cloister and the starship from No.373: one could imagine space and time in each day or each week where the modern tools are dropped and in-person, hand-written, read-on-paper work is done. For most people, the use of LLMs will just grow the demands on them and they’ll work as many hours, “just” faster, outputting more stuff. No leisure society for you! Perhaps some can keep enough agency and alignment with their values to carve out space for rest, or for low-tech knowledge work that engages their brains. Perhaps.

Over the last twenty years, I learned that the person who writes things down holds a particular kind of power. The power to shape how ideas spread and take root both in my own mind and with others when I share them. […]

Mostly, I accumulate stacks of half-baked drafts that serve as notes for ongoing thinking. Until recently, this was largely a solitary struggle. It still is at its core. The thinking, the choosing, the crafting of ideas remain mine alone. But the surface of writing, the articulation itself, has changed.

Practically, once a week, I brought together 4–6 colleagues (known as tertulianos) for a 1-hour online discussion. Each of us brought something in progress, a draft, a project, an outline for a presentation, readings, etc. We all use AI tools regularly, but the tertulia became a place to reflect and to let our ideas mature outside the rush of work. […]

Deep and authentic thinking demands time, curiosity, vulnerability, and willingness to sit with questions. As Lisa often says, it is not a “spectator sport.” Thinking requires active making (e.g. writing, sketching, prototyping) to develop and practice the skills at its core.

What is the right atomic unit for knowledge?

This one by Steven Johnson actually lines up perfectly with the above, as another experiment with LLM-enhanced thinking. He’s been working with Google’s NotebookLM team since the beginning, and here he argues that traditional knowledge containers like the peer‑reviewed paper and the book shaped scientific consensus for centuries but might now face an opportunity for reinvention. He describes Google’s Notebook and Deep Research tools, which aggregate, annotate, and continuously update source material to create richer, AI‑assisted research briefs. These notebooks combine human curation and automated synthesis to make complex research accessible to both specialists and non‑specialists, adding new visual modes like infographics and slides. Johnson suggests that we may be on the brink of a new atomic unit of knowledge designed around AI, one that enables different kinds of scholarly collaboration and living syntheses.

This kind of idea, to my mind, “clusters” with transdisciplinarity, cross-domain translation, increasing complexity, the superspecialisation of researchers, and the half-life of knowledge. Beyond McLuhan’s worry that “every augmentation is also an amputation,” cited by Girardin above, there are situations where LLM tools can and should help us cope with the breadth and complexity of the ideas we are juggling. So even though Johnson’s angle, akin to “we invented the next peer-reviewed paper,” seems a bit grandiose coming from him, the need for a new unit of knowledge does seem clear.

The basic idea is that you give Deep Research a topic or a complex question, and it scours the web, evaluates dozens of sources, and then writes a structured overview of what it has learned, effectively building a starter research brief on the fly. […]

Future versions might be able to generate a complex meta-analysis of all the recent research on a topic that can be updated instantly as new publications become available. […]

It feels like we are on the cusp of a new framework that might be even more significant, a knowledge unit built from the ground up with AI. These new notebooks are our attempt to imagine what that future might look like.

Large language mistake

I didn’t enjoy this one as much as the few people linking to it made it sound, but it’s worth a share. The author argues that current generative AI systems, especially large language models, are fundamentally models of language rather than of intelligence. Neuroscience and clinical evidence show that human thought largely operates independently of language, so better language modelling alone is unlikely to produce human-level general intelligence. We never expected Boston Dynamics’ running dog bots to become intelligent because they could run, or AlphaFold to be smart after folding just one more protein, yet here some expect language-focused models to become smart as they get really, really good at language.

It’s not directly linked to what the piece is saying, and some of you might have thought of it already, but at some point, reading the word “model” in the context of the article, I finally made the connection to the saying “all models are wrong, but some are useful.” George Box was talking about statistical models, and people citing the phrase tend to mean mental models, but one could easily apply it to Large Language Models. That basically puts words to my own opinion: all models make mistakes and are imperfect, but some are useful in many ways.

“Alfred Korzybski noted in 1933, ‘A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.’” LLMs are not maps, but perhaps they can be as useful as maps, if “we” (strident CEOs and some of us) could just stop hawking them to everyone as if they were seconds away from becoming territory (intelligence).


§ The world lost the climate gamble. Now it faces a dangerous new reality. A bit depressing (ok, probably a lot) but quite short so I’m not summarising it, sharing for the visual, which I’d never seen before. “We are now at a critical juncture. We are at or very close to human caused environmental change that will fundamentally unpick the life-sustaining systems on Earth. These risk triggering feedback loops, for example, the accelerating die back of rainforests which would release billions of tons of carbon dioxide which would raise temperatures even further.”


Futures, Fictions & Fabulations

  • The massive 2026 Trend File is out, and will keep growing for a few months still. “MASSIVE thanks to superstars Iolanda Carvalho [Portugal], Ci En L. [Singapore], Gonzalo Gregori [China] and myself, Amy Daroukakis [Everywhere]”
  • 2026 Global Predictions: Insights from Mintel. “Our 2026 Global Predictions go beyond traditional trend analysis. We use predictive intelligence to connect today’s consumer signals to tomorrow’s opportunities, giving you the clarity to shape the future of your industry, not just react to it.”
  • Top Trends 2026. “Horizon Futures has identified key cultural shifts that will define consumer behavior in the year ahead. These trends touch on critical areas: the tension between AI-driven content and the premium on human creation, a revival of traditional values as an anchor in chaos, and the gamification of risk as a new form of entertainment.”

Algorithms, Automations & Augmentations

  • We just launched the Imaginaries of Artificial Intelligence notebook, a growing collection aiming to “bring together a collection of imaginings, works, and creators related to AI that can inform our understanding of the world.” It’s written in French but every text is then followed by the English translation. You should keep this one bookmarked when reading Sentiers, loads of topics I mention are now documented in the “carnet.”
  • David Sacks tried to kill state AI laws — and it blew up in his face. “But crucially, they noticed how much power would have been handed to a certain South African tech-billionaire-turned-special-government-employee who’d tunneled his way into the West Wing — not Elon Musk, but the other one.”
  • “Holy shit”: Gemini 3 is winning the AI race — for now. “Google’s newest release is topping leaderboards and wowing rivals, but users aren’t dropping other models just yet.” (Anthropic’s Opus 4.5 passed it for some tests just a few days later.)

Built, Biosphere & Breakthroughs

Asides

  • Lo—TEK Water wants to reshape the world through indigenous technologies. “For designer, author, and activist Julia Watson, pinpointing myriad approaches to these all-consuming problems is one of the most critical and urgent tasks today. Her new book Lo-TEK Water, published by Taschen, highlights various Indigenous technologies and aquatic systems that could be utilized in adapting to a climate-changed world.”
  • Chinese tech companies are changing Mexico City. “That’s because authentic Chinese food options in Nuevo Polanco, an upper-class cluster of corporate buildings, high-rise apartments, and luxury stores, are everywhere. She can grab a bowl of biang biang noodles — thick, hand-pulled noodles in a spicy, garlicky sauce — at Shaanxi Sabor, a two-floor noodle shop nearby. Or a bowl of Lanzhou beef noodle soup at Yiwan Ramen.”
  • Activists are using Fortnite to fight back against ICE. “Players are role-playing ICE raids in Fortnite and Grand Theft Auto to prepare for real-world situations.”

“Ambitious, thoughtful, constructive, and dissimilar to most others. I get a lot of value from Sentiers.”

If this resonates, please consider becoming a supporting member—it keeps this work independent.

Support Sentiers

Your Futures Thinking Observatory