Chatbots undermining the Enlightenment ⊗ Flounder mode ⊗ Learners will inherit the earth
No.366 — Interviews with Brian Eno ⊗ Future Imaginaries: Indigenous Art, Fashion, Technology ⊗ Transcribing eyeglasses put subtitles on the world ⊗ NASA satellite may be destroyed on purpose ⊗ Deep sea cables that power the world

And we’re back! I hope you had time for a summer break yourself and were lucky enough to be away from the dystopia.
This is the last issue with classic scifi art chosen by guest curator Adam Rowe, thanks Adam! (It was supposed to end before the break but I needed me some Michael Whelan!)
A treatise on AI chatbots undermining the Enlightenment
Princeton professor of history David A. Bell wrote a piece for the NYT arguing that AI chatbots contradict Enlightenment values by flattering users instead of provoking sceptical inquiry and intellectual engagement. Finding that he “hits some good notes,” Maggie Appleton took the time to riff on it with her own thoughts on Enlightenment v AI.
Part of this problem is not just the prompts, but the generic interface of the helpful chatbot assistant. We are attempting to use an all-in-one text box for a vast array of tasks and use cases, with a single system prompt to handle all manner of queries.
I’ve argued the same numerous times. The frontier labs are trying to fit everything into a chat and ignoring basic UI fixes because they believe their LLMs will just become smart enough to make the problems go away.
Appleton argues that this is a result of current model and interface design, not an inherent AI flaw: models aren’t trained to criticise and it’s unreasonable to expect end-users to become expert prompt engineers. She cites studies linking heavy generative-AI use with reduced critical thinking but says the fix could be better training (for example with Constitutional AI and RLAIF) and domain-specific, critique-friendly interfaces that embed critical thinking into workflows. Her examples of critical-thinking-forcing prompts are excellent, funny if you try them, and an insightful look at some of the ways we can push LLMs towards more useful chats (I sketch the idea in code after the excerpts below).
In other words, we could have better synthetic thinking partners—tools that challenge assumptions, foster deeper reasoning and help people to actively question, debate, and deepen their understanding rather than passively consuming information.
Remember the first Enlightenment? That ~150 year period between 1650-1800 that we retroactively constructed and labelled as a unified historical event? The age of reason. Post-scientific revolution. The main characters are a bunch of moody philosophers like Locke, Descartes, Hume, Kant, Montesquieu, Rousseau, Diderot, and Voltaire. The vibe is reading pamphlets by candlelight, penning treatises, sporting powdered wigs and silk waistcoats, circulating ideas in Parisian salons and London coffee houses, sipping laudanum, and retreating to the seaside when you contracted tuberculosis. Everyone is big on ditching tradition, questioning political and religious authority, embracing scepticism, and educating the masses. […]
All of these specialist areas will eventually get their own dedicated interfaces to AI, with tailored prompts channelled through fit-to-purpose tools. Legal professionals will have document-heavy case analysis platforms that automatically surface contradictory precedents and challenge legal reasoning with Socratic questioning. Scientists will work in computational notebooks that actively critique their experimental designs and suggest alternative hypotheses. Designers will have canvases embedded with creative reasoning tools that challenge aesthetic choices and push for deeper conceptual justification. Each interface will put domain-specific critical thinking skills directly into the workflow. […]
I don’t think it’s hyperbole to suggest we’re heading into a second Enlightenment. Not just in terms of access to information and reshuffling power structures. I should be clear: I’m exceptionally bullish on AI models being able to act as rigorous critical thinking partners. They have the potential to embody those idealistic values of enabling intellectual engagement and critical inquiry. Far more than current implementations suggest.
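As a quick illustration of what one of these critique-friendly setups might look like in practice, here is a minimal sketch in Python. It assumes the OpenAI Python SDK and an API key in the environment; the model name and the system-prompt wording are my own loose illustration of Appleton’s idea, not taken from her piece.

```python
# A toy critique-forcing chat: a sketch, not a definitive implementation.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY
# set in the environment; model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

# Instead of the generic "helpful assistant", the system prompt bakes a
# critical stance into every exchange.
CRITIC_SYSTEM_PROMPT = """\
You are a sceptical thinking partner, not a helpful assistant.
For every claim the user makes:
1. Steelman it in one sentence.
2. Raise the strongest counter-argument or the evidence it is missing.
3. Ask one Socratic question that tests a hidden assumption.
Never open with praise or agreement."""

def critique(claim: str) -> str:
    """Run a user claim through the critique-forcing prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat model works; this one is an assumption
        messages=[
            {"role": "system", "content": CRITIC_SYSTEM_PROMPT},
            {"role": "user", "content": claim},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(critique("AI chatbots will inevitably make us better thinkers."))
```

The point isn’t this particular prompt; it’s that the critical stance lives in the interface layer, as Appleton suggests, instead of being something each user has to re-engineer in every session.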
Flounder mode
Considering my disdain for broligarchs and my love of Brian Eno’s thinking (see the § below), this phrase from Brie Wolfson’s interview with Kevin Kelly provides a good summation of my ambivalence about the character: “Brian Eno and Jeff Bezos are active collaborators.” Wait, what?
Kelly could be the patron saint of this newsletter … if not for him being so steeped in techno-optimist Kool-Aid. I don’t want people to be just one thing, and as a matter of fact the whole interview can be read as a paean to being a happy generalist, but it remains weird for me to observe someone so well-traveled, curious, multi-talented, and multi-interested who is also so largely uncritical of big tech and technosolutionism. We contain multitudes, I guess.
Anyway, here he’s not talking about tech; Wolfson is interviewing him to reconcile her chosen eclectic, non-optimised career with a feeling of underachievement. “Kevin Kelly would say it’s good to have an ‘illegible’ career path—it means you’re onto interesting stuff. But I wasn’t so sure anymore.” I wholly agree and share many of her feelings. If you are anywhere near calling yourself a generalist, their chat will resonate (also, lots of pics of his studio). And, despite my misgivings, Kelly remains fascinating and has had a heck of a life.
I said that there is an idiosyncratic magic to the way he follows his interests, which is that they’re not just an input; Kelly turns his interests into an output that he can share with others. When I asked if I was onto something, I learned that Kelly doesn’t think in outputs. For him, doing is part of learning. “I don’t really pursue a destination,” he said. “I pursue a direction.” […]
It’s not about finding a hole in the market or a path to global domination. The yardstick isn’t based on net worth or shareholder value or number of users or employees. It’s based on an internal satisfaction meter, but not in a self-indulgent way. He certainly seeks resonance and wants to make an impact, but more in the way of a teacher. He breathes life into products or ideas, not out of a desire to win, but out of a desire to advance our collective thinking or action. […]
I thought I was here to go deep on working Hollywood style, but as I sat there with Kelly in a room of what are best described as his toys, I realized that the most interesting thing about him is that he seems happy. At ease in the world and in his skin. I wasn’t there with Kelly for permission to work Hollywood style. I was there for permission to work with both ambition and joy.
Learners will inherit the earth
Paul Jun argues that in the face of AI disruption, adaptation through learning is the only viable strategy, using the historical example of the Luddites to illustrate how resistance without adaptation leads to defeat (that’s his summary; I don’t agree with his take, although unlike most mentions of the Luddites, it’s not uninformed). He presents a worldview where technological change is inevitable and those who embrace new tools like AI coding will gain power over those who resist.
Jun acknowledges the ethical concerns around AI but dismisses much criticism as “selective outrage” from privileged people who benefit from other problematic systems while lecturing about AI’s downsides. He’s not wrong. His core message is pragmatic: regardless of one’s feelings about AI, the technology will advance, and individuals must choose between learning to use these tools or being left behind by those who do.
To me, his very techno-deterministic framing sets up a false binary between critiquing and learning: one can engage with AI tools while remaining very critical of them and adjusting some choices accordingly. His argument that resistance often comes from a position of comfort that others don’t have is certainly one I don’t contemplate often enough. Still, his either/or framing oversimplifies the range of possible responses to technological change.
Someone with a laptop, curiosity, and ruthless determination now has more power at their fingertips. They can research any topic, learn skills, build things they couldn’t before, and play a new game that involves new experiences. […]
The person lecturing me about AI’s environmental impact also doomscrolls on TikTok for six hours, lives in one of the top 10 most expensive cities, orders GrubHub, wears Nikes, and upgrades their iPhone every year? They’re not thinking about the kid in a poor country with a failing education system whose chances of becoming “big” are slim to zero. […]
Maybe I realize my privilege and deeply understand how the average person lives, and I don’t want to waste it. The way I grew up colored my worldview to be Default Apocalypse. Maybe I subconsciously believe that the world is too far gone and I’ve reached the point of not giving a fuck. […]
Because here’s how change works: People build new things that make old systems obsolete. They don’t ask for permission. They don’t wait for regulations. They don’t tweet about fairness. They build. Because big fucks small.
§ Is AI the death of creativity? Interview with Brian Eno. “All my misgivings about AI really are to do with the fact that it’s owned by a group of people that I don’t trust at all. I don’t trust their taste, I don’t trust their morals, and I don’t trust their politics, and that’s a problem for me — that the whole technology is in the hands of the wrong people.” Also Inside Brian Eno’s studio and Brian Eno on what art does. All on YouTube; I had a bit of an Eno rush during the break.
Futures, Fictions & Fabulations
- Future Imaginaries: Indigenous Art, Fashion, Technology. “Future Imaginaries explores the rising use of Futurism in contemporary Indigenous art as a means of enduring colonial trauma, creating alternative futures, and advocating for Indigenous technologies in a more inclusive present and sustainable future.”
- Future Jobs: Robots, Artificial Intelligence, and Digital Platforms in East Asia and Pacific. “Looking ahead, digitization will enhance the tradability of services, and artificial intelligence (AI) will transform production processes. EAP countries can benefit by equipping workers with the necessary skills and opening the services sectors to trade and investment.”
- Global Foresight 2025 by the Atlantic Council. “Our next-generation foresight team spots six “snow leopards”—under-the-radar phenomena that could have major unexpected impacts, for better or worse, in 2025 and beyond. And our foresight practitioners imagine three scenarios for how the world could transform over the next decade as a result of China’s ascendance, worsening climate change, and an evolving international order.”
Algorithms, Automations & Augmentations
- These transcribing eyeglasses put subtitles on the world. “TranscribeGlass are smart eyeglasses that aim to do exactly what it says on the tin: transcribe spoken conversations and project subtitles onto the glass in front of your eyes. They’re meant for the Deaf and, primarily, the hard-of-hearing community who struggle to read lips or pick out a conversation in a loud room.”
- HMC AI — Human Classification. “Described in detail in our white paper, our aim is to support transparency in research and provide – at a glance – a standard mechanism that allows readers, researchers and decision-makers to see the extent to which research outputs have been shaped by machines, i.e. a process based approach.”
- AI industry horrified to face largest copyright class action ever certified. LoL! Right! “They’ve warned that a single lawsuit raised by three authors over Anthropic’s AI training now threatens to “financially ruin” the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.”
- Demis Hassabis on our AI future: “It’ll be 10 times bigger than the Industrial Revolution – and maybe 10 times faster”. Riiiight. “The head of Google’s DeepMind says artificial intelligence could usher in an era of ‘incredible productivity’ and ‘radical abundance’. But who will it benefit? And why does he wish the tech giants had moved more slowly?”
Built, Biosphere & Breakthroughs
- Why a NASA satellite that scientists and farmers rely on may be destroyed on purpose. One guess as to who thought this up. “The administration has asked NASA employees to draw up plans to end at least two major satellite missions, according to current and former NASA staffers. If the plans are carried out, one of the missions would be permanently terminated, because the satellite would burn up in the atmosphere.”
- The world’s smartest city is a tiny German village. “This unlikely digital pioneer didn’t achieve global recognition through wealth or top-down tech investments. Instead, Etteln faced down rural depopulation and the looming closure of its only elementary school by leaning into collective action and homegrown innovation.”
Asides
- How the deep sea cables that power the world are made. “The conduits, which are spooled in big stacks on a boat before being buried in an underwater trench, are a crucial part of the grid as demand for electricity increases.”
- Peacock feathers can emit laser beams. “Peacock feathers are greatly admired for their bright iridescent colors, but it turns out they can also emit laser light when dyed multiple times, according to a paper published in the journal Scientific Reports. Per the authors, it's the first example of a biolaser cavity within the animal kingdom.”
- A man read 3,599 books over 60 years, and now his family has shared the entire list online. I’ve got some catching up to do.
- The colors of the world, seen from the International Space Station. “Recent photographs from crew members aboard the ISS show some spectacular views of auroras, moonsets, the Milky Way, and more, seen from their vantage point in orbit.”
