A new political compass ⊗ Normal technology at scale ⊗ Illusion or reason?
No. 361 — The future needs better storytellers ⊗ TikTokers pretending to be AI creations ⊗ Google battling ‘fox infestation’ ⊗ Humpback whales blowing bubble rings at people

A new political compass
I’d never heard of Dan Zimmer, the author of this essay, and he doesn’t seem to have a large online presence. Which is a shame, because I’d love to read more of his thinking. Here he traces the emergence of a new political compass moving beyond traditional left-right divisions. This framework contrasts “Up-wing” thinking, which prioritizes technological transcendence and information processing, with “Down-wing” perspectives focused on ecological balance and systems complexity. Both approaches shift allegiance from human welfare to Life itself, but diverge fundamentally in their vision of Life’s future: Up-wingers like Elon Musk seek to free Life from biological constraints through technological advancement and cosmic expansion, while Down-wingers envision restoring planetary homeostasis through self-limitation and ecological wisdom—although the extreme down, in Zimmer’s argument, goes all the way down to depopulation.
Cybernetics, introduced by mathematician Norbert Wiener in 1948, provided the intellectual foundation for both wings. Wiener described cybernetics as the study of how complex systems survive in a hostile universe, redefining Life as systems that “locally swim upstream against the current of increasing entropy.” According to Zimmer, this reconception of living things as complex information processing systems allowed researchers to place human-built artifacts—from thermostats to computers to economies—on the same continuum as biological organisms. By the 1970s, this cybernetic framework had crystallized into a vision of Life as the interconnected system of humans, their technologies, and all earthbound organisms.
The cybernetics movement eventually split along two paths: those drawn to complex systems ecology emphasized interconnectedness, developing fields like Earth System Science and Gaia theory, while others focused on information processing, pursuing artificial intelligence and cognitive science. The environmental crises of the 1960s-70s further polarized these camps. Systems ecologists adopted posthumanist positions that prioritized planetary health over human expansion, while information-focused researchers developed transhumanist visions of overcoming biological limitations through technology. This ideological divergence explains today’s political realignment, with tech oligarchs finding common cause with right-wing movements not out of shared values but because they see in figures like Trump allies who won’t impede their pursuit of technological advancement at any ecological cost.
I’ve been wondering what the next such axis might be, and I have to say that this one makes a lot of sense to me. The author also shares a quadrant view, merging left-right with up-down. Now, the fact that proponents of each position exist doesn’t mean that any party is anywhere near holding positions like these; even the broligarchs haven’t re-formed the Republicans, they have “just” used the big TACO. Post-him, I don’t expect that party to stick to those positions. Who might be the strongest proponents of center-down politics? Which parties internationally might be classified there?
One of the most defining features of Down-wing thinking proves to be an embrace of transience, finitude and self-limitation. The Down-wing refines the tragic ethos that Wiener placed at the heart of cybernetics when recasting Life as “an island here and now in a dying world.” […]
Microbiologist and Down-wing luminary Lynn Margulis proposed: “No matter how much our own species preoccupies us, life is a far wider system. Life is an incredibly complex interdependence of matter and energy among millions of species beyond (and within) our own skin.” […]
Whatever the near-term human or ecological cost, all Up-wingers must do is keep the interlocking gears of economic and technological progress grinding until they trigger the intelligence explosion. Then, if the world of organic biology still matters on the far side of the Singularity, it will be a comparatively simple matter for the coming artificial superintelligence to reverse all the ecological harm done. […]
This will require convincing those who remain on the modern political left and right that no vision of human flourishing can succeed without accounting for human beings’ ecological entanglements, technological entailments and the broader demands of Life itself.
Normal technology at scale
Mike Loukides argues against the notion of superintelligent AI, positioning artificial intelligence as “normal” technology rather than an existential threat—he’s riffing off of Arvind Narayanan and Sayash Kapoor’s AI as Normal Technology, a 1h23m read I haven’t started yet. He wants us, rightly, to focus on more concrete dangers: AI’s ability to amplify existing problems through unprecedented scale. While humans have always made biased decisions, AI systems can now make those same mistakes at scale, rejecting applicants en masse or wrongly profiling entire populations instantly. Loukides explains that these risks stem not from AI itself but from economics and consolidation—the “ethics of scale.” The economic shift toward consolidation across industries created the conditions for data at scale, which in turn enabled AI, forming a vicious cycle where AI further amplifies scale-related problems.
The second part of his piece is a bit weird; he’s making an argument I might summarise as “dark forest with bits of AI.” That is, he says that “we need to build new communities that are designed for human participation, communities in which we share the joy in things we love to do,” but also that “AI can help with that building, if we let it.” I agree that AI could be used in very different ways—and I’d add that ideally it would have been controlled by entirely different people—but making his argument through the hippie music of the ’60s is kind of awkward.
Once we realize that the problems we face are rooted in economics and scale, not superhuman AI, the question becomes: How do we change the systems in which we work and live in ways that preserve human initiative and human voices? How do we build systems that build in economic incentives for privacy and fairness? […]
I think we’re blessed. We live at a time when the tools we build can empower those who want to create. The barriers to creating have never been lower; all you need is a mindset. Curiosity. How does it work? Where did you come from? What does this mean? What rules does it follow? How does it fail? Who benefits most from this existing? Who benefits least? Why does it feel like magic? What is magic, anyway? It’s an endless set of situationally dependent questions requiring dedicated focus and infectious curiosity. […]
Humans can want to do things, and we can take joy in what we do. Remembering that will be increasingly important as the spaces we inhabit are increasingly shared with AI. Do what we do best—with the help of AI. AI is not going to go away, but we can make it play our tune.
Illusion or reason?
Apple researchers released a paper saying LLMs don’t really reason. Gary Marcus wrote about it and believes it proves that LLMs are not doing the thing that we call reasoning. Nathan Lambert takes his turn and kind of moves the goalposts, saying that “just because an AI doesn’t have all the tools that we use to interact intelligently with the world does not mean it isn’t reasoning” and arguing that demonstrating specific failures doesn’t disprove reasoning capabilities.
I’m sharing both for two reasons: first, if you want to keep up with the reasoning argument, these are two widely shared opinions from influential writers. Second, because the most annoying aspect of the whole thing, and of both pieces, is the insistence on talking about human intelligence and the assumed, un-debated view that AGI is something “we” should be focused on. The much better frame for LLM technology, one that is less dangerous, less resource-intensive, and less pushed by crazies, is this one by Karen Hao:
AI is such an interesting word because it's sort of like the word transportation in that you have bicycles, you have gas guzzling trucks, you have rocket ships, they’re all forms of transportation, but they all serve different purposes and they have different cost benefit trade-offs.
And to me the quest to artificial general intelligence has the worst trade-offs because you are trying to build fundamentally an everything machine, but ultimately it can’t actually do all of the things. So not only do you confuse the public about what you can actually do with these technologies, which leads to harm because then people start asking it for things like medical information and instead getting medical misinformation back.
But also it requires all of these things that I described, the colossal resource consumption, the colossal labor exploitation. But there are many, many different types of AI technologies that I think are hugely beneficial. And this is task-specific models that are meant to target solving a specific well scoped challenge, something like integrating renewable energy into the grid, weather prediction, drug discovery, health care, where you identify cancer earlier on in an MRI scan.
These are all very task-specific. It’s very clear what the use case is. You can curate very, very small data sets, train them on very, very small computers. And I think if we want broad based benefit from AI, we need broad based distribution of these types of AI technologies across all different industries.
Which brings me to this summary of my thoughts on LLMs right now.
- There will be a diversity of forms and sizes of AI. Whether you see them as a variety of animals, as modes of transportation, or as alien familiars, those are all better framings than “general human-like intelligence.”
- A significant if unmeasurable part of the “sentience” bursts of excitement is based, as neuroscientist Anil Seth proposed, on the fact that our “language exerts a particularly strong pull on these biases, which is why people wonder whether Anthropic’s Claude is conscious, but not DeepMind’s protein-folding AlphaFold.”
- On the other hand, even if there is no there there in terms of “true intelligence” in LLMs, that doesn’t mean there is nothing worthwhile happening. They might be stochastic parrots, but at that volume of training data, what emerges defies some of our expectations of what “super-sized autocomplete” should be able to do.
§ The future needs better storytellers: Designing with imagination in mind. “Rather than accepting the stories we’ve inherited about progress, success, or what’s ‘realistic’, this approach asks: what if we rewrote them? Together, we surface the dominant narratives shaping our systems and then play with alternatives that are more life-giving, inclusive, and just.”
Sentiers is made possible by the generous support of its Members and
the modern family office of Pardon.
Futures, Fictions & Fabulations
- When the future stops moving forward “Mike Mills directed it, and Saoirse Ronan stars as someone cycling through the mundane anxiety of daily life—waking up, brushing teeth, working in sterile offices, coming home, repeat. It could be a scene from Groundhog Day or Eternal Sunshine of the Spotless Mind, but it's actually about a 1977 song that somehow captures exactly how 2025 feels. The video doesn't try to recreate the late seventies. It just uses Byrne's lyrics about tension and nervousness as a lens for contemporary life.”
- How merging creative design and futures at Defra led to collaborative strategic insights. “…developed a new horizon scanning capability and emerging trends and global megatrends outputs. Unique approaches and methods were used in this work, notably a highly diverse horizon scanning practice, the use of ‘future artefacts’ to bring emerging trends to life and developing a methodology for determining the origin points of global megatrends.”
- Where is the social internet taking us? “Through a combination of in-depth expert interviews, internal workshops and desk research we identified several areas of concern, as well as optimism as interviewees shared their views on what public service media could be doing to remain relevant and make a positive contribution to online communities.”
Algorithms, Automations & Augmentations
- Real TikTokers are pretending to be Veo 3 AI creations for fun, attention. “Among all the AI-generated video experiments spreading around, I’ve also noticed a surprising counter-trend on my TikTok feed. Amid all the videos of Veo-generated avatars pretending to be real people, there are now also a bunch of videos of real people pretending to be Veo-generated avatars.”
- Amazon is reportedly training humanoid robots to deliver packages. “The robots would be driven around in Rivian vans to jump out and drop packages at homes.” These days it takes very little imagination to see them with ICE badges, carrying someone out instead of a package in, doesn’t it?
- AI therapy is a surveillance machine in a police state. “The internet has been a surveillance nightmare for decades. But this is the setup for a stupidly on-the-nose dystopia whose pieces are disquietingly slotting into place.”
Built, Biosphere & Breakthroughs
- Google battling ‘fox infestation’ on roof of £1bn London office. I’m with the furry ones on this one! “The vulpines have taken over the rooftop garden of the new ‘landscraper’ in King’s Cross and had an impact on construction – although the company stressed it was ‘minimal.’”
- UK government harnesses Gemini to support faster planning decisions. What’s the German word for “Nice!” and “Gah!” all in one? “Extract, built with Gemini, uses the model’s advanced visual reasoning and multi-modal capabilities to help councils turn old planning documents—including blurry maps and handwritten notes—into clear, digital data, speeding up decision-making timelines for council staff.”
- World Bank ends its ban on funding nuclear power projects. “The decision, a major reversal, could help poorer nations industrialize, cut planet-warming emissions and boost U.S. competitiveness on next-generation reactors.”
Asides
- Humpback whales are approaching people to blow rings. What are they trying to say? Classic good cop bad cop: “In contrast to the orcas’ aggressive behavior, researchers say the humpbacks appear to be friendly, relaxed, and even curious.”
- Bill Atkinson, architect of the Mac’s graphical soul, dies at 74. “Creator of MacPaint, HyperCard, and pull-down menus shaped modern computing.”
- Why is everyone getting their tattoos removed? I’ve been predicting this for 15 years ;-). “We speak with the patients going under the laser, the tattoo-removal technicians whose business is booming, and the tattoo artists whose work is being erased to understand how something so permanent became so ephemeral.”