Taste is the new intelligence ⊗ Using ChatGPT won’t make you dumb ⊗ AI first puts humans first

No.364 — The future of human health ⊗ Ten truths I’m holding about futures ⊗ A new social contract for the Age of AI ⊗ A lush tour of Fallingwater

Leo and Diane Dillon’s art is impossible to pin down, but this detail from the 1972 cover for Barefoot in the Head, by Brian Aldiss, captures some of the avant-garde surrealism at its heart. The full artwork is reproduced in Adam Rowe’s Worlds Beyond Time - find it or Rowe’s art newsletter here.

Taste is the new intelligence

Great piece by stepfanie tyler, where she argues that in an age of information overload and AI content generation, taste—defined as discernment and the ability to filter what matters—has become more valuable than traditional intelligence. While we once valued those who accumulated the most knowledge, what now distinguishes us is our ability to curate meaningful content and experiences in a world of infinite options. This form of intelligence involves protecting your attention from algorithmic manipulation, developing coherent personal standards across different domains, and recognizing quality without being swayed by trends or viral content.

tyler presents taste as a skill that can be cultivated through intentional consumption, patience, and self-awareness. It’s described not as superficial preference but as internal coherence—a throughline that connects your choices across different aspects of life. Kind of what I’ve called the red string, a way of making sense of my different projects and jobs through the years. It’s, all at the same time, what I enjoy doing, a lot of the value in this newsletter, and sometimes the hard thing to explain. Why is there a link to “Star Wars Lofi” next to one on “The book cover trend you’re seeing everywhere” at the bottom of this issue? It’s not “futures thinking,” it’s just… my taste? Hard to put in one line on the homepage or give as a two or three word answer when introducing myself.

I try not to have AI-only issues, and this was going to be my non-AI piece, but then I got to the end and tyler mentioned she’s been “accused” of writing with AI. In it’s my party and i’ll use AI if i want to, she explains her thinking on the use of this “tool.” Also worth a read. In one of those serendipitous connections I love, it connects quite well with the piece(s) below, with “engaged thinking + AI” vs “quick prompt + AI.” In this case tyler is thinking deeply about her ideas, she’s engaged, and then streamlines her writing process with an LLM. Could she have reached different (better?) insights by working five times longer on her own? Perhaps. Did she reach other insights because she could think more fluidly? Also perhaps. That’s where we are, figuring this out.

Making a parallel with other technologies, how would Gibson’s Neuromancer be different if, instead of using a typewriter, he wrote it all by hand or with a word processor? The one that exists and is enjoyed is the one he wrote, how he wanted to write it. tyler’s pieces are the ones she wanted to share, written how she wanted to write them.

You curate your life whether you mean to or not. Your feed, your home, your bookshelf, your calendar—all of it reflects how you see. And increasingly, we’re being asked to see through systems that don’t care about coherence—they care about clicks. […]

But taste requires subtraction. It means not participating in every viral moment. It means not resharing something just because it’s getting attention. It means opting out of the churn. […]

Curation is care. It says: I thought about this. I chose it. I didn’t just repost it. I didn’t just regurgitate the trending take. I took the time to decide what was worth passing on. […]

Good taste isn’t restrictive. It’s expansive. It allows you to contain multitudes without becoming incongruent. […]

This is the difference between eclectic and scattered. Between multidimensional and messy. Taste is what gives your multitudes a spine.

Using ChatGPT won’t make you dumb (unless you do it wrong)

Alberto Romero actually read that MIT study everyone’s been talking about and has written up his understanding of it, the authors’ conclusions, and his own interpretations. You should have a read, but super quickly: it’s a small study, the final stage included only 18 people, using only an LLM is not good, and thinking/learning about a topic and then using LLMs has some benefits. “AI isn’t harmful but helpful if you engage your brain often enough and hard enough.”

I’ve mentioned several times, here, in writing elsewhere, and talking with various people, that in my opinion what we lose by using AI a lot could at the same time be the early stages of new ways of reading, writing, and thinking. As long as you engage your brain. Obviously, if you prompt something rather simply and paste it into whatever—like those lawyers who got caught citing non-existent court cases—you’re not exercising much of anything, and that’s bad. If you’ve done research beforehand, prompt precisely, review, and exchange with the LLM, it’s another thing. Is that other thing better than not having an LLM? I tend to say yes, that part of the study tends to say yes, but we still don’t really know.

This is critical for the education system. It means that a non-professional writer’s brain may not be the best writing tool for them, so they’re incentivized to use ChatGPT. This increases their reliance on an external tool, getting substantial gains in the short term (higher grades) at the expense of a long-term cognitive deterioration (lower mental skill). […]

“AI is a tool, and like any other, it should follow the golden rule: All tools must enhance, never erode, your most important one—your mind. Be curious about AI, but also examine how it shapes your habits and your thinking patterns. Stick to that rule and you’ll have nothing to fear.” […]

If you rely heavily on AI, you’ll get dumber: “AI tools, while valuable for supporting performance, may unintentionally hinder deep cognitive processing, retention, and authentic engagement with written material. If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership over it.”

Also → In Is ChatGPT really rotting our brains? Anne-Laure Le Cunff has roughly the same comments on the study. Worth a read for her five practical tips to keep your brain in the loop. Doug Belshaw wrote About that MIT paper on LLMs for essay writing… a couple of weeks ago. He also has some comments on the methodology, draws our attention to the paper’s “uncritical reference to ‘Cognitive Load Theory,’” and to an article at the Times mentioning that the study has not been peer-reviewed.

AI first puts humans first

Tim O’Reilly challenges the prevalent interpretation of “AI first” as replacing human workers, arguing instead that true “AI native” approaches should augment human capabilities to solve previously impossible problems. He explains how his company uses AI to expand services (like translating content into more languages) while maintaining human involvement and fair compensation practices. He compares the AI transition to previous technological shifts like mobile computing, emphasizing that successful implementation requires reimagining processes rather than merely applying AI to existing workflows. His thesis is that AI native design should start with the AI interaction itself before considering interfaces, creating hybrid human-AI systems that leverage the strengths of both. O’Reilly concludes that properly implemented AI doesn’t eliminate human roles but enables “a more advanced, more contextually aware, and more communally oriented” approach that ultimately serves human needs better.

I agree with his argument; there are even a couple of points in there that I’ve made a number of times myself. I’d just like to note that he basically starts with (I’m paraphrasing), “we don’t want to put people out of work, we want to augment them… yet we do offer AI-generated translations, even if they aren’t as good as one edited and curated by humans.” I don’t know if they employ translators, but it does sound like (again, paraphrasing), “I don’t want to fire my people, but cutting out some outsourcing is just fine.” I’m not pointing to this as an accusation, but as a great example that—even when you want to do the right thing and think about this stuff—it’s complex and contradictions like that pop up. As with the other two pieces above; we’re figuring it out.

Now, with AI, we might ask AI to assess a programmer’s skills and suggest opportunities for improvement based on their code repository or other proof of work. Or an AI can watch a user’s progress through a coding assignment in a course and notice not just what the user “got wrong,” but what parts they flew through and which ones took longer because they needed to do research or ask questions of their AI mentor. An AI native assessment methodology not only does more, it does it seamlessly, as part of a far superior user experience. […]

We’re now programming with two fundamentally different types of computers: one that can write poetry but struggles with basic arithmetic, another that calculates flawlessly but can’t interact easily with humans in our own native languages. The art of modern development is orchestrating these systems to complement each other.

Sentiers is made possible by the generous support of its Members and
the modern family office of Pardon.

Futures, Fictions & Fabulations

  • The future of human health. How will we take care of our health in 2040? “How will we prioritize health in 2040? This question encompasses multiple dimensions and stakeholders, necessitating a comprehensive strategy to address personal health, healthcare systems, and the health environment from a One Health perspective.”
  • Global Economic Futures: Competitiveness in 2030. “Explores the future of competitiveness through the interaction of geopolitics and regulations – two drivers reshaping the environment for countries and businesses alike. It presents four scenarios for the future of competitiveness in 2030, a data-driven assessment of exposure across 12 sectors and a set of strategies to help decision-makers navigate an increasingly fractured and fast-evolving global landscape.”
  • Ten truths I’m holding about futures. “Sometimes contradictory, sometimes complementary, definitely not comprehensive.”

Algorithms, Automations & Augmentations

  • Introducing CC Signals: A new social contract for the Age of AI. “Based on the same principles that gave rise to the CC licenses and tens of billions of works openly licensed online, CC signals will allow dataset holders to signal their preferences for how their content can be reused by machines based on a set of limited but meaningful options shaped in the public interest. They are both a technical and legal tool and a social proposition: a call for a new pact between those who share data and those who use it to train AI models.”
  • IBM and ESA launch AI model with ‘intuitive’ understanding of Earth. “The model was trained on 9 million samples drawn from nine different data types, including satellite images, climate records, terrain features, and vegetation maps. The broad dataset covered every region and biome on Earth. It was designed to minimise bias and ensure the model can be used reliably across the globe, the researchers said.”
  • Sketched Out: An illustrator confronts his fears about A.I. art. The illustrator is Christoph Niemann and it’s one of those scrolling The New York Times pieces, with lots of illustrations by Niemann. “My survival as an artist will depend on whether I’ll be able to offer something that A.I. can’t: drawings that are as powerful as a birthday doodle from a child.”

Asides


Leo and Diane Dillon always worked as a team on their art, blending their range of styles to form what they termed the “third artist”: an egalitarian synthesis neither one could achieve alone. Here’s an illustration panel for a 1968 paperback of Past Master, by R. A. Lafferty, as it appears in Adam Rowe’s Worlds Beyond Time.

Your Futures Thinking Observatory