Reading and writing with AI

How we research, write, and read with AI—or not. Some tricks and recommended apps and services to leverage AI as a thinking assistant.

Image: A bearded middle-aged man reading and writing with AI. Created with Midjourney.

My friend Peter Bihr asked about (re)searching with AI and it sparked some thoughts. Here’s what he was saying.

Can we be real for a minute? Would you share if/how you incorporate ChatGPT in your work? Because I find it delivers such a brutally wide range of quality, from things that seem brilliant, like the “PhD equivalent” that Sam Altman sometimes teases, to others where it’s like the bad intern who just makes things up to get you off their back.

Since I can only spot the bad outputs in areas I know really well, it basically doesn’t save me any time but introduces a whole lot of risk, which is why I only use it for copy editing, a job it’s pretty good at, even though I don’t like its default style very much. But for research? It seems waaaaay too inconsistent. But I’m curious, maybe your experience is different?

Here’s my slightly expanded answer:

  • First, use Perplexity instead of SamGPT; it’s way more reliable for research, and more “auditable” since it’s much better at providing sources. It will sometimes include “bad” sources, but it doesn’t hallucinate, and the UI makes it easy to see what’s coming from where, to open up sources, and to validate/dig deeper. Its follow-up questions are usually useful.
  • If you are worried about hallucinations or false connections in what you get back, ask another model to fact check the reply. I sometimes do this when using Claude first: I take its reply, paste it into Perplexity, and ask directly, “please fact check this for me” (see the sketch right after this list for a scripted version). Bonus: the fact check is also a good way to learn more on the topic you were discussing with Claude (or another model). And/or ask for second and third opinions, see below.
  • To summarise or write, use Claude, which has a much better voice, in my opinion, and is just more fun to “collaborate” with. Level-up: when you’ve included everything you wanted in an article, use Lex as your editor and keep polishing. Ask for feedback; it’s part of the built-in prompts and can be ruthless. It also tends to just critique, not suggest text on its own, so you are the one writing, with a “smart” editor.
  • Next level, and worth another post in itself: use pretty much any online model (except Lex) with its API through the Msty desktop or web app. You pay per use instead of per month; in my case it ends up being roughly the equivalent of one subscription (≈$20/month).
  • There are also a couple of “aggregators” like You.com and ChatLLM, where you can use multiple models in the same interface for roughly the same price as one. You lose some UI/UX details, or more important things like Claude’s Artifacts, but for “regular chatting” it works.
  • In Msty you can use multiple models in side-by-side columns working on the same thing and compare. They are synced and answer the same questions, or you can “unhook” and finish your chat with the one that was working best for that inquiry. OpenRouter’s chat also does that, but the replies are displayed one after the other, which is annoying (it’s a dev tool, but the chat works like any other chat).
  • And of course the caveat/restriction for all of them: think about your usage and their electricity consumption. It’s a topic I’ve discussed often in the newsletter and something I think about every day. All of the above is for new tasks/augmentation; just keep doing your regular searches on DuckDuckGo and the like, and don’t use effing ChatGPT to ask what the weather is or to find you a website, please!
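
To make the fact-check and pay-per-use points above a bit more concrete, here is a rough sketch of that “second opinion” workflow run through OpenRouter’s OpenAI-compatible API instead of the chat UIs. The model IDs, the environment variable, and the prompts are assumptions, not a recipe; swap in whatever you actually use.

```python
# A rough sketch of the "second opinion" workflow, done through a pay-per-use
# API instead of the chat UIs. OpenRouter exposes an OpenAI-compatible
# endpoint, so the standard client works. Model IDs and prompts are assumptions.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)


def ask(model: str, prompt: str) -> str:
    """Send one prompt to one model and return its text reply."""
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content


# First opinion from the model I like to write with...
draft = ask(
    "anthropic/claude-sonnet-4",  # assumption: any model ID available on OpenRouter
    "Summarise the main arguments for and against 'knowledge rot'.",
)

# ...then a different model fact checks it, much like pasting into Perplexity.
check = ask(
    "perplexity/sonar",  # assumption: a search-backed model for the fact check
    f"Please fact check this for me and point out anything dubious:\n\n{draft}",
)

print(draft)
print("---")
print(check)
```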

Finally, and this could be yet another article: local models are easy(ish) to use but still way too slow, unless you have a GPU and lots of memory. However, learning how to set them up and work with them is roughly at the same difficulty level as learning HTML and CSS was 25-30 years ago, i.e. doable with a bit of time and some technical inclination. Ollama is great and you can use the models it downloads within Msty.
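
To show how low the barrier actually is, here’s a minimal sketch of talking to a local model from Python once Ollama is installed and running. The model name is an assumption; use whichever one you’ve pulled.

```python
# A minimal sketch of querying a local model through Ollama's REST endpoint
# (e.g. after `ollama pull llama3.2`). The model name is an assumption; use
# whichever one you've downloaded. Everything stays on localhost.
import requests

response = requests.post(
    "http://localhost:11434/api/chat",  # Ollama's local chat endpoint
    json={
        "model": "llama3.2",            # assumption: any model pulled locally
        "messages": [
            {"role": "user", "content": "In two sentences, why read slowly?"}
        ],
        "stream": False,                # one JSON reply instead of a token stream
    },
    timeout=300,
)
print(response.json()["message"]["content"])
```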

Reading

This brings me back to something I’ve been thinking about for years, so while we’re at it let’s also talk about reading. My archive is still not all moved to Ghost, but I just republished Dispatch 2, sent to members in 2019 (!!), Ideas & tools from my process. Since then I’ve changed almost every app I use (although the process is very similar) and lots of links are broken, but it’s still worth a read, imho. Here’s what I had to say (pardon the Ben Thompsoning long quote of myself):

This is something I’ve started noticing recently and find quite useful. A lot of people say they have a hard time reading books anymore; their brains are so used to streaming feeds that they can’t concentrate properly on long form, never mind whole books.

I’m not going to go into the various tips and tricks to develop a book reading habit (although some help, like starting small with 15 pages or 15 minutes every day), or how to make room for it (mercilessly cut the number of people you follow or make a super short list), and I’m not going to tell you to meditate (though it’s a good idea) to develop mindfulness, but mindfulness is roughly what I do more and more and what I recommend: start noticing how you are reading at that moment and try to switch modes.

For example: each morning I’ll go through a number of newsletters, a lot of them lists of “signals” similar to my own or perhaps weekly digests from a specific outlet, and I’ll open links in the background. When I hit one that requires more concentration, I’ll usually skip over it and get back to those few heftier reads separately.

Here’s where the modes come in: when I switch to the browser and start reading, I need to pay attention and specifically slow down my brain, almost like going from jogging to walking. Otherwise [I’ll be] scrolling through the article like it’s a feed of paragraphs and not paying close attention.

The same kind of thing happens when reading a book with more complex ideas; I need to slow my brain down even more and pay attention to what’s being said, not just understanding the words but the concepts.

All of this is pretty self-evident, but if you read a lot, especially different lengths and forms, it’s a useful practice to develop awareness, various speeds, and the mental switches between them.

Gliding down a feed efficiently and settling down in a book are both useful skills, even though some people try to convince us that only deep work / deep reading is important.

Also → I’m not talking about Focused and Diffuse modes of thinking, but they are very useful to know about too!

A few days ago I watched Beat Knowledge Rot: The 3 Types of Reading You Need in the Age of AI, where Nate B Jones discusses how reading strategies must evolve in the age of AI to combat information overload and the concept of “knowledge rot”—where AI regurgitates existing information without creating new knowledge. He presents three frameworks for effective reading and explains how to work with AI as a reading partner.

Unlike the Wall Street Journal, I do not think that AI is bad for reading; I think people who passively use AI are bad readers.

“Awareness reading” and “information retrieval reading,” as he calls them, work well with AI assistance: let the machines handle the scanning and fact-finding. But “connectome reading,” the kind that forms new neural pathways, is where we need to be fully present, without digital intermediaries.

Jones also suggests reading some books with AI; for example, in non-fiction, asking which chapters might fit your current research, or asking it to give more context on a more complicated chapter. Some books require deep reading end-to-end, while others might be only partially useful for our specific purpose.
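
As a sketch of what that “reading partner” question could look like outside the chat window, here’s a hypothetical version using the Anthropic SDK; the model ID, the table of contents, and the research topic are placeholders, not anything Jones prescribes.

```python
# A hypothetical sketch of the "which chapters fit my research" question,
# using the Anthropic SDK since Claude is what I write with. The model ID,
# table of contents, and topic below are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

toc = """
1. The attention economy
2. Feeds and the collapse of context
3. Deep reading as a practice
4. Notebooks, archives, and memory
"""  # paste the book's actual table of contents here

topic = "how note-taking systems support long-term thinking"

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: whichever Claude model you use
    max_tokens=400,
    messages=[{
        "role": "user",
        "content": (
            f"Here is a book's table of contents:\n{toc}\n"
            f"Which chapters are most relevant to my research on {topic}, and why? "
            "Tell me which ones deserve a full, slow read."
        ),
    }],
)
print(message.content[0].text)
```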

At a higher level, I even see this as an added element of my triage—by the way I spoke about some of this during my chat with Mitch. Here’s how I consider everything I’m pointed to or discover:

  • Uninteresting. Easy enough, close tab.
  • Good but doesn’t require reading right now. Bookmarked in Raindrop.
  • Of interest, but not for client work or Sentiers, more as a citizen. Lots of local news is like this. Saved in Reader to read at some point, more for information than for thinking.
  • New: when it looks like something I’d really like to know more about, but I just don’t have the time. Sometimes I’ll have it summarised and read that, but I’m planning on setting up an automation for this use case: summarise with the important points and save it in my Obsidian vault (see the sketch after this list). It’s where I do my writing and, just like everything that goes into Reader and gets archived post-reading, I want to have it accessible when I’m thinking about something.
  • Very good, I want to read this! Saved in Reader.
  • Nice! This might go in the newsletter. Saved in Reader and shortlisted for newsletter writing on Fridays.
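
Since I mentioned it in the “New” item above, here’s a rough sketch of what that summarise-and-file automation could look like. An Obsidian vault is just a folder of Markdown files, so “saving a note” is simply writing one file into it; the vault path, model ID, and the already-extracted article text are all assumptions.

```python
# A rough sketch of the summarise-and-file automation from the triage list.
# An Obsidian vault is just a folder of Markdown files, so saving a note means
# writing one file. Vault path, model ID, and input text are assumptions.
import os
from datetime import date
from pathlib import Path

from openai import OpenAI

VAULT_INBOX = Path.home() / "Obsidian" / "Sentiers" / "Inbox"  # hypothetical vault folder

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)


def summarise_to_vault(title: str, url: str, article_text: str) -> Path:
    """Summarise an article and save it as a Markdown note in the vault."""
    reply = client.chat.completions.create(
        model="anthropic/claude-sonnet-4",  # assumption: any capable model
        messages=[{
            "role": "user",
            "content": (
                "Summarise this article in one short paragraph, then list the "
                f"3-5 most important points as bullets:\n\n{article_text}"
            ),
        }],
    )
    summary = reply.choices[0].message.content

    # Minimal note: frontmatter with the source, then title and summary.
    note = f"---\nsource: {url}\nsaved: {date.today()}\n---\n\n# {title}\n\n{summary}\n"
    VAULT_INBOX.mkdir(parents=True, exist_ok=True)
    note_path = VAULT_INBOX / f"{title}.md"
    note_path.write_text(note, encoding="utf-8")
    return note_path
```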

Ok, that’s quite enough for an “expanded LinkedIn comment” (!!). Overall, AI can be our research assistant for the fast stuff, clearing the noise so we can identify what deserves our slow, focused attention. The new item above is like that. The electricity consumption worry I mentioned is about environmental responsibility, and everyone should consider that impact, but read it also as being about intentionality. Every AI query should earn its carbon cost (and the thinking we forgo) by genuinely improving how we think, not just helping us process more information faster.

The goal isn’t to read more per se; it’s to read better, with intention, picking what gets our deep attention versus what gets processed efficiently.
