City data commons for the climate crisis ⊗ Jane Jacobs versus the Kardashians ⊗ The slab and the permacomputer

This week →{.caps} City data commons for the climate crisis ⊗ Jane Jacobs versus the Kardashians ⊗ The slab and the permacomputer ⊗ How AI is reinventing what computers are

A year ago →{.caps} A favourite from issue No.148 was Professor Ruha Benjamin’s Mossman Lecture, Race to the Future? Reimagining the Default Settings of Technology & Society.

◼{.acenter}

Some pretty big personal news this week: I am now editor of Offscreen magazine! Over the next few months Kai Brach and I will work closely together on the upcoming issue 25 (set to arrive early February), after which Kai will slowly transition into more of an advisory role, giving me time to settle in comfortably over the next two to three issues.

What does this mean for Sentiers? If all goes according to plan: only good things. Offscreen is “an independent print magazine that explores critical perspectives on technology through earnest conversations,” which overlaps very well with the technology part of what I write in the newsletter, so my reading, thinking, and connecting in various networks will benefit both.

Basically, combining the two—in my day-to-day, not in the publications, which stay as they are—allows me to become a full-time publisher working on two titles. Since Kai was extremely generous in sharing all his tools and methods, and in collaborating on a slow transition, I’ll be able to do both at the level readers expect.

Feedback is always very welcome but that’s especially true right now. Comments, tips, suggestions, recommendations, and ideas are all welcome for both publications, and I’m very interested in hearing from long-time Offscreen readers. Contact me.

City data commons for the climate crisis

Excellent article at Branch magazine by Renata Avila and Guy Weress, proposing a bold vision of city data commons as a bulwark against the enclosure of citizen data and a tool for confronting the climate crisis.

The vast majority of so-called smart city projects have been pushed by large corporations with goals of automation and data collection for profit, purposefully or unwittingly (if we’re being generous) grabbing data which should, in the authors’ view and mine, be a public good. While cities often ‘don’t see’ civic data (they don’t track them, don’t legislate around them, and don’t pay attention to their citizens’ data), corporations do and position themselves to capture it.

Beyond this enclosure, all of that city-level data could be aggregated into a shared resource between cities, the better to understand, prepare for, and act on the climate crisis. Imagine cities asserting control over all the data created within their limits, building local resources to leverage that data for the good of their citizens, and then collaborating globally, pooling their learnings for the benefit of their collective populations.

Cities are where we live, and as procurers of the tech that surrounds us, city governments find themselves in a unique position. They are the custodians of personal and aggregate data from the largest human concentrations of more than half of the world, and this is especially critical in the context of the climate crisis. […]
It aspires to define city data as commons, instead of property, and enable a space—a data commons space as opposed to a “data marketplace” where collectives can access and get the benefits from high-quality datasets collected with public funds, including data about water quality, the environment, public transport and energy systems, all the data collected by privately-managed bike sharing systems, water sensors, and taxi platforms. […]
Cities are epicentres of knowledge of many kinds. Cities concentrate universities, practitioners and experts and, when a city procures data & tech driven solutions, consideration and priority should be given to processes that place the local knowledge above companies offering “in-a-box” solutions produced somewhere else.{.highlight}

Jane Jacobs versus the Kardashians

Steven Johnson riffing off of Chris Hayes’ On the Internet, We’re Always Famous, on celebrity versus ‘celebrity’ to a few people versus serendipity in cities versus, as I would frame it, the interest graph instead of the social graph. Are you following people because they are your friends or because they are well known? Are you paying attention to ideas materializing around you and following your interests? Johnson believes that even without Facepalm, there would be many of the same problems, that “the ‘pernicious effects’ of democratized fame were in our cards no matter what.”

Every single day on Twitter I stumble across probably at least a dozen clever or funny or provocative things that total strangers have shared, many with links leading off to longer articles or podcasts or videos. These are not op-ed columnists or television anchors; they’re folks who I would have had no way of eavesdropping on thirty years ago. And now they just drift into my consciousness, day after day, a constant source of discovery and serendipity. But they’re not stars or celebrities in my world; they’re peers.{.highlight} […]
I believe that a significant part of the values shift (sometimes called the “Great Awokening”) that we’ve seen over the past decade or two—starting with gay marriage, continuing through Black Lives Matter and MeToo and now trans-rights movements—is ultimately the consequence of the radical increase in these sidewalk-style stranger interactions, the ever-larger pool of people and experiences that we now have access to thanks to the Web and social media.

The slab and the permacomputer

Robin Sloan presents “three glimpses of the future of computing that all seem to ‘rhyme’,” followed by his vision of ‘slabs’ instead of ‘clouds’ and how we might be going from distinct computers to almost ambient ‘compute.’ A stage where computing would be like textiles, which used to be the high-tech industry and is now “just industry.” Sloan also goes into permacomputers, hypothetical machines built to last for decades or centuries. But I’d like to draw your attention to his mention of models “produced at great expense by many computers with very fast GPUs” and the resulting model files, which don’t require as much computing to run and which he collects. It’s something not included often enough in discussions about AI: the gap between the massive amounts of data and compute needed to train a model versus the modest resources needed to use the resulting model. This piece from a few issues ago had a good perspective on it.

I think these are glimpses of an accelerating reformulation of “computers”—the individual machines like my laptop, or your phone, or the server whirring in the corner of my office—into “compute”, a seamless slab of digital capability. […]
[I]t’s easy to imagine future permacomputers that rely, for some of their functions, on artifacts from a time before permacomputing. It would be impossible, or at least forbiddingly difficult, to produce new model files, so the old ones would be ferried around like precious grimoires…
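Sloan’s asymmetry between producing a model and merely running it can be sketched with a toy example. Everything below is hypothetical and stands in for the real thing: a trivial least-squares fit plays the role of expensive GPU training, and the saved weights play the role of the model file he collects.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- The expensive step: producing the "model file" (in reality, done
# once by many computers with very fast GPUs; here, a trivial fit). ---
X = rng.normal(size=(10_000, 64))           # training inputs
true_w = rng.normal(size=64)                # the pattern to be learned
y = X @ true_w + rng.normal(scale=0.1, size=10_000)
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # the "training" computation
np.save("model.npy", w)                     # the artifact worth ferrying around

# --- The cheap step: using the model file. Inference is a single matrix
# multiply, feasible on modest hardware long after training happened. ---
w_loaded = np.load("model.npy")
predictions = X[:5] @ w_loaded
```

The real gap is many orders of magnitude larger, but the shape is the same: training is a one-time, compute-heavy act, while the resulting file is comparatively small and cheap to run.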

More →{.caps} Co-writing an album with an AI. Sloan again, this time with some making-of details behind the new album he crafted with Jesse Solomon Clark and ‘centaured’ with OpenAI’s Jukebox.

[W]hen I talk to AI guys who are working on their model and they’re like “Oh, yeah, I tuned those parameters for like six months.” It’s just like a guitar pedal. They’re turning knobs, trying to get the system to perform the way they need it to.

How AI is reinventing what computers are

If you set aside for a moment all the baggage around AI hype, this is actually a good piece on how microchip needs have shifted from the requirements of the last few decades to those of AI, a shift that is also changing how computing evolves. “Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs [and GPUs] are designed for the high-volume but low-precision calculations required by neural networks” (Tensor Processing Unit and Graphics Processing Unit).
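A minimal numerical sketch of that precision trade-off, assuming only NumPy: the float16 cast stands in for what dedicated tensor hardware does natively.

```python
import numpy as np

# Neural-network workloads are dominated by large matrix multiplies that
# tolerate low-precision arithmetic, which is what TPUs/GPUs optimize for.
rng = np.random.default_rng(42)
a = rng.normal(size=(256, 256))
b = rng.normal(size=(256, 256))

exact = a @ b                               # "traditional" precise compute
approx = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float64)

# The low-precision result differs only slightly in relative terms:
# a small accuracy price for the throughput gains of dedicated hardware.
rel_error = np.abs(exact - approx).max() / np.abs(exact).max()
print(f"max relative error at float16: {rel_error:.4f}")
```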

AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for. […]
With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It’s a fundamentally different way of thinking. […]
For Bishop, the next big breakthroughs are going to come in molecular simulation: training computers to manipulate the properties of matter, potentially making world-changing leaps in energy usage, food production, manufacturing, and medicine.{.highlight}
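The “write rules” versus “learn rules” contrast in the excerpt can be shown with a deliberately tiny, hypothetical task: classifying numbers as “large” or “small.” Real systems infer millions of parameters rather than a single threshold, but the inversion is the same.

```python
# Traditional programming: a human writes the rule explicitly.
def classify_by_rule(x):
    return "large" if x > 50 else "small"

# Machine learning, in miniature: the rule (here a single threshold) is
# inferred from labelled examples instead of being written down.
def learn_threshold(examples):
    larges = [x for x, label in examples if label == "large"]
    smalls = [x for x, label in examples if label == "small"]
    return (max(smalls) + min(larges)) / 2   # midpoint between the classes

data = [(10, "small"), (30, "small"), (70, "large"), (90, "large")]
threshold = learn_threshold(data)

def classify_learned(x):
    return "large" if x > threshold else "small"
```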

Asides

  • 😍 📸 🧵 Perfection! I would so watch this! Wes Anderson presents the X-Men: a thread ✨ and 🎥 What if Wes Anderson Directed X-Men?
  • 🤯 🧱 Fantastic!! “Ok you’re going to have to trust me and just watch this — telling you anything else would spoil it.” Mindbending LEGO Sculpture. Also have a look at the guy’s whole channel, insanity.
  • 👏🏼 ☀️ 🗺 We mapped every large solar plant on the planet using satellites and machine learning. “We searched almost half of Earth’s land surface area, filtering out remote areas far from human populations. In total we detected 68,661 solar facilities. Using the area of these facilities, and controlling for the uncertainty in our machine learning system, we obtain a global estimate of 423 gigawatts of installed generating capacity at the end of 2018.”
  • 💨 🏡 🤔 Kind of sceptical but I’d get one of these once proven. Wind Turbine Wall. “The average American home uses a little over 10,000 kilowatt-hours of electricity per year. One of these walls would be enough.”
  • 🚞 🇪🇺 🤩 Trans Europe Express Trains Could Make a Comeback. “The new network would build on existing services to run new connections between Barcelona, Berlin, Amsterdam, Frankfurt, Rome, Brussels and Warsaw. Over the longer term, the network would expand to Sweden and Hungary and include night trains—overnight rail service, which has been the focus of growing interest across Europe as a lower-carbon alternative to air travel.”
  • 🇨🇳 📚 China is reinventing the way the world reads. “There are two things that make Chinese web novels distinct: the speed with which authors write, and the pricing model. Chinese web novels are supposed to be consumed while the author is writing. Every day, fans log on to the platform, find the latest chapter (usually updated on a daily basis) and pay for it. The cost is usually less than $1, but when a novel has thousands of fans and also thousands of chapters, the profits can be immense.”
  • 🇺🇸 ☀️ 🤔 👏🏼? Google’s New Green Campus Brings Sustainability to Silicon Valley. “To hit its goal, Google is relying on unorthodox procurement contracts and a grab bag of novel technologies such as lithium-ion battery storage, algorithms that predict wind patterns, and geothermal wells that drill into the Earth’s crust.”
  • ⚛️ Reads like the script for a season of 24. Radiant aims to replace diesel generators with small nuclear reactors. “California company Radiant has secured funding to develop a compact, portable, “low-cost” one-megawatt nuclear micro-reactor that fits in a shipping container, powers about 1,000 homes and uses a helium coolant instead of water.”

{.miscellany}

Your Futures Thinking Observatory