I really wish Google hadn’t purchased DeepMind (though admittedly the company would probably not be where it is if that were the case); it’s like a huge ‘yes but’ permanently hanging over their achievements. “Brilliant, yes, but what will Google do with this?” Still, I’d say they are the most impressive AI company out there (point me to others if you think I’m wrong), and in this article/interview, Douglas Heaven explains how AlphaGo led to AlphaFold, what a massive advance it is, what they are doing with these tools, and where DeepMind and Hassabis want to go next.
You can read this as a purely tech piece, but it’s also useful to focus on the protein and protein folding parts, and marvel at how they work, the complexity of their interactions, and how much we still don’t know.
The catch is that it’s hard to figure out a protein’s structure—and thus its function—from the ribbon of amino acids. An unfolded ribbon can take 10^300 possible forms, a number on the order of all the possible moves in a game of Go. […]
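For a sense of where a number like 10^300 comes from, here’s a quick back-of-the-envelope sketch (my illustrative assumptions, not the article’s: roughly 10 plausible conformations per amino acid over a chain of about 300 residues; real estimates, like Levinthal’s paradox, use different figures but reach similarly astronomical counts):

```python
# Illustrative combinatorics: if each residue in a chain can adopt
# a handful of conformations, the number of possible folded forms
# grows exponentially with chain length.
conformations_per_residue = 10  # assumed, for illustration
chain_length = 300              # assumed, roughly protein-sized

total_forms = conformations_per_residue ** chain_length
print(f"~10^{len(str(total_forms)) - 1} possible forms")  # → ~10^300
```

The point isn’t the exact figure, it’s the exponential blow-up: no brute-force search can enumerate that space, which is why an intuition-mimicking approach like AlphaGo’s was an appealing fit.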
“I was thinking about what we had actually done with AlphaGo,” says Hassabis. “We’d mimicked the intuition of incredible Go masters. I thought, if we can mimic the pinnacle of intuition in Go, then why couldn’t we map that across to proteins?” […]
Over the past year, AlphaFold2 has started having an impact. DeepMind has published a detailed description of how the system works and released the source code. It has also set up a public database with the European Bioinformatics Institute that it is filling with new protein structures as the AI predicts them. The database currently has around 800,000 entries, and DeepMind says it will add more than 100 million—nearly every protein known to science—in the next year.
Also at DeepMind: Predicting the past with Ithaca.