Pause for Thought

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

These aren’t the ravings of a lunatic, but an open letter signed by many of the world’s leading entrepreneurs and AI researchers – including Apple co-founder Steve Wozniak, former US presidential candidate Andrew Yang, and, of course, Elon Musk – calling for a pause on artificial intelligence research.

For many, these concerns will seem a million miles away from their harmless dealings with ChatGPT. But as I wrote here back in 2018, AI technology is a risk worth taking seriously.

It’s instructive to return to that 2018 newsletter – not least because the current pace of progress suggests that AI experts were actually too conservative with their predictions:

“On average they believe AI will outperform humans in many activities in the next ten years: translating languages by 2024, writing high-school essays by 2026, driving a truck by 2027, working in retail by 2031, writing a bestselling book by 2049, and working as a surgeon by 2053.”

It’s only 2023 and GPT-4 can already perform complex tasks at close to human level in everything from mathematics and coding to medicine, law and psychology, without special prompting.

So what’s the policy response to the incredible innovation, disruption and hard-to-quantify risk of AI? Assuming it’s not the opening gambit in a policy game of 4D chess, it’s not to call for AI research to be paused – a simply unworkable idea (unless, like some, you’re willing to risk nuclear war with China to enforce it). Nor is it to ban ChatGPT, as Italy has just done over privacy concerns.

Better to focus on real issues and workable solutions. The less dramatic part of the letter calls for AI developers to work with policymakers to dramatically accelerate the development of robust AI governance systems. And I don’t just mean “AI ethics”, on which there has already been plenty of research. I mean work on actual alignment.

Leopold Aschenbrenner puts the case well, setting out both the potential problem and the solutions. In essence, governments should be spending far more money on this. The need for scalable AI alignment is so great that our efforts should rival Operation Warp Speed or the moon landing in ambition.

If all goes well, before too long we’ll be catching crumbs from the table.

Boxed In

Perhaps we should have got ChatGPT to plan UNBOXED. Despite claims to the contrary, Theresa May’s ‘Festival of Brexit’ has failed. The evaluation numbers have been fudged to claim success, relying on three television broadcasts – including a claim of “meaningful engagement” with over 6 million people simply because the festival featured in an episode of Countryfile.

As Damian Green, Conservative chairman of the Commons’ Culture Select Committee, has said in response: “As a proposed great national festival, it clearly did not engage mass public enthusiasm, partly because of its lack of an obvious focus.”

This was entirely predictable. In fact, we predicted exactly that back in 2021. More’s the pity, as it gives ambitious festivals a bad name. The Great Exhibition of 1851 inspired wonder in a generation of inventors, makers, creators and entrepreneurs. In contrast to people watching a short segment on a TV programme, almost a tenth of the entire population of Great Britain attended it in person, most of them returning again and again.

Building on a previous essay, our head of innovation research Anton Howes is putting the finishing touches to a report on how we could put on a Great Exhibition for the modern world. We’ll soon be going out to people for feedback and endorsements, so get in touch if this is something you’re keen to read.