Preface

Tony Blair
Founder, Tony Blair Institute

The ambition of my Institute is to become one of the foremost places in the world where policy makers and change makers can come together to discuss, debate and decide the key issues around the Technology Revolution – the 21st Century equivalent of the 19th Century Industrial Revolution.

As Patrick Collison says in his Foreword to these essays, this revolution is transformative and extraordinary in its consequences and impact, and it will – and should – dominate our thinking in the years to come.

These essays are just a small illustration of what the power of technology can achieve. No one doubts technology can also have negative effects. But the critical point is that, for good or ill, it is changing the world. It is the defining real-world event of our time, affecting our people and the world over. The challenge for politics is to understand it, master it, and harness it for good.

Yet too often policymakers either ignore its importance or focus on questions – such as privacy – that are important but limited, when the real debate should be about how we use technology to usher in a new advance for humankind.

My thanks to all those contributing to this collection. The UK in particular has a tremendous opportunity in this field. But we have to act fast to seize it.

Foreword

Patrick Collison
Co-founder, Stripe

“The consequences for human welfare involved in questions like these are simply staggering: once one starts to think about them, it is hard to think about anything else.”

This was Robert Lucas’s verdict when he turned his mind to questions of long-term economic growth.

The good and the bad news is that many of the most important considerations that will determine the long-run rate of economic growth are not the foremost policy issues of our day: how we organize and practice science; the mechanics around the migration of certain highly skilled populations; how reputation and prestige shape career choices and talent allocation; certain prosaic aspects of government administration; and many more questions besides.

This omission is bad news in an obvious way: many dimensions of these problems are underappreciated and underexplored. Too few politically relevant constituencies focus on these avenues as routes to greater prosperity.

But it’s also good news. Precisely because these questions are often neglected, stalemate shouldn’t be the assumed equilibrium. Underappreciated areas may be more amenable to progress than those that are properly valued. On matters that will influence long-term economic growth, a few ambitious individuals, policymakers, and agents of change might be able to make a significant difference.

The industrial and scientific revolutions that blossomed in the UK were the product of a deliberate ambition, an emphasis on technical and scientific understanding, a willingness to contemplate the unusual, an appreciation for experimentation in institutions and incentives, a dissatisfaction with the status quo, and an internalization of the basic truth that improvements to our material state are both possible and urgent. These attitudes, which are not the default cultural orientation in any society, helped initiate a durable, multi-century trajectory that propelled standards of living to heights previously unimagined.

All evidence suggests that this is also the mindset that will help us discover the best ways to improve the society that we live in today. You’ll find plenty of it embodied in the essays that follow.

Executive Summary

Over the next decade, innovation has the potential to transform almost every aspect of our lives for the better, from the healthcare we receive and the food we eat to the way we travel and how we interact with public services.

At the same time, many economists believe we have experienced a great stagnation. Over the past decade, growth in total factor productivity has stalled. It also appears that the link between investment in research and breakthrough discoveries is weakening. An influential study notes “the number of researchers required today to achieve the famous doubling of computer chip density is more than 18 times larger than the number required in the early 1970s.” Researcher productivity appears to be falling across a range of fields too.

Early in his tenure, Prime Minister Boris Johnson set out an ambition for the UK to become a ‘science superpower’ and pledged to improve the UK’s ability to attract scientific talent while increasing funding for research and development. He was right to recognise science and technology not only as an area of strength for the UK, but also as an area of growth.

The UK has many strong fundamentals in science and technology. We are home to DeepMind, the Francis Crick Institute and the “Golden Triangle” of research institutions in Cambridge, London, and Oxford. London is the European capital of venture capital, having received a quarter of all European VC investment in 2020.

However, we cannot be complacent. The rest of the world isn’t. Europe, for example, is not only increasing investment in deep tech, but is also reforming the outdated tax treatment of stock options that has long held its entrepreneurial sectors back.

Outside of the EU, there is an opportunity for the UK to innovate not only in terms of technology, but also in policy. We can identify and test new models for funding research and regulating emerging technologies. The case for such an approach goes beyond attracting international investment and creating new jobs at home. Rather, we can become a model for the rest of the world to follow and accelerate the global pace of innovation.

This matters. The past year has shown how interconnected the world is. New ideas to reduce emissions, to cut agricultural land use, and to track and treat new diseases are in everyone’s interest.

The ideas set out in this collection are designed to supercharge the innovation process and ensure it translates into tangible benefits for the public. They aim to tackle the problem at every stage: developing the talent pipeline by raising the status of invention and entrepreneurship, taking an active pro-migration approach, and applying existing technology to make every citizen’s interaction with the public sector as seamless as possible. The contributions in this collection draw on expertise from a range of fields, from genetics and AI to economic history.

We look at the problem of scientific bureaucracy and propose ways to not only streamline funding processes, but also to ensure that the most promising early-career researchers have greater opportunities to lead laboratories. A major problem we identify is the lack of a scientific roadmap to help guide collaboration between researchers and identify the most promising areas to fund. Genomics and AI warrant special focus and so we explain how creating a world-class centre for applied multi-omic research and a national research cloud would aid their development.

But if the public is to benefit from science and innovation, then everything depends on translating ideas from university labs into viable businesses. Regulation can be a key problem. In many cases, it’s not clear what is and isn’t legal. In other cases, a complete lack of regulation can create uncertainty for investors and make consumers reluctant to try new services. Regulatory innovations such as sandboxes are a positive move, but the UK should be even more ambitious. We should aim to become the testbed nation, the first port of call for any entrepreneur wanting to test new innovations – from autonomous delivery drones to gene-edited crops.

The public sector can also influence innovation by opening up its procurement processes to startups. This is easier said than done. Incentives for procurement managers can create excessive risk aversion – nobody ever got fired for buying IBM – and without support from above, they will always fall back on tried-and-tested products. The exception to the rule, the US Department of Defense, works because there is a genuine political desire to innovate. We argue that the government should focus on the few areas of public spending where innovation is a major political priority; we suggest net zero and genetics.

Technology has the potential to revolutionise the way we interact with public services digitally. Expectations raised in the private sector are often not met by the public sector. We should look to Estonia as an example of what can be achieved. The country has user-friendly online systems to manage taxation, residency, identification, healthcare, road administration, and even voting. Since 2007, it has followed the “once-only principle”, under which citizens and businesses are required to provide information only once. This not only cuts down on administration for individuals and businesses, but also keeps their data more secure.

If the UK is to become a science superpower, it will require policymakers to have an open dialogue with technologists and adopt radical new ways of thinking about policy. This short collection, which brings together experts in disciplines as diverse as economic history, genetics, and AI, aims to show the way forward.

A Digital State
Upgrading ambitions from a one-stop-shop to a no-stop-shop

Philip Salter, The Entrepreneurs Network
Kirsty Innes, Tony Blair Institute

While bringing much of the world to a near-standstill, the pandemic accelerated the adoption of technological solutions to deal with its consequences. Governments were just as affected as businesses, with countries around the world – many for the first time – forced to deliver education online, offer medical appointments virtually, switch to online courts, or simply embrace technology to ensure the day-to-day business of parliamentary democracy didn’t grind to a halt.

When it comes to govtech (digital technology to improve public services), the UK has a solid track record of innovation. The much-emulated Government Digital Service (GDS) delivered the ground-breaking single government website Gov.UK – before losing its way. The current government also has grand ambitions, at least on paper, pledging in its 2019 manifesto to improve the use of data, and more recently committing to “One Login For Government”, a single sign-on for digitally enabled services that is “simple and safe to use, and available to everyone.”

A one-stop-shop for all government services would certainly be an upgrade to the status quo. According to GDS, there are currently more than 100 different places where citizens have to log in to use government services, ranging from Government Gateway for tax and benefits, to NHS Login for health, to a separate username and password for benefits claimed via the Department for Work and Pensions, and so on. It stands in stark contrast to Estonia, where citizens can easily prove their identity – even digitally, without a physical ID card – in their interactions with the state. Data in Estonia is seamlessly shared between government departments, saving over 844 years of working time for Estonians every year. Nearly all public services are available digitally, with tax forms even pre-filled with data the government already holds so that they can be filed in a matter of minutes.

Advanced digital states aren’t resting on their laurels, however. In education, pedagogy will increasingly be tailored to individual students based on their performance and preferences, so that no child is left permanently behind (or bored). In healthcare, genetic information is being used in conjunction with digital records so that doctors can deliver personalised medicine. And anyone who becomes unemployed can automatically be matched with jobs based on their skills and experience, or offered training based on the potential job opportunities in their local area.

On Australia’s Gold Coast, analytics have been used to predict hospital emergency admissions with an accuracy of up to 93 per cent. As well as ensuring a better service, this saves money by reducing overtime payments. Singapore, meanwhile, is using artificial intelligence for predictive healthcare, automating the time-consuming process of characterising a thyroid lump.

In the US, the Chicago Department of Health uses analytics teams to predict food store safety violations. Las Vegas’s health department turns to advanced AI technologies in deciding where to deploy its health inspectors, analysing Twitter for posts indicating food poisoning. And in Indonesia, tweets have been used by the startup PetaJakarta and the government to crowdsource details on flooding and focus rescue efforts.

In Finland, the AuroraAI programme will pre-identify life events to support children after compulsory education, recommending extra courses to them for upskilling and improving their employment prospects. And Singapore is using data to predict the type of skills the public sector will need in the future, helping to identify future talent as well as investigating why some people leave the public sector and what might induce them to stay.

Moving Towards Predictive Services

This is just the tip of the iceberg, with data being used in areas as diverse as crime, tourism, and the choice of library books, to deploy governments’ limited resources more efficiently and target individuals based on their specific needs.

One thread running through many of these examples is the move towards predictive (sometimes called proactive or anticipatory) services, where citizens are automatically offered services rather than being expected to apply for them. The future of the digital state isn’t just a one-stop-shop; it’s a no-stop-shop.

The potential for proactive services extends into every aspect of the relationship between the state and the individual. 

Imagine you’ve booked a trip abroad but your passport needs renewing because it expires in less than six months. The purchase of the flight could set off a series of automated actions: the Passport Office would pre-fill all the required information, simply sending a prompt for you to take a photo of yourself on your phone (with an AI tool on hand to check that it meets the criteria before you submit it); the visa appointment or process would be automated, with a message to your phone offering you appointment times in coordination with the embassy (and your calendar, if you choose to share it).
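The decision logic behind such a trigger is simple enough to sketch. The minimal Python below is purely illustrative – the data structures, thresholds and action names are invented assumptions, not real Passport Office systems:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Many destinations require roughly six months' passport validity on arrival.
VALIDITY_BUFFER = timedelta(days=182)

@dataclass
class Passport:
    holder_id: str
    expiry_date: date

def proactive_actions(passport: Passport, departure: date, visa_needed: bool) -> list[str]:
    """Given a flight booking event, decide which services to offer unprompted."""
    actions = []
    if passport.expiry_date < departure + VALIDITY_BUFFER:
        # The renewal form would be pre-filled from data the state already holds;
        # the citizen is prompted only for the one thing it cannot supply: a photo.
        actions.append("prompt_photo_for_prefilled_renewal")
    if visa_needed:
        # Offer embassy appointment slots instead of waiting for an application.
        actions.append("offer_visa_appointment_slots")
    return actions

# Example: the passport expires about four months after departure, so both
# the renewal prompt and the visa appointment offer are triggered.
p = Passport(holder_id="c-123", expiry_date=date(2022, 3, 1))
print(proactive_actions(p, departure=date(2021, 11, 15), visa_needed=True))
# ['prompt_photo_for_prefilled_renewal', 'offer_visa_appointment_slots']
```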

So how does the UK get to a one-stop-shop, let alone a no-stop-shop? First, we need everyone to have a digital identity. Without unique identifiers, data can’t follow us as users. This doesn’t need to be a physical ID card. It could be completely digital, with the experience feeling no different to logging into online banking.

This would be more secure than the current system in which data sits across numerous databases, with varying degrees of security, and which can be accessed by people without any record of the fact (in advanced digital states you can see precisely who has accessed your data and why).

While the big idea is to share data to make life easier for everyone, opt-outs could be built in for those who value privacy above all else. The UK already has a data ethics framework to deal with some of these issues, but it should be noted that advanced digital states offer more privacy and security than the status quo. The choice isn’t between having a digital identity or not. Instead, it’s between the current situation – dozens of identities, digital or otherwise, each containing slightly different sets of information about you – and a system where you can immediately prove specific attributes or facts about yourself. This would mean that each time you wanted to inform a government department about your age or income, you would not have to share all your information with that department all over again. Government work to develop GOV.UK accounts needs to be accompanied by clear communications and radical transparency about the current system, as well as the aims and operation of the new one.
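A small sketch makes the attribute-based idea concrete. In the toy Python below, a department asks the identity provider a yes/no question and never sees the underlying record; a real system would use signed credentials or zero-knowledge proofs rather than a trusted function call, so treat this only as an illustration of the data flow:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class IdentityRecord:
    """Held once, by the identity provider - never handed over wholesale."""
    date_of_birth: date
    annual_income: int

# Departments ask predicates; they receive an answer, not the raw data.
def is_over(record: IdentityRecord, years: int, today: date) -> bool:
    cutoff = date(today.year - years, today.month, today.day)
    return record.date_of_birth <= cutoff

def income_below(record: IdentityRecord, threshold: int) -> bool:
    return record.annual_income < threshold

record = IdentityRecord(date_of_birth=date(1990, 6, 1), annual_income=28_000)
print(is_over(record, 18, today=date(2021, 9, 1)))  # True - birth date never shared
print(income_below(record, 30_000))                 # True - income figure never shared
```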

We also need government departments to share data with each other. This is harder than it sounds, and something all digital states have struggled with. In addition to clear principles and best practice around data governance (an area where the Open Data Institute has delivered faster progress than central government initiatives such as the National Data Strategy), strong and determined political leadership is required. Firm commitments like mandating that e-services need to be delivered by a set date can help. This needs to be connected to the right skills and experience: departmental leaders need to gain the sort of product owner mindset more familiar to tech companies, and it would pay dividends to employ (and pay competitively for) proven product managers from the private sector.

As a very first step, adopting Estonia’s once-only principle would help concentrate minds. It would ensure that citizens, institutions, and companies only have to provide certain standard information to the authorities and administrations once. The UK is playing catch-up – the EU already plans to incorporate this principle by 2023.

As a founding member of the Digital 5 (D5), which has now grown to 10, the UK is nominally one of the world’s Digital Nations. Now is the time to turn our big ambitions into reality.

The UK Research Cloud
Treating cloud compute as a form of digital infrastructure to unlock more innovation

Seb Krier, Stanford University Cyber Policy Centre

Cloud computing provides the infrastructure required to develop and train AI algorithms. Yet the resources necessary to innovate with AI are virtually inaccessible outside the world’s biggest corporations and a handful of elite universities. Journalist and AI expert Will Knight gives the example of a large supermarket chain that attempted to deploy an AI system to predict sales. Although the system cut the number of errors by 75 per cent, it demanded vast computing resources – the central and graphics processing units (CPUs and GPUs) that act as the brains and processing power AI systems need for demanding computational tasks – and running it proved so expensive that it was no longer cost-effective. Indeed, research and deployment company OpenAI estimates that, since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially, doubling roughly every three and a half months. The high cost and limited availability of cloud compute thus heavily hampers research and development in academia, small and medium-sized enterprises, and civil society organisations.
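To see what that doubling rate implies, a few lines of arithmetic help. The sketch below assumes a steady doubling time of about 3.4 months (the figure behind OpenAI’s estimate) and simply compounds it; actual growth is, of course, lumpier than a smooth curve:

```python
# Illustrative compounding only, assuming a steady ~3.4-month doubling time.
DOUBLING_MONTHS = 3.4

def growth_factor(months: float) -> float:
    """How much training compute grows over a period at a steady doubling rate."""
    return 2 ** (months / DOUBLING_MONTHS)

print(f"{growth_factor(12):.1f}x after one year")        # ~11.5x
print(f"{growth_factor(5 * 12):,.0f}x after five years")  # ~205,000x
```

At that pace, a budget that comfortably funds frontier-scale experiments today is an order of magnitude short within a year – which is why access, not just efficiency, is the binding constraint for smaller institutions.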

All of this illustrates why treating cloud computing as a form of digital infrastructure, and facilitating access to it, could unlock more innovation and encourage more diverse applications of AI. The National AI Research Resource Task Force Act, passed by the US Congress in December 2020, seeks to do just that: the initiative aims to spur on and democratise AI-centred studies and applications by developing a national cloud for scientists and students to use. This means that typically expensive experiments could become available to a wider range of institutions and researchers. Championed by both universities and tech companies, the proposal is the brainchild of John Etchemendy and Fei-Fei Li, both co-directors of the Stanford Institute for Human-Centered Artificial Intelligence. Similarly, in the EU, the Gaia-X initiative aims to build a trusted, sovereign digital infrastructure facilitating cloud provision, data sharing, and interoperability.

Researchers are rightly concerned about the environmental impacts of data centres and cloud services. But moving to the cloud incentivises more efficient computation, which when combined with more energy efficient standards for data centre equipment can be extremely impactful: a recent study observes a large decrease in the energy consumption of data centre infrastructure systems, enough to almost offset the recent growth in total IT device energy usage. Importantly, advances in AI that lead to system efficiencies can also lower environmental impacts. Ekkehard Ernst, chief of the Macroeconomic Policy Unit at the International Labour Organisation, states that “well-trained AI routines, for example regarding electricity management or water consumption in agriculture already reduce the burden on the environment today and offer possibilities to address climate change effectively.”

A national cloud computing resource would also allow a wider pool of universities and labs to undertake ambitious research projects. For example, running and scrutinising compute-heavy algorithms such as GPT-3 (for language prediction) would become far less costly for academics. The Volkswagen emissions scandal of six years ago was discovered, in part, thanks to researchers being able to scrutinise car emission systems. But giving the same scrutiny to AI systems is impossible without access to significant compute resources. A UK-based research cloud would solve this, allowing academics to identify and remedy any unsafe or unfair algorithms being used.

Early-stage start-ups could also train algorithms on the cloud without having to depend on a particular provider. And it would incentivise some researchers to stay in academia and mitigate concerns about a hollowing out of the talent pool for public-interest AI research. This is important because high-profile departures from AI faculties have negative effects on students’ specialised knowledge, which is a crucial determinant of the success of AI start-ups.

Recruiting a Multidisciplinary Task Force

The forthcoming National AI Strategy represents an excellent opportunity for the UK to catch up with the US and propose the creation of a National Research Cloud. This will first require a multidisciplinary task force: Professor Daniel E. Ho recommended including “hardware engineers who will consider the computing infrastructure, computer scientists who will draw on their AI training to conceive of the best innovation environment, lawyers who can navigate the privacy, security, and IP thicket of liberating data to be offered as part of the cloud, and policy analysts and business school students who will consider the economic and business model of such an initiative.”

The UK version should prioritise three things: 

  • facilitating and subsidising access to state-of-the-art AI hardware at scale through credits

  • hiring the personnel necessary to deploy these technologies across the country at competitive rates

  • facilitating access to high-quality, large-scale datasets through standardisation and monetary incentives

So that consumers and companies can easily switch between providers, and to incentivise a healthy level of competition, APIs and mechanisms to promote technical interoperability should be established too – as happened in the financial sector with the Open Banking initiative. This is also an excellent opportunity for the UK and the US to cooperate on technical interoperability, data protection and cybersecurity standards, perhaps even creating a “Trans-Atlantic Research Cloud”. If designed well, this could be a gamechanger for the UK. In the words of Jerome Pesenti, VP of AI at Facebook and co-author of the UK’s major AI review, “we really need to look at how we get most out of the compute we have. This is the world we are going into.”
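To illustrate what technical interoperability means in practice, the toy sketch below borrows the Open Banking idea of a shared interface: if every provider implements it, a workload can switch providers with a one-line change. The interface and provider classes are invented for illustration, not drawn from any real cloud API:

```python
from typing import Protocol

class ComputeProvider(Protocol):
    """Hypothetical common interface a research cloud could standardise on."""
    def submit_job(self, image: str, gpus: int) -> str: ...
    def job_status(self, job_id: str) -> str: ...

class ProviderA:
    def submit_job(self, image: str, gpus: int) -> str:
        return f"a-{image}-{gpus}"
    def job_status(self, job_id: str) -> str:
        return "running"

class ProviderB:
    def submit_job(self, image: str, gpus: int) -> str:
        return f"b-{image}-{gpus}"
    def job_status(self, job_id: str) -> str:
        return "queued"

def train(provider: ComputeProvider) -> str:
    # Because this code depends only on the shared interface, switching
    # providers is a one-line change rather than a migration project.
    job = provider.submit_job(image="lab/model:1.0", gpus=8)
    return provider.job_status(job)

print(train(ProviderA()))  # running
print(train(ProviderB()))  # queued
```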

Procuring Innovation
Using public procurement more effectively to drive innovation 

Chris Haley, Global Entrepreneurship Network

Can we use public procurement to drive innovation more effectively? It’s a question many policymakers have asked over the years, prompted partly by the sheer size of the public procurement budget: £113 billion in 2020 for the UK, and on average 12 per cent of GDP globally.

Certainly, private-sector procurement is a crucial route for bringing innovations into firms whilst also building supply chains. Public procurement appears to be an important tool, not only for introducing innovations into the public sector (resulting in efficiency savings and improved public services), but also for supporting innovation in the private sector. Moreover, procurement can stimulate other forms of innovation: there is evidence that procurement contracts can encourage scientific publications beyond those used in firms’ internal inventions. These ideas can spill over into rivals’ inventions and, because they are often not protected by patents, they enable easy follow-on innovation.

However, despite multiple government reviews, facilitating genuinely innovative procurement remains an obstinately elusive goal. 

The first reason is that public bodies often set excessive risk and qualification criteria for suppliers, partly because of the (laudable) desire to avoid wasting public money, and partly because of the (less laudable) desire to avoid political blame if things go wrong. As a result, as the Office for Government Commerce reported more than a decade ago, public bodies have a “tendency to opt for low-risk solutions, low-margin players and mature technology [meaning that] innovation is not routinely welcomed or rewarded.”

This problem particularly afflicts startups. Demands for proof of an established track record are common, as are requirements for ISO certification or multi-million-pound indemnities. These demands reduce the risk for the procurer, but at the cost of tilting the playing field away from innovative young firms and in favour of established incumbents. Moreover, these risk thresholds are subject to upwards “creep”: every failed procurement process creates pressure to add another qualifying check to avoid making the same mistakes again, whilst any pressure in the opposite direction – to simplify and streamline the procurement process – is typically less focused.

Second, buyers are often stuck in a legacy mindset: their prior experience in tackling a problem shapes not only the solutions they will consider, but also their framing of the problem itself. Procurement requests are often specified in terms of legacy capabilities and legacy performance indicators, which tend to favour incremental innovation over radically new approaches. The legacy mindset often leads to overly prescriptive tenders. An NHS tender for improved fertility treatment, for example, might assume that the desired approach is to make existing in-vitro fertilisation (IVF) cheaper, rather than allowing for innovative non-IVF alternatives. The tender might thus demand that suppliers provide performance indicators like “cost per cycle of IVF” as opposed to approach-agnostic indicators like “cost per healthy live birth”.
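The difference between the two indicators is easy to see with invented numbers. In the sketch below, a tender scored on “cost per cycle of IVF” could never even consider the alternative approach, while an outcome-based indicator lets the two compete on what actually matters (all figures are made up for illustration):

```python
# Invented figures: comparing treatments on an outcome-based indicator.
treatments = {
    "IVF":                 {"cost_per_attempt": 5_000, "live_birth_rate": 0.25},
    "non-IVF alternative": {"cost_per_attempt": 3_500, "live_birth_rate": 0.20},
}

for name, t in treatments.items():
    cost_per_birth = t["cost_per_attempt"] / t["live_birth_rate"]
    print(f"{name}: £{t['cost_per_attempt']:,} per attempt, "
          f"£{cost_per_birth:,.0f} per healthy live birth")
# IVF: £5,000 per attempt, £20,000 per healthy live birth
# non-IVF alternative: £3,500 per attempt, £17,500 per healthy live birth
```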

Third, public procurers typically seek turnkey solutions – complete packages that are ready for use. This again favours mature solutions and established suppliers, even if those solutions are outdated or inferior. By contrast, startups and other innovative firms may have radically better core capabilities but only be able to offer part of the solution that is requested. This means that, in order to be accepted, they may need to build out their offer, for example by developing bespoke “front ends”, or partner with other firms.

So, what needs to be done?

First, we need to encourage an appetite for greater risk among public bodies. Innovation always entails risk, and hence there will sometimes be failures; if these are penalised excessively, either by internal forces or public responses, then we drive innovation out of the system. In order to make the case for increased innovation, we need to be clearer that the risk of not innovating includes missing opportunities to improve public services. We need more innovation champions within organisations, in order to counterbalance the bureaucratic demands for certainty, and to make the case that innovation risk should be actively managed, rather than minimised. Incentives are part of this: in any organisation, “not invented here” syndrome is often a rational consequence of incentive structures that dissuade staff from going out of their way to experiment or break with existing processes. 

We also need to shift attention to procurement’s systemic risks, as well as the risk of individual startups or contracts failing: the 2018 collapse and liquidation of British construction and facilities management company Carillion demonstrated that bigger is not always better, and that there is value – albeit difficult to quantify – in cultivating a diverse supply base.

Moreover, since risk-aversion sometimes stems from fear of public opprobrium, there may be particular value in trialling innovative procurement in specific areas – such as defence or climate change – where there is already a public acceptance of higher risk, or at least an understanding that different approaches are needed.

Second, we need a massive expansion of “alternative” procurement processes. Some of the most innovative firms in the private sector have concluded that there is no point in forcing their traditional procurement teams – the buyers of tables and chairs – to become innovation specialists, and so have instead created entirely parallel innovative procurement structures. Typically these structures include not only specialist staff who understand the needs of startups, but also a portfolio of innovative mechanisms, such as Challenge Prizes, where innovative solutions to pre-identified problems are financially rewarded, and Advance Market Commitments, where guaranteed markets are used to accelerate development – as happened with vaccines for Covid-19.

The US Department of Defense provides a useful case study of the benefits of opening up procurement processes to startups and innovators. In 2018, it experimented with granting researchers applying for funding more freedom to define their own projects, rather than following the US Air Force’s tight specifications. As a result, more startups and smaller businesses applied, and the experiment paid off for the military and civilians alike. Successful applicants went on to receive additional venture capital investment, obtained more patents, and subsequently won more Department of Defense contracts – widening the pool of suppliers for the US military and giving it access to a greater range of innovative products. It is an example of what is possible when we open up procurement to startups: more innovation, more investment, and improved public services.

Embracing Experimentation 
Experimenting with new ways to fund experiments

José Luis Ricón Fernández de la Puente, Independent Researcher
Joao Pedro de Magalhaes, University of Liverpool

When deciding on matters of science funding, the UK has long followed, or at least claimed to follow, the so-called Haldane Principle. In its original 1918 formulation, the principle states that decisions about scientific research should be left to those who know their subjects best – in other words, the scientists themselves rather than politicians. In practice, this has translated into “research decisions through peer review”, a principle enshrined in law as part of the Higher Education and Research Act 2017. The approach is similar in the US, where the National Institutes of Health grants most of its funding through peer review in its traditional form. It is also the method by which most grantmaking agencies operate, including UK Research and Innovation (UKRI, the UK’s main R&D funding agency, with a budget of over £8 billion).

At face value, using peer review is a sensible way to decide whom to fund, but concerns have recently been raised. Peer reviewers are not always as effective as one would assume at selecting “good research”. Even if they do identify and fund research that will be highly cited, that research may not translate into real-world applications down the line. One alternative is to identify specific researchers to fund, supporting them to work on open-ended research programmes for longer rather than via narrowly defined grants – an approach commonly referred to as “Fund people, not projects”. Learning from the past successes of such programmes will be key to designing selection processes that are not too risk-averse.

The alternative to peer reviewed grant proposals should not be politically appointed science czars imposing a national agenda from the top down. While there may be a case for new top-down funding mechanisms – the Advanced Research and Invention Agency (ARIA) is one example – it is generally possible to maintain the spirit of the Haldane Principle while still innovating and experimenting with fresh ways to allocate funding.

Grants or fellowships following the “Fund people, not projects” approach will typically involve financial support over a longer period of time, with fewer defined application requirements than other funding. The goal is not to enable a researcher to carry out a minutely specified piece of work but rather to let them pursue their own research programme without the constraints of a detailed application. This strategy is already practised by, among others, the Howard Hughes Medical Institute (HHMI) and BP’s Venture Research Unit.

The Lottery Model

A model with an element of randomisation, which was until recently a mere academic curiosity, is now being taken seriously by major science funding bodies. For example, the Swiss National Science Foundation (SNSF) will be using a lottery, in lieu of longer review processes, to decide which grants to fund when their assessed merit is similar (a toy version of this mechanism is sketched in code after the list below). This accomplishes several aims:

  1. It increases the types of projects that get funded, helping to overcome particular biases that the reviewers may hold. 

  2. It cuts down on bureaucracy. If the odds of getting funding are uncertain, there is less of an incentive to spend lots of time on crafting the perfect grant application. Rather, a short, minimally robust proposal will do. As a result, scientists will spend less time applying for grants.

  3. And more subtly, it moves us towards a system under which we experiment with science funding and public policy more broadly. Introducing a mechanism for lottery-based funding today opens the door to a meta-lottery tomorrow: randomly assigning applicants to be assessed by lottery, by peer review or by other approaches (such as algorithmically driven funding mechanisms), so that the outcomes of each can be compared.
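Here is that toy version: proposals scoring clearly above the funding cutoff are funded outright, while those whose assessed merit sits within a tie band around the cutoff enter a draw for the remaining slots. The scores, band width and slot count are all invented, and the SNSF’s actual procedure differs in detail – this is a sketch of the principle only:

```python
import random

def allocate(scores: dict[str, float], slots: int, band: float = 0.5,
             seed: int | None = None) -> set[str]:
    """Fund proposals clearly above the cutoff outright; draw lots among
    those whose assessed merit is within `band` of the cutoff score."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.__getitem__, reverse=True)
    cutoff = scores[ranked[slots - 1]]  # score at the last funded rank
    clear = [p for p in ranked if scores[p] > cutoff + band]
    pool = [p for p in ranked if cutoff - band <= scores[p] <= cutoff + band]
    return set(clear) | set(rng.sample(pool, slots - len(clear)))

scores = {"A": 9.1, "B": 8.8, "C": 7.2, "D": 7.0, "E": 6.9, "F": 5.1}
print(allocate(scores, slots=4, seed=42))
# A and B are funded outright; two of C, D and E win the draw; F misses out.
```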

The Portfolio Approach

A review of the literature on funding mechanisms makes one thing clear: no method or approach is a panacea. But by adopting a portfolio of novel mechanisms we can potentially accelerate the pace of innovation. While science-reform proposals tend to make sweeping statements or claim that one mechanism is better than another, it is difficult to swap an entire funding system for another. It is equally challenging to try several potential funding mechanisms at once. But by implementing incremental changes and measuring their outcomes, we can build a body of evidence upon which to base future reform.

Using this approach, we recommend the following change to UKRI: introduce an HHMI-style fellowship (the “Horizon Research Fellowship”), awarded to a small number (up to 20) of promising researchers every year. Selected researchers would receive generous funding for the next seven years, with the possibility of extending by an additional seven years after an interim review. They would not be evaluated on their completion of a series of pre-established research aims, but rather on their overall portfolio of scientific production. Moreover, as is the case with HHMI, candidates would not apply with a highly specific research proposal but rather with a multi-year research programme or agenda. Applications would be open to researchers at any stage of their career, and the selection committee would be asked to reserve a number of positions for young candidates.

Novel mechanisms and a bigger appetite for risk and experimentation are critical as we look to increase the bounds of our knowledge. We need to break free from a constrained and narrow view of research funding. We need less control, more randomisation, and more experimentation. This should apply greater competitive pressure to traditional mechanisms and institutions, as well as create new networks of individuals outside the strictures of science – all of which are essential if we are to continue to push the frontiers of our scientific understanding and discover innovations that will shape our collective future.  

Omics UK 
Creating a biomedical knowledge powerhouse

Saloni Dattani, Works in Progress
Henry Fingerhut, Tony Blair Institute

There’s a quiet revolution underway in the biomedical sciences. The cost of sequencing a human genome has fallen an estimated 10,000-fold in the last 15 years. New gene therapies and immunotherapies, a rapidly growing field, can now target specific disease-associated genes and proteins with extraordinary precision. Algorithms that predict the structure of proteins from their sequences alone have reached accuracy comparable to that of careful experimental methods.

Our ability to understand the structures of genes and proteins at such a precise level is slowly being translated into an improvement in many aspects of human health. Sequencing can enable doctors to diagnose genetic disorders at birth and potentially provide life-saving treatment. And the development of specific antibodies has demonstrably reduced the illness faced by patients with various cancers and immune disorders. Antibody therapies are now being successfully used to treat coronary heart disease, which is estimated to be the leading cause of death worldwide. This progress would not have been possible without the genetic studies that identified specific proteins that were commonly disrupted in families with these diseases.

Research will undoubtedly uncover even more potential therapeutics, but scientists face a number of obstacles. The genes responsible for various diseases are tricky to identify and the mechanisms by which they cause disease are difficult to decipher. A key problem is that we lack complete data from different biological sources in the same patients, such as their DNA, gene expression, biomolecules and samples of damaged tissues. Without these, it is more challenging to understand how disease develops at a molecular level, and so we have fewer options in terms of where we can target treatment. Solving this is one major goal of multi-omics, which incorporates data from various layers of omics (i.e. genomics, transcriptomics, proteomics, metabolomics, and so on) to understand how these cogs interact with each other and regulate the human body.

For years, multi-omics research has faced several major roadblocks:

  1. Collection: It has been difficult to acquire multi-omics data for sufficiently large samples, due to the cost of the technologies involved. Omics studies measure thousands of biomolecules and biomarkers, making it crucial to have large numbers of participants to avoid being misled by chance differences in some of these biomolecules. 

  2. Standardisation: It has been difficult to standardise the data that has been collected. Most multi-omics research comes from pooling together data from different types of omics or is limited to data from particular organs and tissues only. These multiple sources often mean there are large differences in the conditions under which data are collected, prepared, stored and measured, which have to be accounted for by researchers. 

  3. Aggregation and Collaboration: There are currently only a few centralised repositories available that aggregate multi-omics data, and there is a lack of collaboration or sharing of best practices between researchers who work with the different types of omics data.

In combination, these constraints reveal the enormous benefits that would arise from a large-scale effort to collect and share multi-omics data from a consistent set of patients and biological samples, with a high degree of curation and standardisation. Given the falling costs of biomedical research and the successes of other collaborative efforts to collect biological data – many of which were established in the UK – the benefits of developing a multi-omics project at a national level are clear. This is the motivation for establishing a multi-omics research centre, Omics UK, that would be the next step in an ambitious agenda for public research in the biomedical sciences.

There is precedent in the new era of biomedical research ushered in over recent decades. The Human Genome Project, for example, was a moonshot programme that successfully identified and sequenced the whole human genome. The project was notable not only for the scientific advances at its core but also for the scale of international collaboration it involved, the new public-sector structures that were developed to support it, and the principles that were pioneered to enable large-scale data sharing between researchers. Together, these features strengthened coordination between researchers, ensured that data was curated to a high level of quality, and protected the genome from being monopolised by private patents.

Building on this success, the biomedical community extended their ambitions to fully map comprehensive libraries of other important biomolecules. Just as the Human Genome Project mapped the entire set of human genes, the Human Proteome Project is set to map all proteins encoded by genes and the Human Epigenome Project to map physical markers that regulate the expression of genes. They have already shown rapid success, with the Human Proteome Project mapping 90 per cent of the human proteome in just ten years. Once more, these projects have brought together academic and government research organisations across borders, including the UK’s Wellcome Sanger Institute, the private company Epigenomics AG, and France’s National Centre for Genotyping.

These projects lay the foundations for advances in the future of applied biomedicine. With maps of the genome, proteome, and epigenome in hand, researchers will be able to more easily identify biomarkers for disease and design targeted treatments. In the UK, several programmes have been organised to use these tools to advance precision medicine in the coming years. The ambitious Genome UK programme comprises the 100,000 Genomes Project, which carries out large-scale genotyping, and the NHS Genomic Medicine Service, which sequences the genomes of patients and DNA from cancer cells as part of routine care, providing data that researchers can use to develop targeted cell and gene therapies.

Multi-omics research is the next step, enabling researchers to uncover the connections among the genome, biomolecules, cellular processes and the broader environment, in both healthy and diseased contexts.

Omics UK would be a centre for this research: collating clinical data from patients and applying the foundational work carried out during the Human Proteome Project and Human Epigenome Project into knowledge to understand, predict and treat diseases. Most importantly, an Omics UK body would take a systems approach, uniting researchers in different fields of biomedical science, enabling data collection and sharing, as well as joint analysis to understand how distinct cellular structures and processes interact to affect disease. The UK Biobank has laid the basis for this important work, future-proofing against long-term research needs by collecting and storing biological samples that can be used not only to generate genomic data but crucially to also link genomics to hormone, metabolite and protein levels. Omics UK would provide a unifying structure and mission to carry out the multi-omic research enabled by the UK Biobank samples.

At its core, Omics UK would enable researchers from diverse areas to collaborate and share data with each other, with data governance practices that streamline the process. This would include curating biological samples to collect multi-omic assays, developing novel approaches to hosting and analysing multi-omic data, and building new tools and research methods on top of shared data, such as the Cancer Omics Atlas, a resource helping researchers to explore experimental targets across specialities.

The project would also translate research into targeted treatment and precision medicine by linking across sectors to biomedical companies. These links would be reciprocal, enabling researchers to examine high-quality practice data, to evaluate the effectiveness of treatments and identify new challenges.

Just 30 years on from the launch of the Human Genome Project, biomedical research has changed profoundly and our knowledge of the genome and proteome is already enabling scientists to develop targeted treatments to improve lives. Omics UK would be the next step in one of the most fruitful research enterprises of our time, bringing together scientists from disparate fields of biomedical research and building on comprehensive maps of the genome, proteome, and epigenome. It would advance our ability to treat clinical disease by connecting these layers of knowledge and translating them into practice, seizing the opportunities provided by this acceleration in biomedical sciences.

The Atlas Institute 
Mapping dark corners and exploring white spaces

Henry Fingerhut, Tony Blair Institute
Benedict Macon-Cooney, Tony Blair Institute

In 2009, Fei-Fei Li published ImageNet, a groundbreaking database of more than 14 million images that would dramatically accelerate AI research. Li’s ambition was simple but boundless: “We’re going to map out the entire world of objects.” In less than a decade, the accuracy of the algorithms winning the large-scale visual recognition competitions she led between 2010 and 2017 had surpassed human abilities.

Under the umbrella term of Atlas projects, pioneering research initiatives are successfully unifying scientific fields around a common vision: mapping existing and emerging knowledge and clarifying future agendas. In proteomics, for example, the Human Protein Atlas involved more than 2,000 teams analysing where proteins are found in cells, tissues and organs, in the process helping us to develop new drugs that target diseases with greater precision. Recently, the European Hematology Association Roadmap identified 60 disease groups and priority areas to focus on in the area of blood disorders, while the Human Cell Atlas project unites researchers and computer scientists seeking to increase understanding of what those involved describe as the “fundamental units of life”.

In the past year, we have witnessed how a global pandemic has rallied the scientific community to generate new knowledge at an unprecedented pace. Scientists from the fields of epidemiology, biomedical sciences, social sciences, health systems and economics have contributed to policy and clinical responses, helping to create and distribute multiple vaccines in record time. Yet these successes have also shone a light on our limited ability to apply knowledge to more longstanding societal challenges – and the benefits we could see from expanding our frontiers. In pursuing progress post-pandemic, we need to elevate our ambitions and begin to more fully map the world of scientific knowledge, identifying gaps that need to be filled. This should be an international endeavour: with the US already revitalising its research infrastructure, the UK needs to step up and create a new Atlas Institute.

In an age of information, the growth in research, researchers and journals is coinciding with the use of more intensive quantitative data sources than ever before. At the same time, a greater focus on societal challenges that span traditional disciplines has meant that researchers are concurrently working on similar problems with limited interaction. The Covid-19 research successes, for example, have come at a cost: it is estimated that 200,000 articles have been written about this pandemic alone – too many for traditional literature-review methods to track. This has resulted in missed opportunities for knowledge sharing and connection across disciplines and countries. Indeed, while it has never been more challenging to follow academic progress, it is more important than ever that cross-disciplinary researchers, funders and policymakers are able to share an understanding of the current state of scientific knowledge.

Traditional mechanisms have their limitations: textbooks are slow-moving and focus on long-resolved theory, literature reviews are ad hoc and may not capture relevant work from other disciplines, and research funding conflates the importance of a problem with the merit of a specific proposal. Established scientists hold tacit knowledge about the state of their field, but this is accumulated over the course of their careers and often left unwritten, making it difficult to systematise and share.

Science therefore faces a problem of knowledge management. We need valid, systematic methods to synthesise findings across journals and disciplines in order to establish the state of specific fields and identify their most pressing unanswered questions. These methods would dynamically document the current picture, providing scientists with the tools and techniques to comprehensively track new knowledge in their fields as well as related work in other areas, and to prioritise problems and work across disciplines to solve them. With these tools in hand – and a common language for scientific progress – we could better identify scientific priorities and design more effective prescriptive interventions in order to direct the agenda towards responsible research and innovation.   

The foundations already exist. Science and technology studies (STS) and the philosophy of science present the frameworks which describe how scientists and engineers create and evaluate scientific findings and how theoretical paradigms begin and end. Information science spans interdisciplinary fields to capture how people collect, store, retrieve and use "relevant" information. Digital tools like Google Scholar and PubMed help organise and link individual studies, enabling scientists to find relevant papers across fields and track citation-based metrics to assess impact. The Research on Research Institute systematises STS knowledge to understand related cultures, strategies and outcomes.  

Early work in this field is already demonstrating benefits. For example, Professor Chaomei Chen captures the dynamics of scientific advancement by using citation data to map fields and to find links among key findings and points of transition. Similarly, Florian Metzler measures capability shifts in industry, using patent portfolios to visualise technological expertise and identify disruption. In biomedical sciences, the startup Causaly uses AI to help scientists find and visualise causal relationships unearthed from evidence hidden in tens of thousands of research papers and articles across fields, even when they are not directly linked by citation.   

Ambitious applied research centres have been successful in bringing together researchers across disciplines to solve these problems in specific domains. For example, the Alan Turing Institute unites data scientists, computer scientists, social scientists and ethicists to conduct practice-oriented AI research. Calls for more of these centres, for instance in neurotechnology, demonstrate the increased need for interdisciplinary forums across science and engineering.   

With this clear, meta-level need to understand scientific progress and its relationship to real-world problems, our proposed Atlas Institute would provide essential insight into how fields are progressing, advising governments on how to fund and support targeted research while systematising research translation into new technologies. 

The Atlas Institute would study the state of scientific fields by mapping knowledge, identifying research gaps and challenging conclusions about the current body of knowledge.  

Exploring the white spaces and dark corners of science, it would provide mechanisms currently missing:  

  • for scholars across fields to collaborate on complex interdisciplinary societal challenges 

  • for policymakers to more regularly incorporate evidence into decisions 

  • for funders to shape fields to address societal needs  

It would create new knowledge-management structures, processes and techniques that meet the needs of the information era.   

The Atlas Institute would have intellectual foundations in STS, philosophy of science, and information science. It would be an applied, interdisciplinary laboratory where scholars in these fields work directly with researchers to map scientific progress and design ambitious funding and policy programmes to support innovation. It would apply STS theory and information-science methods to measure, track and advise on the dynamics in scientific knowledge that are related to the most important societal questions of our time, including precision medicine, neuroscience, climate change, ageing and longevity.   

Serving as an application programming interface, it would provide the protocols, functions and common knowledge needed to interface across academic disciplines while facilitating collaboration by sharing norms, theoretical foundations, methods and existing problems among researchers from different fields – and across sectors – helping policymakers, funders and private-sector stakeholders to understand scientific priorities. Crucially, it would facilitate the systematic translation of evidence into policy and practice.   

As a groundbreaking centre for applied research in scientific and technological progress, the Atlas Institute would provide the knowledge-management structures and techniques necessary for researchers, policymakers, funders and students across fields to advance frontiers in a given area, thereby complementing existing problem-focused research facilities and funding mechanisms by improving their ability to solve social challenges.

In essence, it would echo the ambition of Li's ImageNet but be applied to all knowledge. The Atlas Institute would start to build a knowledge compass, pointing to uncharted territory so that modern-day explorers could chart new domains and discover innovations to propel us forward through the 21st century.   

Building Talent Density
Network and talent density is the key to entrepreneurial success

Matt Clifford, Entrepreneur First

Policymakers often ask what a city or region can do to become the "next Silicon Valley". The good news is that the most important part of the answer is quite simple: if the UK wants to be a technological superpower, we need more of our most talented individuals to start and work in high-growth businesses. 

This aspiration is already the reality in the San Francisco Bay Area, and it helps explain the region's extraordinary success. In Silicon Valley, starting a technology company is the career path of choice for the most ambitious, which has resulted in exceptional and self-reinforcing network density. Nowhere in the world is it easier for would-be entrepreneurs to meet and learn from experienced founders, investors and advisers and to find employees, customers and suppliers to work with. The UK can take big strides towards the same position by proactively making entrepreneurship more attractive to high-skill individuals and working to become the top destination globally for aspiring founders.

Encouraging people to become entrepreneurs has developed a bad reputation in innovation policy circles – and understandably so. On average, individuals earn more and have higher job security as employees than as founders. But as innovation is a domain where outcomes appear to be power-law distributed, we should be less concerned with averages than with outliers. There’s a growing weight of evidence that what matters most is who becomes a founder, and that for the highest-skill individuals it can be the right choice both for themselves and for the economy.

The good news is that the supply of great founders is not fixed. There’s long been a line of thinking in some circles that entrepreneurs are born, not made, and therefore there’s little that governments (or anyone else) can do to get more of them. Fortunately, recent research shows this is not true. The evidence suggests that when high-skill sectors of the economy hire less – for example, in recessions – we get more entrepreneurs. Is this just a case of hapless individuals being forced into a career path they are unsuited for? No – studies suggest we get more successful entrepreneurs too.

The bad news is that this means the entrepreneurial economy is in competition with some of the highest-paying jobs in the world – above all, in financial services. At least one study suggests that finance deprives the technology sector of many of its most talented engineers – so when banks are riding high, we get fewer technical founders, and those we do get have worse outcomes. This is a particular challenge in the UK, which has an unusually large and lucrative financial services sector.

So what can we do? A range of policy and private sector initiatives would be valuable. In rough ascending order of radicalism, we should look to legitimise and raise the social status of entrepreneurial career paths; we should ensure the UK is the destination of choice for the world’s top technical talent at the earliest stage; we should engage in planning reform to allow our most successful innovation clusters to grow; and we should look to remove the implicit subsidy enjoyed by the financial services industry.

First, we need to raise the status of entrepreneurship – and we can start by telling better stories about it. It’s unfortunate that our most prominent portrayals of the entrepreneurial process in popular culture depict a zero-sum, adversarial, often frivolous activity. The opposite is true in real life, but The Apprentice and Dragon’s Den do much to shape perceptions. Innovation is not natural, but it is contagious, and we know that cultures that tell positive stories about risk-taking are more entrepreneurial. We should also pursue other initiatives to raise innovation’s prestige, such as Anton Howes’ proposal elsewhere in this collection for a new chivalric order.

Second, we need to become the destination of choice for top technical talent, as early in their careers as possible. The UK’s world-class universities give us a strong foundation, but we must be careful to avoid a “short-term greedy” approach that treats international students as cash cows to subsidise our institutions. The long-term value to the UK of having top talent start businesses here exceeds the short-term value of their fees. We’ve made a lot of progress on visas – some of the UK’s options for entrepreneurs and exceptional technical talent are the best in the world – but we could go further. We should consider starting a talent agency focused on encouraging and subsidising extraordinary individuals to begin their careers in Britain (Anton Howes, Sam Dumitriu and Philip Salter discuss this in “Operation Paperclip 2.0” in this collection).

Third, we have two world-class innovation clusters in the UK – Oxford and Cambridge – whose growth is artificially held back by planning regulations. In both cities, it’s extremely difficult to build new housing, which has knock-on effects for our ability to cluster talent. (We shouldn’t limit ourselves to Oxford and Cambridge, of course, but both places already benefit from agglomeration economies, so removing the restrictions there is low-hanging fruit.)

Finally, and more radically, we should take steps to wean ourselves off our national dependence on the financial services industry. The studies cited above suggest that the exceptionally high pay on offer in finance represents one of the biggest barriers to building a world-leading innovation ecosystem. Salaries in finance were roughly equal to those in other high-skill professions 50 years ago; today they are around 70 per cent higher. We should ask whether this remuneration represents a market-clearing price or a policy failure. Pay in finance is at least in part a policy choice: it is enabled by the implicit “Too Big to Fail” subsidy and by deregulation that has allowed gains to be privatised and losses socialised.

These are not simple steps, but if we get them right, the results are self-perpetuating. More high-quality entrepreneurs means greater network density for the next generation of founders. And entrepreneurs with denser networks have better ideas, get better advisers and generate more revenue.

The UK has many natural advantages that make it a plausible innovation superpower. If we can nudge our most talented and ambitious people into entrepreneurship – and open our doors to those from the rest of the world – the upside is enormous.

Upstream Innovation
Raising the status of innovation and innovators

Anton Howes, The Entrepreneurs Network

Most innovation policy focuses on those who are already innovating, simply tinkering with the incentives they face or the funding they receive so that they apply themselves to particular industries. Yet to increase the total number of innovators, we must look further upstream, to the decisions young people make when embarking upon their careers. Encouraging people to self-identify as innovators, and to pursue innovation as a career, should be one of the major aims of any innovation policy.

Indeed, it may be the most cost-effective approach that any government can take. Titles and recognition, after all, cost almost nothing to bestow, while potentially having a major influence on people’s attitudes towards an activity. At present, innovation rarely brings any inherent fame or financial success, and tends to be neglected by existing status-conferring institutions too. The government’s aim, then, should be to make innovation a more viable and attractive career path, not just financially but in terms of the social standing and prestige that it brings – something that the Crown could do through the British honours system, and that government could support, in terms of innovation’s visibility, via exhibitions of industry.

Innovators today are occasionally recognised by the honours system, but only rarely, and often as a result of their philanthropic activities rather than for the innovations they have created to improve people’s lives. Honours appointments overwhelmingly recognise people’s political service and charitable activities, or confer additional prestige on already-visible careers in music, sport, literature, or acting. Charity and service are of course praiseworthy activities and deserve recognition, but as it currently stands, the honours system does little, if anything, to raise the status of inventors and innovators, whose achievements are typically much less obvious or well known. The Queen’s Awards for Enterprise fail in this regard too. Although one of their principal categories is for innovation, they are awarded to businesses rather than to individuals, and only by Lord-Lieutenants rather than by members of the Royal Family. They thus give firms some favourable PR, and might be sought after by a CEO or manager, but they do nothing to motivate people to embark on careers as inventors.

One easy solution would be to create an entirely new order of chivalry, in parallel to the Order of the British Empire, with its own knights, dames, commanders, officers, and members, specifically designed to recognise the achievements of inventors and innovators. At an estimated cost of about £66,000 per year, but with the potential to significantly raise the status and visibility of inventors, establishing this new honour is likely the most cost-effective policy that the government could adopt to promote innovation.

Another solution to raise the visibility and status of innovation would be to periodically hold major national events that highlight the UK’s inventive achievements. Such a strategy was successfully employed by France in the early 19th century to catch up with Britain’s rapid industrialisation. It was also the original motivation behind the Great Exhibition of 1851, famous for its Crystal Palace. Although World’s Fairs, the successors to the Great Exhibition, continue to this day, their motives and organisation have changed substantially. They tend to be highly curated events, aimed largely at promoting countries’ image and reputation. 

By contrast, the original exhibitions focused on industry, with significant contributions from manufacturers. They highlighted inventive achievements and new commercial products, materials, and scientific findings, which visitors could see in the same place and compare. Manufacturers could use the events to identify the technologies they needed to adopt in order to keep up with the cutting edge of innovation. Consumers were likewise exposed to the best new products, thereby raising their demands upon producers. And governments were able to use the events to provide a snapshot of developing technology, identifying lagging industries that might require policy intervention to help them to keep up, while highlighting industries worthy of celebration. 

Exhibitions of industry thus serve a number of useful roles. They directly encourage innovation, as firms emulate one another so as not to be seen as laggards. They create demand for innovative products by exposing consumers to things they may not have been aware of. They provide useful information to governments in formulating their innovation policies. And they raise the status and visibility of innovation. Just like the creation of a new honour for inventors, they are likely to be highly cost-effective: the Great Exhibition of 1851, for example, was entirely self-financed through loans, subscriptions, and ticket sales. The role that the government should play in such an event is to enable it to take place, possibly with some funding, but largely by ensuring that it has the appropriate infrastructure, site permissions, and high-level political backing.

Seeing Is Believing

A great exhibition of today would be the equivalent of all existing industry-specific fairs combined. Like the popular Consumer Electronics Show, but for everything. It would be a place where visitors would actually get to see drone deliveries in action, take rides in driverless cars, experience the latest in virtual-reality technology, play with prototype augmented-reality devices, witness organ tissue and metals and electronics being 3D-printed, and watch industrial manufacturing robots in action. They could have a taste of lab-grown meat at the food stalls, meet cloned animals brought back from extinction, perform feats of extraordinary strength wearing the same exoskeletons used in factories, fly in a jet-suit, and listen to panel interviews with people who have experienced the latest in medical advancement. Perhaps a commercial space launch using the latest technology might be timed to coincide with the event, to be livestreamed on a big screen for all visitors to see. And visitors would, naturally, meet the inventors, scientists and engineers who developed it all, inspiring the next generation to dream bigger and go further still.

Operation Paperclip 2.0
Beyond an open door for innovators 

Anton Howes, The Entrepreneurs Network
Sam Dumitriu, The Entrepreneurs Network
Philip Salter, The Entrepreneurs Network

The UK tends to take a build-it-and-they-will-come approach to attracting the world’s top scientific and innovative talent – an important mission given that so much of our innovation depends upon immigrants. While just 14 per cent of UK residents were born outside the country, 49 per cent of the UK’s hundred fastest-growing startups and 11 of its 16 startup unicorns have at least one foreign-born co-founder.

Making the UK more attractive to foreign innovators while reducing their barriers to entry is a significant first step, as demonstrated by the newly created Innovator, Startup and Global Talent visas. But more needs to be done to reduce the frictions associated with moving.

Adopting a digital residency model like that of Estonia, Georgia, Croatia, Norway, Malta and an increasing number of other countries would help. In Estonia, for example, even those not living in the country can benefit from e-residency, meaning entrepreneurs can build a business from any part of the world. Estonia’s new Digital Nomad Visa, meanwhile, allows remote workers to live in the country while legally working for their employer or their own company registered abroad.

In the UK, the Startup and Innovator Visas are welcome steps, but they are still not working in practice. They aim to give incubators, accelerators, and venture capital firms – some of the organisations best placed to identify, and most incentivised to search for, top talent – a key role as external endorsing bodies for these visas. But as part of the announced review, the schemes need reform. Primarily, the Government needs to redouble its efforts to ensure there are more high-quality endorsing bodies with ‘general’ acceptance criteria, not just sector- or geography-specific ones. In addition, more clarity is needed on the definition of an innovative business idea; endorsing bodies should receive further endorsements more quickly once they have worked through a batch of 25; the requirements for Indefinite Leave to Remain should be eased; and the schemes need more international promotion from the Government and our embassies. Finally, entrepreneurs who have raised significant funding from institutional investors should automatically be granted a visa, with the investor acting as the endorsing body for the period of sponsorship.

Aside from these changes, the processing of all visas should be streamlined and sped up, and the complexity associated with maintaining a visa or transferring between different visa routes should be reduced. Even the smallest delays and confusions can put people off applying, with many entrepreneurs understandably wanting to avoid the risk of not knowing whether, or for how long, they can stay in the country. In the meantime, people who legitimately switch between different visa routes should not have to go back to their home country – a friction that often results in them giving up and never coming back, at significant and needless cost to British jobs, businesses and living standards as a whole.

Likewise, many leading innovators may need to look after family, particularly ageing parents, so we risk losing them to emigration (or never attracting them in the first place, in anticipation of this). We should therefore relax the rules on dependants, reducing the bureaucracy associated with innovators bringing close relatives with them.

Apart from reducing frictions and barriers to entry, however, the UK can afford to go even further, by proactively identifying and persuading innovators to settle in the country.

Such policies have a long and global history, the most famous recent example being Operation Paperclip: shortly after the Second World War, the US actively recruited more than 1,600 German engineers, persuading them to move to America. Many of these recruits, including Wernher von Braun, became chief architects of the US space programme. A similar approach in the 19th century ensured that Isambard Kingdom Brunel was a British engineer, rather than a French, Russian or American one, because the government took active steps to recruit and retain the engineering talent of his father.

We should not just learn from these past successes, but try to surpass them. The new Office for Talent, for example, should not merely attract researchers but actively recruit them, especially in areas where the UK hopes to be at the forefront of technology, such as AI. It could help broker deals between universities and prominent international academics to set up their labs at UK universities. At the moment, international talent sees the visa system as a hurdle to overcome; this approach would flip that perception on its head.

There is also a case for focusing on potential over past achievements. One recent study finds a powerful link between performance at the International Mathematical Olympiad and future achievements in research. Just as Premier League clubs invest heavily in scouting the next Neymar or Messi from South America, the UK should do the same to find the next Demis Hassabis or Katalin Karikó. The government should actively fund scholarships targeted at the next generation of scientists across the developing world and investigate the best predictors of success. Utilising the full talents of the best and brightest in the developing world would not just boost innovation in the UK, but have powerful spillovers for the rest of the world. One study finds that reducing immigration barriers – by addressing financial constraints for top foreign talent – could increase the global scientific output of future cohorts by 42 per cent.

There is intense global competition for talent. Open-door policies are no longer sufficient. We need to rediscover the lessons of Operation Paperclip and adopt an active pro-migration approach.

Testbed Nation
Making Britain a nation of early adopters

Anton Howes, The Entrepreneurs Network
Sam Dumitriu, The Entrepreneurs Network

When it comes to cutting-edge technology, most attention is focused on developing it within the UK. But if we are to take the basic tenets of free trade and consumer surplus seriously, then we should be significantly more concerned about our ability to adopt technologies, not just to create them. “Consumption”, to quote Adam Smith, is after all “the sole end and purpose of all production”. It is all very well to have world-leading universities producing great leaps forward in science, or to have home-grown entrepreneurs applying them, but we should also be concerned about whether foreign companies that have already done the hard work of developing technologies are actually deploying them here first. The challenge then is not only to be a nation of inventors and entrepreneurs, but also a nation of early adopters – indeed, achieving this will only encourage home-grown inventors and entrepreneurs too.

Take the example of Sweden and digital payments. As early as 2006, public transport operators there made the shift to cashless payments on trains and buses in response to a series of robberies. With the early adoption of payments technologies, the country became a much more sophisticated market for new entrants to the sector too, helping local entrepreneurs to develop their ideas. Sweden is now home to world leaders in fintech and payments such as online financial services provider Klarna and mobile payment and card reader specialists Zettle. The UK has a similar story to tell when it comes to e-commerce. Although much of the internet infrastructure to make it possible was developed in Silicon Valley, the UK has consistently seen higher rates of e-commerce penetration than the US and EU. This has been a boon for British retailers who have been able to get a head start on the e-commerce revolution. 

The key lesson from these examples is that for the UK to become the most attractive place for innovative investments, it needs to do all it can to support domestic demand. This means making the political decisions that enable the adoption of new technologies. Singapore, for example, has already approved lab-grown meat not just for testing, but for sale. The UK should do the same forthwith. The technology promises to revolutionise the food industry, radically reducing consumers’ environmental impact worldwide and freeing up vast tracts of land for other crops, reforestation, and areas of natural beauty, as well as preventing animal suffering. The sooner the UK follows Singapore’s lead on this, the sooner it will provide a chance for innovators to test the commercial viability of their businesses under conditions of real competition, and with real consumers, rather than under regulators’ trial conditions. Only then will the next generation of entrepreneurs be able to develop new applications for existing technology, in this case developing the food, agriculture, forestry, fishing, and leisure industries in ways that are currently hard to even envisage. Thus, even though the UK has not been at the forefront of developing the initial technology for lab-grown meat, being the first to develop a vibrant domestic market for it would enable the UK to become the eventual world leader in related industries. Becoming a nation of early adopters means becoming the place of choice for subsequent generations of entrepreneurs and innovators. Pay attention to consumer surplus, and production will follow.
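As a toy illustration of that consumer-surplus logic (a stylised linear demand curve with made-up numbers, not an estimate for any real market), the sketch below shows how earlier, cheaper availability of a technology translates directly into gains for consumers:

```python
def consumer_surplus(a: float, b: float, price: float) -> float:
    """Surplus under linear demand P(q) = a - b*q at a given price.

    It is the triangle between the demand curve and the price line:
    0.5 * (a - price) * quantity, where quantity = (a - price) / b.
    """
    quantity = max(0.0, (a - price) / b)
    return 0.5 * (a - price) * quantity


# Made-up numbers: demand P(q) = 100 - 2q. If early adoption brings the
# price down from 60 to 40, consumers' gain more than doubles.
print(consumer_surplus(a=100, b=2, price=60))  # 400.0
print(consumer_surplus(a=100, b=2, price=40))  # 900.0
```

Note that the surplus accrues to consumers regardless of where the technology was developed, which is precisely the argument for adoption over invention alone.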

Encourage Early Adoption Across the Board

In many cases becoming an early adopter of cutting-edge technology is not a matter of regulation, but of making political choices – the kind that can only be taken by ministers or by Parliament. 

The nascent drone industry provides another case in point. The great promise of the technology is to reduce the costs of our existing infrastructure through more efficient inspection services, to free up our roads and improve logistics with faster and more direct deliveries of everything from takeaway orders to urgent medical supplies, and even to add a whole new kind of passenger transportation that will extend access to urban amenities into the suburbs and countryside and strengthen links between cities and their surrounding towns. It should, in short, be a core technological component of the government’s levelling-up agenda.

To the credit of the regulator for drones, the Civil Aviation Authority (CAA), it has worked closely with many drone entrepreneurs to craft the regulations necessary for the industry’s development, running trials and sandboxes and designing new regulations in response. Although trials of some drone services appear more advanced in other countries – both the US and Ireland are already running trials of drone deliveries in small towns – the CAA’s approach has generally been impressively pro-innovation. Where there have been potential barriers, these have been in areas that the regulators understand well but which are beyond their remit to address, as they involve imposing new regulations on existing flyers: in this case, making it compulsory for all recreational aviators in the least regulated kinds of airspace to be electronically conspicuous – that is, to give off signals that make them electronically visible to drones.
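To make “electronically conspicuous” concrete, here is a deliberately simplified, hypothetical sketch of such a position broadcast. Real electronic conspicuity devices (ADS-B transponders, for instance) carry broadly similar information – identity, GPS position, altitude – but every name and field below is invented for illustration:

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class ConspicuityBeacon:
    """Illustrative position report an aircraft would broadcast
    periodically so that nearby drones can detect and avoid it."""
    aircraft_id: str
    latitude: float
    longitude: float
    altitude_m: float
    timestamp: float


def broadcast_position(aircraft_id: str, lat: float, lon: float,
                       alt_m: float) -> str:
    # A drone's detect-and-avoid system can only plan around traffic it
    # can "hear"; an aircraft that never broadcasts is invisible to it.
    beacon = ConspicuityBeacon(aircraft_id, lat, lon, alt_m, time.time())
    return json.dumps(asdict(beacon))


print(broadcast_position("G-ABCD", 52.2053, 0.1218, 350.0))
```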

At the moment, existing recreational aviators would have to bear the cost of making their aircraft electronically conspicuous, and so the requirement is not something that the CAA has the democratic legitimacy to impose – it boils down to choosing between two interest groups. It must therefore be resolved by politicians (most easily, perhaps, by simply providing funding for casual flyers to make themselves electronically conspicuous). Resolving the issue will not remove all barriers to the growth of the drone industry, as drone entrepreneurs will still have to demonstrate the safety of their services and solve further technical issues. It is no panacea. But this major obstacle must be overcome if the UK is to become an early adopter of the new technology, regardless of whether the drone-operating companies are foreign or home-grown. 

What the drone example reveals is that, for a country to become an early adopter of a new technology, the issues that prevent adoption often need to be resolved at the political level. As the CAA’s approach indicates, regulators can be attuned to the need for innovation. Indeed, the CAA applied to the Regulatory Horizons Council – an expert committee advising on matters of technological innovation and associated regulatory reform – for support in developing regulations for the emerging drone economy. Yet the effectiveness of the Regulatory Horizons Council ultimately relies on individual civil servants taking the initiative and then supporting initiatives that originate from the bottom up, rather than on innovation being promoted systematically throughout government.

What is required, then, is a simultaneous sense of urgency from the top down, with ministers providing the vision for how their departments might better promote innovation and acting immediately to remove political barriers to the early adoption of new technologies.