Dissents #2: Alan Turing Institute boogaloo
How not to build an AI institute 2: this time it’s personal
As a free speech enjoyer, I periodically share a summary of the most interesting critical responses from readers to my work, along with a few reflections.
My posts on null results in policy, cultural explanations of economic performance and European rearmament proved relatively uncontentious, but my account of what went wrong with the Alan Turing Institute, the UK’s national AI and data science institute, gained more traction than I’d expected.
While the majority of responses were supportive, not everyone was happy. Life is too short and I’m not petty enough to respond to every point, but there are a few things I’d like to address.
Separately, a few days after the piece was published, Jean Innes and Doug Gurr, the CEO and Chair of the Alan Turing Institute respectively, gave an interview to the FT about their planned ‘revamp’. I have no idea if this was done consciously in response to this piece or if the timing was coincidental, but I’ll share a few reflections on it at the end, along with some thoughts about what I’d do if I were in government.
Is Tony Blair behind this?
A common theme across responses was an attempt to read something sinister into my motivations, along with a peculiar fixation on the Tony Blair Institute (mentioned once in a 4,000 word piece).
Joanna Bryson, a professor of ethics and technology I referenced in the piece, was among those with questions. Her comment is quite long and available in full here, but here’s the bit that jumped out at me:
If British work on AI ethics was marginalised in 2023 it might be because Brexit had excluded them from working on the EU's AI Act, or because they were panicking about the financial collapse of the country, making enormous portions of their staff redundant, and so forth.
Tony Blair and the AI Safety Institutes seem overly captivated by the same West Coast Accelerationists that are dismantling the US state, committing crimes against humanity, and merging US foreign policy with Putin's. I know there are some extremely smart, yet weirdly ignorant people at both. It's not easy to keep up with the spiral of interlocking concerns – economy, security, technology, governance – but between them the TBI & ASI have enough money and brains I really think they ought to be able to come up with something better if they stopped giving too much credence to charletons whose primary drives seem to be ending restrictions on money laundering – well, that, and validating their horrific lack of concern for the lives of other people.
Meanwhile, back to the universities. They are one of the UK's leading sectors, or were. Maybe still are and the whole UK is crashing, IDK. But we were until recently leading the world along with the US and Switzerland in terms of quantity in the top 100 globally per capita. I'm presently working in Germany because it's still in the EU, but everyone knows German universities are weaker because they have to focus so much on teaching – the research € go to the Max Planks. As far as I can tell, having researchers mixed in with undergraduates, postgraduates, postdocs, and professors has been a winning Anglo-American strategy for a few centuries. I was honestly astonished that the US and UK didn't stand out more in your Figure 1. This is part of what makes me wonder if this is hatchet job, and what the motivation for the piece is.
Bryson then followed this up with posts on LinkedIn and BlueSky that made similar points, including one that asked why I had been ‘set’ on the Turing.
An anon also weighed in to accuse me of writing a ‘hatchet job’:
This is quite entertaining as a hatchet job, but is not a balanced or well-enough informed piece. For every criticism mentioned here you could equally find an opposing view. The EPSRC was not 'politely scathing', but made some pretty standard recommendations for improvements to governance, alongside lots of praise for certain of Turing's research and community-building activities.
Likewise there is a fundamental misunderstanding here about the Institute's role – as there was in the Tony Blair (techno-libertarian) Institute report that criticised the Turing for not having produced 'UKGPT': it isn't, and never was meant to be, a research lab, and has a budget thousands of times smaller than OpenAI (which is losing vast sums of money each day). The Turing's role was to convene university and other research, which it has done very effectively, while building an outstanding team of research software engineers.
To disappoint members of the reality-adjacent community, I was not ‘set’ on the Turing by a shadowy west coast ‘techno-libertarian’ cabal and I don’t … advocate money-laundering? The editorial calendar of this Substack is determined entirely by my personal whim. One of the most common themes since I started it last October has been institutional failure, across technology, finance, and defence. The Turing is a really good example of it.
I quoted a Tony Blair Institute report in the introduction because it articulated some good criticisms of the institute, not out of some malign intent. The TBI report also doesn’t, as the anon contends, criticise the Turing for not creating a ‘UKGPT’. Similarly, unless the anon has a long track record of working at dysfunctional institutions, I’m not sure that requests from the EPSRC (the Turing’s main funder) for ‘urgent’ changes to financial reporting and an entire new constitution constitute “standard recommendations for improvements to governance”, but we may have to agree to disagree…
A bizarre comment on motivations came from ‘Jay’:
Really interesting to see the framing here. "Caring about the environment" is political but funding defense research is apolitical and "just makes sense." Huge bias in this post.
This misrepresents the section on defence. The point I was making was that a national institute is well-positioned to work with the government on more sensitive issues. Anyone at a start-up or a private lab can attest to the endless world of clearances that you have to navigate before you can even get through the door. This is not the case for private actors trying to work on applications of AI to environmental challenges. No good faith reading can interpret this post as saying that ‘caring about the environment’ is ideologically suspect.
Gopal Ramchurn, who runs Responsible AI, was interested in why I hadn’t given the AISI and ARIA the same treatment as the Turing and RAI:
I think many in the AI community would agree the ATI could do more. On your evaluation of other UK AI initiatives, I see you've missed an evaluation of ARIA and AISI in your report. Would be good to get your thoughts how they are doing as they have similar budgets to Turing.
I don’t think either of these organisations is above criticism, and if I had any reason to believe they weren’t accomplishing their mission or were being mismanaged, I’d consider writing about them. I think the comparison is unhelpful in this context, however. ARIA is a new organisation and its entire raison d’être is high risk, high reward research. If ARIA had notched up a load of big wins this early, it would suggest that the team wasn’t aiming high enough.
Meanwhile, the point of the AI Security Institute is to evaluate the security of frontier models. Big labs are open about the fact that they are working with it on exactly this. ‘Organisation fulfils intended function’ didn’t strike me as a hugely interesting story, even if it is an increasingly rare one.
DeepSeeking justice
Much ink was spilled on my criticism of Mike Wooldridge, who ran foundational AI research at the Turing between 2022 and 2024. I later updated my piece to reflect the fact that he left the institute last year; the Turing had listed him as a current employee until the middle of last week.
Simon Thompson, a former industry fellow at the Turing, led the charge on this point. His full comment and follow-up replies are worth reading, in part because, beyond Mike and DeepSeek, we probably agree on quite a lot. It’s clear that he sank a great deal of personal and professional effort into the institute. I respect his openness and honesty.
For a start the weird ad-hominem attacks on Mike Wooldridge. Mike took on his leadership role at the ATI in 2022 when it was (I think) a mess. He is not responsible for the mess, I don't know what he's done about it, but to attribute it to him is just flat wrong. And, yes - as part of his mission to communicate about AI (including doing the Christmas lectures) he has described LLMs as prompt completion - which is exactly what they are. The shock was twin fold, that massive computation and data would allow prompt completion as good and useful as it is and that someone would spend $10m on a single training run for 175bn parameter one. These two things show that our understanding of natural language and economics were both radically wrong... Also AI is not a monolithic discipline and Mike is not an NN or NLP researcher, he is a specialist in game theory and multi-agent systems, so this is like criticising a condensed matter physicist for not predicting dark energy. Anyway, take it from me, you've taken aim at the wrong person and it's unpleasant and unfair.
The thing about Deepseek is especially egregious. Of course he hadn't heard of them. Until V2 hardly anyone I know had, they made some interesting models but nothing really significant. V3 changed that. The sponsorship of the CCP was an important part of this change, if the CIA didn't know why should Mike?
To be clear, I’ve never interacted with Mike and bear him no personal animus whatsoever. I also don’t blame him specifically for anything that went wrong at the Turing in the piece. I included some of his quotations because I feel they’re representative of a wider trend in the academic caste that the government has historically turned to for advice.
It is, however, patently untrue that no one had heard of DeepSeek before January 2025. I (by no means an expert on Chinese AI labs) was writing about them in July of last year, covering the efficiency of their models, their use of synthetic data, and their ability to outperform OpenAI models on certain metrics. I’m not claiming any unique foresight here! My friend Shakeel was writing about them at the same time, and DeepSeek’s model releases were covered in freely available media.
I didn’t include the DeepSeek quote as some kind of cheap shot. If an academic is going to hold a senior position at a national institute or offer advice to policymakers (specifically on LLMs and generative AI), it’s completely fair to scrutinise their comments on the subject. By the same token, as a believer in free speech, I completely respect people’s right to find me doing so egregious.
Even if we did believe that it was fine for senior academics not to have heard of DeepSeek, it still raises an important question about the quality of advice that the government is receiving. If it can’t rely on a national AI institute, exactly who should it be turning to?
The lights are on, but is anyone home?
One of my favourite responses came from Seán Ó hÉigeartaigh, the Director of AI: Futures and Responsibilities at the University of Cambridge. He noted that:
This delivers a real broadside against UK academia towards the end, but I think it's worth considering it in the context of the ongoing brain drain.
When I arrived in Cambridge, Zoubin Ghahramani was the Big Name & influence, and Cambridge was the leading AI university in Europe. Now Zoubin's playing w infinite resources at Google, lot of the top talent in engineering has gone to industry, & the agenda both at Cambridge & in places Cambridge influences like ATI is much more influenced by the academic holdouts who don't 'buy' LLMs/frontier AI.
Hard to know how to break that cycle. Interesting to me that in terms of vibes, feels like Cambridge AI went from quite liking what we do (frontier AI governance) to very not liking it, even as one would think evidence supporting our research directions was mounting not diminishing.
Again, I think it's super important academia has people who take frontier AI seriously. But if your interest is in building cutting-edge systems, lot easier and more fun to do with the resources of industry. So you still get some governance folks, but ever-fewer scientists.
Also having done some unfun management jobs in my time, I reckon "Executive Director of ATI" is probably a candidate for "least fun job in the UK".
I’d considered including the brain drain angle in the original draft, but was conscious that the post was already approaching the length of a six-part drama. I definitely think there’s been a strong (largely inverse) selection effect in UK academia.
Government has two choices in response to this: fund good research or change who advises it. It appears the government has opted for the latter approach in recent years, which obviously comes with trade-offs. Relying heavily on the Turing and the old AI Council combined the worst of both worlds - not funding academia and relying on it.
Do I feel sorry for the leadership of the ATI? Yes and no. As I document in the piece, I think they were dealt a pretty rotten hand, but I also don’t think they’ve played it well…
Scream if you want to go harsher
One line of criticism I wasn’t anticipating was that I hadn’t been harsh enough: some anons felt I hadn’t taken the current management to task sufficiently over the redundancies and project closures.
One said:
It Is a good piece. However, I wish you would have covered more recent context to properly understand the current cuts. For example, the fact that three out of the four science leads that were supposed to make the new strategy thrive have resigned; one of them being vocal about a lack of faith in the institute's leadership and recently hired as research director of DeepMind.
And another:
you are missing crucial information about the current administration that has been tanking the ATI lately. the closure of 1/4 of the projects is the tip of the iceberg. those projects were the only ones alive when the formal layoffs begun. in reality closures started with the creation of the chief scientist position, a sort of tiny dictator in the ATI. he has been strangling projects and research groups left and right by killing partnerships and cancelling grants. Mike Wooldridge is one of many examples of the people he pushed out. a more accurate number of closed projects would be around 2/3. it would be useful for you to talk to external partners who have engaged with the ATI recently to see the damage that this strategy has caused.
These anons seem well-informed and I suspect are current (or newly former) Turing insiders. Unfortunately, I’m not omniscient and I can’t make people talk to me. If someone hasn’t supplied me with information or it’s not available publicly, I can’t cite it. Lots of third-hand gossip and tittle-tattle came my way while I was researching the piece, but I can’t print claims that I’m unable to stand up.
If you wrote one of these comments - get in touch. While I will always protect your anonymity, I can’t uncritically reprint information when I don’t know who you are and have no way of verifying it. You could be part of the techno-libertarian cabal, after all…
Back to the future: the Turing responds (sort of)
Jean Innes and Doug Gurr’s FT interview was a sorry excuse for an institutional defence. It stopped short of addressing any of the substantive criticisms that the Turing has faced. The headline takeaway was that the institute would adopt a more focused approach, doubling down on health, the environment, and defence as part of a ‘revamp’.
“Britain’s flagship artificial intelligence agency will slash the number of projects it backs and prioritise work on defence, the environment and health as it seeks to respond to technological advances and criticisms of its record.”
This is not a ‘revamp’. This is a restatement of the Turing 2.0 strategy adopted in 2023. I referenced this strategy in the piece: it’s the 66-page document with a single mention of LLMs. The only genuinely new detail is that the institute will now be pursuing it with a reduced headcount.
One also has to question the quality of the institute’s oversight. Gurr, the current chair of the Turing, is also Director of the Natural History Museum and the interim chair of the Competition and Markets Authority, at a time when the CMA is under serious pressure from government.
The FT, to their credit, pushed on this:
Asked if it was possible to do the three jobs at once, Gurr joked that he was “unbelievably good at time management”.
This doesn’t strike me as a satisfactory response. I equally wasn’t blown away by his attempt to describe the Alan Turing Institute’s value proposition for potential talent:
“Our proposition is that, while you can get more by going across the road to a large tech company, what you get here is the opportunity to work on some of the most interesting problems that will make a real impact on the world,” Gurr said.
Is Gurr really suggesting that the median researcher at the Turing would be working on more interesting problems than a peer at Google DeepMind? Is the Turing’s work really having more of an impact than AlphaFold?
Reflections
I first became aware of the Alan Turing Institute in 2017 and couldn’t quite figure out what it did, but paid little attention. I took more notice over the course of 2023 and 2024, when friends or acquaintances with otherwise uncorrelated opinions began to make similar points about the institute and its lack of impact.
Researching my piece convinced me that the institute is unsalvageable in its current form, even if it were to receive better leadership and oversight. Its remit is too broad and its structure too broken for it to have any realistic prospect of a turnaround.
Admitting failure and starting again will be difficult, but I am not aware of any legal obstacles preventing research councils from redirecting their funding to better ends.
I hope the Alan Turing Institute brand is preserved in some form and I see no reason why it couldn’t be applied to a new body that acts as a more fitting tribute.
I don’t have strong views on the exact role of the new institute, but there’s a case for a cyber- or national-security-focused body, preserving the best of the Turing’s defence work. Alternatively, if the government wants a national institute with a broader focus, it could bring together leading next-generation researchers from different universities under a single institutional umbrella and fund them to focus on open frontier research.
Whatever road they choose to go down, they need to ensure they pick a single focus and stick to it. Building an ‘institute for everything to do with AI and the public sector’, or ‘AI, the five missions and badgers’ is a surefire way to ensure you create Turing 3.0.
This will probably be my last word on the Alan Turing Institute for a while, but I’m open to revisiting this. If you have information you’d like to share in confidence, please get in touch.
Disclaimer: These views are my views or those of the people leaving comments. Unless they left them ironically or didn’t mean them, in which case, they should put more thought into how they present themselves in public forums. But more importantly, these aren’t the views of my employer or anyone who didn’t leave a comment. I’m not an expert in anything, I get a lot of things wrong, and change my mind. Don’t say you weren’t warned.