I was the first (and, at the time, only) industry fellow at the ATI when it opened. I was involved in the discussions before it was proposed to the government. Still, I actually think that I got the fellowship because I was the only person to apply, which I really found quite odd at the time and now find inexplicable. Perhaps they lost a lot of other applications.

There are some things in this article I agree with and many things I don't.

For a start, the weird ad hominem attacks on Mike Wooldridge. Mike took on his leadership role at the ATI in 2022, when it was (I think) a mess. He is not responsible for the mess, and while I don't know what he's done about it, attributing it to him is just flat wrong. And yes, as part of his mission to communicate about AI (including doing the Christmas lectures) he has described LLMs as prompt completion, which is exactly what they are. The shock was twofold: that massive computation and data would allow prompt completion as good and useful as it is, and that someone would spend $10m on a single training run for a 175bn-parameter model. These two things showed that our understanding of natural language and our understanding of economics were both radically wrong... Also, AI is not a monolithic discipline, and Mike is not an NN or NLP researcher; he is a specialist in game theory and multi-agent systems, so this is like criticising a condensed matter physicist for not predicting dark energy. Anyway, take it from me, you've taken aim at the wrong person and it's unpleasant and unfair.

The thing about DeepSeek is especially egregious. Of course he hadn't heard of them. Until V2, hardly anyone I know had; they made some interesting models but nothing really significant. V3 changed that. The sponsorship of the CCP was an important part of this change; if the CIA didn't know, why should Mike?

Now, moving on from unhelpful personalization of a national issue. From 2015 there was a full-on effort to drive interest and activity in deep networks in UK academia. I think my journey was typical of AI people in those days. I was quite negative about AlexNet in 2012; I really thought there was something silly about it. I asked an intern about it and got comprehensively schooled in the discussion, so I went and looked at it carefully and rapidly realised I was wrong and they (Hinton and so on) were right. It was just a fact, and then people (me included) were scrambling to get GPUs, use GPUs, find applications and so on. There were events at the Royal Society; Geoff Hinton and Demis Hassabis were speakers (me as well, but the audience looked very bored for that bit). It was not seen or treated as a curiosity; it was seen as the way of the future, without equivocation. So the idea that it was treated with skepticism is just nonsense. I remember asking Wooldridge for his opinion of Neural Turing Machines (a DeepMind paper) in 2014, and he was extremely positive and clear that the work was important. This is just an example, but I was there and I can tell you: UK academia was not skeptical. Impressed, excited, interested, but not skeptical.

What do I agree with in the assessment you present and what do I think happened?

You are right that it has failed to create practical outputs. You are right that it has focused, in a very silly way, on imaginary ethical issues rather than on developing useful outputs; notably, there isn't any reason to make a claim of value for any of these outputs, as they haven't made any impact. You are right that it has built a secretariat rather than a capability for the UK.

I did not think that the ATI should act as an admin mechanism for the research councils. For some reason, 70-odd admin people were appointed before anything happened. They were all very busy writing policies. I did not think it should act as a recruitment system for the universities either, although I did appreciate that the Keck does that for universities in London and nearby. The Turing had to be a national beast, I thought, so the geographic stickiness of the Keck was not appropriate. Also, labs and physical location matter less for compsci. I did not think that the ATI should do independent research; in fact I thought, and still think, that that is ridiculous. We have lots of compsci research in the UK. What we have precious little of is transformation of research into practice.

When the ATI was started up, it was clear to me after about 30 minutes of wandering around the space in the BL and talking to people that no one involved knew what to do apart from trying to get more funding to make it sustainable. I remember a group of professors spending two days in a room wrangling strategic priorities. I was excluded from this process, which annoyed the piss out of me, but never mind; I would probably have had a stroke if I had been included. What made me laugh was that there were 20 participants and they produced a list of 20 priorities.

More funding was a pretty difficult objective to argue with, and I think that central government had passed on a pretty clear message that if the ATI couldn't attract considerable extra funding then it could expect a pretty short lifespan. As above, it was the only issue that anyone could agree on anyway. The problem, though, I think, was that the academics who were running it had very little idea of how to engage with industry. I was told that they were interested in collaborations that carried at least £10m of funding a year, because it wasn't worthwhile to engage with organisations that would put in less than that. There were very few organisations around that were prepared to put £10m, with no strings attached and no commitments to anything other than funding research in Data Science, Big Data or AI, into a brand-new institution with no track record. The ATI folks seemed to think that they could just wait in the British Library for sponsors to show up, bringing contracts with them. In fact it wasn't even that they didn't sell the ATI; when I tried to proactively get engagement, they were clear that the kind of collaboration in the six-figure range that I could stretch my organisation to consider was not of interest at all.

To be fair, I suspect that if I had been able to put the collaboration I wanted on the table, my organisation would have turned it down anyway, so maybe the ATI were right on that. Anyway, there was no viable strategy to get commercial sponsors in the numbers needed, both for the money and to drive the institute. Hence the turn to the academic money mill.

So, fully rebuffed, chastened, and put in a box, but still as thick as mince and oblivious to the undoubted derision swirling about me (or just complete disinterest), I carried on and pushed two ideas, which no one was even slightly interested in. I also think there was a bloody obvious third idea, which I didn't push because it wasn't my bag, and which they didn't pursue either.

The first idea I had was that the ATI should serve as a bridge between UK academia and the UK's vital industries. My logic was that the UK lacks national champions in software and hardware, and that a bridging institution could or should therefore support technology transfer from academia to the NHS, financial services, defence and pharma. To be fair, I think that people at the ATI would point at the defence collaborations that you highlight and say that worked out, but in truth I don't really think it has at all.

The second idea was that the ATI should structure activities around the consortium model that MIT was using at the time, and in which I had participated very successfully. Basically, groups at MIT identify a topic and then advertise (through the institution's networks) the opportunity for others to participate. Participation requires a significant fee, but each fee is only a part of the consortium's overall funding, which means that you are buying into a research programme many times larger than you could fund independently. The consortium then offers a range of ways for members to participate, including workshops, courses, meetups, placements and so on. I will be frank: the ATI people I pitched this to looked at me in much the same way that my dog looks at me when I try to get her to play chess, although I am happy that none of them bit me.

The blinding miss is that I think the startup community in London could have benefited significantly from the ATI facilitating and developing its work, but I am not a startup guy so didn't explore that. There was a lot of entanglement with CognitionX which I just couldn't understand. I don't think my networking autism did me any good in that conversation. Perhaps if I had acted differently I wouldn't have a mortgage now. The banner of the ATI website still doesn't feature startups or entrepreneurs. I find that shocking (clutches pearls).

Anyway, my fellowship was a total bust and I was quite pissed off about it. I had a relatively young family at the time and I spent quite a lot of hours messing about at the ATI when I could have been messing about in the garden with them. Writing all this down makes me feel a bit better now - but so has being out doing AI in industry for the last ten years and earning some cash while I've been doing it.

What a bloody shame though, it's been a massive miss and I will always regret that I lacked the skills, personality, and gravitas to influence what happened.

I had heard of DeepSeek in Q3 2024 because I follow Steve Hsu on Twitter. It seems a bit parochial for senior people at the ATI not to be intensely following the state of the art.

The state of the art today is massive, people have jobs to do, and AI is a very broad field. Q3 2024 isn't so long ago either. I had come across V2 in June or July, but only as a curiosity, in the sense that it was notable they were building models at all.

I've been involved with the Turing for many years, and so much about this reflects my experience. Too many of the Programme Directors were academics who were just interested in creating mini-empires for themselves. The Turing was a nice source of funding, outside the usual grant structures, for their own niche activities and agendas. However, the core institute did not help itself either: an internal set-up with far too many community and programme managers, often doing things on the periphery of the core mission of AI and data science, and a leadership too scared to focus the core staff, just letting them do whatever they wanted. It was a long-running joke that the Turing had a bigger administrative machine to run a few hundred researchers than many university departments that were looking after thousands of students and staff.

The root cause is a pattern very common in recent years in the UK - announcementism. This is government policy whose main purpose is to make an announcement, with little (and remarkably often zero) idea of what it means. Something must be done; this is something, so let it be done. The ATI was a George Osborne Announcement. After that, if you get very lucky, a strong lead may make something coherent and useful, but the odds are that will not happen (as here). See also the UK government's sovereign investment in life sciences, and many, many other examples.

Yes! The apprenticeship levy was another George Osborne announcement. As if the Treasury know anything about education policy.

Absolutely true. Osborne especially was addicted to announcing institutes, but announcements are generally the means by which government influences the news, and that's the purpose, in Stafford Beer's POSIWID sense (the purpose of a system is what it does), of government ministers and their staff. If you successfully manage the news, you get on, and indeed this is seen as the purpose of politics.

I spent some time at ATI as a visiting fellow, way back in 2017.

The Institute itself was almost completely hostile to actual research work. ATI had not just hot desking with an open-plan office, but mandatory hot desking — i.e., you couldn't leave materials on a desk overnight, and you'd receive a scolding note if you did. It operated, essentially, like a rather unpleasant café. In order to get any work done, you had to mentally isolate yourself with a pair of headphones.

The ratio of staff to scientists was extreme, and there were bizarre incompetencies — for example, my background check was never correctly processed, so each morning I had to tap on the window to be let in. The one novel project that I collaborated on while there was a briefing to the House of Lords on AI, which was managed over an e-mail chain and largely ended up as a banal restatement of platitudes.

A functional institute needs a good leader — a senior person to model the vibe of what should be getting done. They don't have to be involved in the day-to-day research or decision-making, but cultures do not emerge out of nothing. ATI (of course) didn't have this.

That said, many of the staff were absolutely lovely, and the early-career academics ATI funded were (like most in academia) engaging and motivated researchers. But they probably should have just given the space, and the funding, to a local university and let them run it.

This is what happens when you exclusively engage very senior (ancient) academics and expect them to steer R&D in any organisation.

It's the same 15 or so of these AI experts that seem to haunt every AI panel and advisory board in the UK. Many of them are current or former CSAs, all hailing from the pre-AI-winter days of symbolic reasoning, semantic webs, multi-agent systems and knowledge graphs, when machine learning was some crap done by engineers and statisticians.

These are the people that have led much of the AI strategy within the UK, from the EPSRC to DSIT to organisations like GCHQ and the MOD, all singing from the same hymn sheet: LLMs are a dead end and we need to bring back the old methods. Hence the strange obsession with neuro-symbolic AI, knowledge graphs etc.

The Turing was run by this cadre, from the board of trustees to the advisory board to the programme directors. Their proximity to Oxbridge, government and the research councils consolidated their power and meant that they could largely do whatever they wanted.

Despite what they claim, the reality is that nobody talked about LLMs at the Turing before ChatGPT, and then overnight they all became experts, with a deep scepticism towards them. Even today, Turing researchers seem to vacillate between 'LLMs are useless' and 'LLMs will kill us all'. There's nothing wrong with healthy scientific scepticism, but none of this was backed up by any serious research.

To their credit, the Turing did try to hire some young directors with actual ML expertise, and with experience running large R&D programmes. They have since all resigned.

RAI is maybe even more egregious, because they had the Turing as a shining example of what not to do, and they've just doubled down on it, with a leadership team entirely composed of these sacred cows.

UKRI has a new CEO - hopefully he will be brave enough to make the right decision about its two failed investments.

Word for word correct and unimprovable, like a great poem.

The Managed Decline of the Yookay is so obvious to me now. I seriously doubt anything of any use can be built here, thanks to the culture of complete reliance on Grants and the obsession with Process over Outcome.

For the record, you quote me out of context (though not by much). I said the AI Safety *narrative* was mostly transhumanist nonsense, e.g. worrying about AGI. I was contrasting it with what we saw at the AI Action Summit, which talked about real problems like market concentration, regulation, and digital sovereignty (mostly). It was to our enormous shame that the UK didn't sign the statement, citing a "middle way", as if there must be some advantage to Brexit (also not true; the universe is not arranged to ensure booby prizes). There is also no middle between signing and not signing a statement.

More generally, I'm not sure that your hatchet job on the ATI is entirely merited. There was definitely something wrong with giving £100M to just 5 rich universities (ahem, nearly half of which ARE in the North: Warwick and Edinburgh...) and then expecting them to shuttle staff to and from London and rent expensive property there. As someone at Bath at the time, I would obviously rather have seen £5M in 20 universities, or maybe 18 plus one Max-Planck-like office in London for the Ministries to get quick advice from (though they can already ask Elon Musk and the rest of the Royal Society for help there). If British work on AI ethics was marginalised in 2023, it might be because Brexit excluded them from working on the EU's AI Act, or because they were panicking about the financial collapse of the country, making enormous portions of their staff redundant, and so forth.

Tony Blair and the AI Safety Institutes seem overly captivated by the same West Coast Accelerationists that are dismantling the US state, committing crimes against humanity, and merging US foreign policy with Putin's. I know there are some extremely smart, yet weirdly ignorant, people at both. It's not easy to keep up with the spiral of interlocking concerns – economy, security, technology, governance – but between them the TBI & ASI have enough money and brains that I really think they ought to be able to come up with something better, if they stopped giving so much credence to charlatans whose primary drives seem to be ending restrictions on money laundering; that, and validating their horrific lack of concern for the lives of other people.

Meanwhile, back to the universities. They are one of the UK's leading sectors, or were. Maybe they still are and the whole UK is crashing, IDK. But until recently we were leading the world, along with the US and Switzerland, in the number of universities in the global top 100 per capita. I'm presently working in Germany because it's still in the EU, but everyone knows German universities are weaker because they have to focus so much on teaching – the research € go to the Max Plancks. As far as I can tell, having researchers mixed in with undergraduates, postgraduates, postdocs, and professors has been a winning Anglo-American strategy for a few centuries. I was honestly astonished that the US and UK didn't stand out more in your Figure 1. This is part of what makes me wonder if this is a hatchet job, and what the motivation for the piece is.

Sorry not to have a good way to wrap this up – I have science articles and grants to write, and I need to learn German so I can get a working EU passport again. But I'm seriously interested in your motivation here; I hope you can reply.

Hi Joanna - thanks for engaging and responding. I've had a lot of responses to the piece, many of which touch on one of a few themes, so I'll likely do a follow-up addressing some of them in the next week (if other deadlines etc. allow). This should hopefully cover off everything you've raised here.

I think one of the fundamental problems with academic incentives - some of which I think played out here - is that grant funding is seen as an output, not an input. In other words, *getting* the funding is celebrated / counts towards career success more than what you actually do with it. Philanthropy, for all its limitations, is better in that sense because the donor is typically watching.

Really interesting to see the framing here. "Caring about the environment" is political but funding defense research is apolitical and "just makes sense." Huge bias in this post.

Great piece; we hardly ever see this kind of grand systems critique of public funding initiatives. I'll make a few comments from a university perspective. The ATI was a kind of Ponzi scheme whereby a university signed up to making an unrestricted “donation” to the ATI, surrendered its research sovereignty to ATI decision makers, and “hoped” to get its money back. It was considered “the price of doing business” and a way of earning prestige, even though it looked like a certain way to give money away for no financial return. The prestige was of course significantly diluted as the ATI went through waves of expanding its university membership in order to increase its funding.

Universities may have got their money back through an implicit expectation that their donations would be returned via fellowships and grants, but they definitely did not profit from the ATI. The money returned simply funded salaries already committed, and the projects meant that staff already funded by universities were diverted from earning “real” external income. So the ATI ended up being an incredibly inefficient way to self-fund research. The cash flow of ATI funding was at best zero-sum, but it created a significant opportunity cost for the university. Universities themselves were acting as rational agents in a flawed system designed by civil servants at the EPSRC.

The ATI is a case study of how UK civil servants and politicians with worthy ambition and good intent design systems that fail and waste resources, a bit like the £9k tuition fee system.

Incidentally, your take on the full economic costing system is wide of the mark. That’s another issue... https://wonkhe.com/blogs/is-trac-the-right-approach-to-track-costing/

I agree with most of the article and most of the comments. I've been connected with the Turing since the start and could give other examples of missed opportunities and weird management decisions. I think this is a good time to have this debate and see if a better structure for this kind of initiative could be devised. I am fairly optimistic about ARIA, for example. One thing that was really annoying: the Systems for AI work engaged with people across the UK and was supported by Intel for the first 5 years, but 100% failed to actually do what is now being funded by ARIA, namely neuromorphic compute, which might actually replace GPUs in a sustainable way, and for which the UK (with ARM, and a history including Graphcore etc.) actually has a research and industry base. The LLM debate is a red herring. We actually had someone who wanted to build a language model on the Common Crawl at the Turing 8 years ago. It was deemed not that interesting, and too expensive to pursue; that view may turn out to be prescient :-)

Great article with some nice research on the scene that has brought me up to speed. Thank you.

I took a deep dive into the academic world as a professor, spending 17 years in an Information Systems Dept that became a Computer Science Dept, and ended up for 10 years running a multi-university collaborative (MATCH) under the EPSRC's Innovative Manufacturing Research Centres programme. I've written several posts about my experiences.

Yes, fEC was the sort of idea that could only have been dreamt up by a government for use in the academic world. Yes, any university that finds itself in the driving seat will come under pressure to maximise the returns to its home institution, and yes, there are lots of perverse elements to the scene.

My biggest finding is that it's very hard to get academia and industry to work together sustainably and at scale. There are some notable exceptions, but that was my experience. Two things:

I liked to use a two-axis chart of interest vs importance. A lot of what was important to industry was not interesting to academics. There was a subset of research that was both interesting and important, one way or another, but maintaining significant effort in that quadrant was hard.

The other thing was timescales, where the skew in how quickly answers were needed was extreme and hard to manage. I recall a seminal dinner (when most people left the table to watch Liverpool come back from a 3-0 deficit in the European Cup) at which our guest speaker was a professor who had moved into industry and back into the academic world. His observations about industry's need for fast answers, and about the ability of his industrial colleagues to make decisions quickly with only a partial picture or vestigial data, have stayed with me ever since.

When I worked in commercial R&D, we tried to limit university involvement in government-funded programmes. When I swapped sides, we tried to maximise cash contributions from industry to grants.

There's a deep cultural problem somewhere in that lot...

I think the breathless ChatGPT fandom is beginning to look a bit stale and, ironically, rather like dismissing GPT-3 back in 2020; the excitement has moved on, the biggest-scale systems are still roughly as big as Google's Switch-C from 2021, and even the priesthood of mere scale at OpenAI now admits that they're not getting anything very interesting or new out of their approach. (What did happen to that GPT-4.5?) All the innovation is now in small models, which changes the economics of development significantly, and it's worrying that we're not seeing a lot of new open-source small-model work from the UK, whether from academia, industry, amateur hackers, or whatever.

A big asterisk, of course, is that DeepMind, aka Google Euro research, is right here in London, but visiting one of the hyperscalers' London presences tends to feel like going aboard an American ship moored in the river, and everyone else returns the compliment. Meta's third-biggest global location is in Euston, but who's noticed?

With regard to the ATI, I think its problem is that its mission is neither clearly research nor clearly operational. Nor is it even both; Bell Labs was very much both, committed to inventing the transistor and information theory while also providing operational support to AT&T and Western Electric on things like how to reduce the number of linemen who fall off the telegraph poles. As such, the ATI has wandered around looking for a) purpose and b) budget. The Government Digital Service*, by comparison, succeeded because it always had an operational mission rather than generating endless PDFs, consulting stakeholders, or pontificating about the nature of consciousness. The comparison with the defence-y bit is interesting, as it seems to ask the ATI to do stuff rather than poll the public about whether they like robot brains a lot, a little, or not at all.

*But that was introduced by Gordon Brown, so We Don't Talk About The GDS Or To It Unless We're Forced SHIT WE NEED TO SIGN EVERYONE UP FOR A VACCINE APPOINTMENT WHERE'S THE PHONE NUMBER FOR THOSE WEIRD NERDS

Moving on from the ATI itself, your article explicitly criticises the UK for placing so much research delivery in its universities rather than in specialist institutes or other places. I have an open mind on whether this is an issue or not, but it's a question that's worth exploring.

The great thing about our universities is that they are communities of free spirits, loose networks of individuals that are difficult to lead or direct, especially when it comes to research. Individual researchers generally respond to the funding carrots dangled in front of them (the ATI being an inverted carrot, as institutions had to pay to join, and many were suckered into making “donations”, surely an abuse of charity terms).

But the weakness of top-down research leadership in UK universities may actually be a strength, as it gives greater agency to front-line researchers.

I agree, see the end of my comment here, and also https://joanna-bryson.blogspot.com/2016/01/what-are-academics-for-can-we-be.html

It is a good piece. However, I wish you had covered more of the recent context needed to properly understand the current cuts: for example, the fact that three out of the four science leads who were supposed to make the new strategy thrive have resigned, one of them vocal about a lack of faith in the institute's leadership and recently hired as a research director at DeepMind.

Who was that?

This is quite entertaining as a hatchet job, but it is not a balanced or well-enough-informed piece. For every criticism mentioned here you could equally find an opposing view. The EPSRC was not 'politely scathing'; it made some pretty standard recommendations for improvements to governance, alongside lots of praise for certain of the Turing's research and community-building activities.

Likewise there is a fundamental misunderstanding here about the Institute's role – as there was in the Tony Blair (techno-libertarian) Institute report that criticised the Turing for not having produced 'UKGPT': it isn't, and never was meant to be, a research lab, and has a budget thousands of times smaller than OpenAI (which is losing vast sums of money each day). The Turing's role was to convene university and other research, which it has done very effectively, while building an outstanding team of research software engineers.

It's also untrue to say the Institute never developed a research culture: it certainly did in the early years, during which time lots of projects and areas of expertise developed organically across many different subject areas.

Having said that, there are plenty of criticisms you could make – but this piece doesn't – of the Sunak-inspired new management team that came in over the last 18 months, who have decided to fire a quarter of the staff and close some of the programmes most highly praised by the EPSRC, including the 'Turing Way' and nearly all of the work around open data and the public interest. That work is not happening in universities or research labs, and it is precisely the kind of thing a national institute *should* be doing, but now won't be.

> there is a fundamental misunderstanding here about the Institute's role – as there was in the Tony Blair (techno-libertarian) Institute report that criticised the Turing for not having produced 'UKGPT': it isn't, and never was meant to be, a research lab, and has a budget thousands of times smaller than OpenAI (which is losing vast sums of money each day). The Turing's role was to convene university and other research, which it has done very effectively, while building an outstanding team of research software engineers.

This seems like a fair point, and, as you mention, I'd be interested to find a balanced view. In your experience, what are the 2-3 most impressive pieces of work the ATI has done in the last 10 years?

(esp. keen to hear outside of the defense work which the author mentions)

Where does the TBI report criticise the Turing for not building a 'UKGPT'?

I have it open in front of me and can't see that anywhere.

They did it publicly earlier

A few OK things the Turing did in its first 5 years: 1. The Turing Way. 2. Synthetic data (finance programme). 3. Counterfactual reasoning (XAI). Example thesis: https://www.cl.cam.ac.uk/~cm542/phds/grammenos.pdf
