
I was the first (and, at the time, only) industry fellow at the ATI when it opened. I was involved in the discussions before it was proposed to the government. Still, I actually think that I got the fellowship because I was the only person to apply, which I found quite odd at the time and now find inexplicable. Perhaps they lost a lot of other applications.

There are some things in this article I agree with and many things I don't.

For a start, the weird ad-hominem attacks on Mike Wooldridge. Mike took on his leadership role at the ATI in 2022 when it was (I think) a mess. He is not responsible for the mess, and I don't know what he's done about it, but to attribute it to him is just flat wrong. And yes - as part of his mission to communicate about AI (including doing the Christmas lectures) he has described LLMs as prompt completion - which is exactly what they are. The shock was twofold: that massive computation and data would allow prompt completion as good and useful as it is, and that someone would spend $10m on a single training run for a 175bn-parameter model. These two things show that our understanding of both natural language and economics was radically wrong... Also, AI is not a monolithic discipline, and Mike is not an NN or NLP researcher; he is a specialist in game theory and multi-agent systems, so this is like criticising a condensed matter physicist for not predicting dark energy. Anyway, take it from me, you've taken aim at the wrong person and it's unpleasant and unfair.

The thing about Deepseek is especially egregious. Of course he hadn't heard of them. Until V2, hardly anyone I knew had; they made some interesting models but nothing really significant. V3 changed that. The sponsorship of the CCP was an important part of this change - if the CIA didn't know, why should Mike?

Now, moving on from unhelpful personalization of a national issue. From 2015 there was a full-on effort to drive interest and activity in deep networks in UK academia. I think my journey was typical of AI people in those days. I was quite negative about AlexNet in 2012; I really thought there was something silly about it. I asked an intern about it and got comprehensively schooled in the discussion, so I went and looked at it carefully and rapidly realised I was wrong and they (Hinton and so on) were right. It was just a fact, and then people (me included) were scrambling to get GPUs, use GPUs, find applications and so on. There were events at the Royal Society; Geoff Hinton and Demis Hassabis were speakers (me as well, but the audience looked very bored for that bit). It was not seen or treated as a curiosity, it was seen as the way of the future - without equivocation. So the idea that it was treated with skepticism is just nonsense. I remember asking Wooldridge about his opinion of neural Turing machines (a DeepMind paper) in 2014, and he was extremely positive and clear that the work was important. This is just one example, but I was there and I can tell you: UK academia was not skeptical. Impressed, excited, interested - but not skeptical.

What do I agree with in the assessment you present and what do I think happened?

You are right that it has failed to create practical outputs. You are right that it has focused in a very silly way on imaginary ethical issues rather than developing useful outputs. Notably, there is no basis for a claim of value for any of these outputs; they haven't made any impact. You are right that it has built a secretariat rather than a capability for the UK.

I did not think that the ATI should act as an admin mechanism for the research councils. For some reason, 70-odd admin people were appointed before anything happened. They were all very busy writing policies. I did not think it should act as a recruitment system for the universities either, although I did appreciate that the Crick does that for universities in London and nearby. The Turing had to be a national beast, I thought, so the geographic stickiness of the Crick was not appropriate. Also, labs and physical location matter less for compsci. I did not think that the ATI should do independent research; in fact I thought, and think, that that is ridiculous. We have lots of compsci research in the UK. What we have precious little of is transformation of research into practice.

When the ATI was started up, it was clear to me after about 30 minutes wandering around the space in the BL and talking to people that no one involved knew what to do apart from try to get more funding to make it sustainable. I remember a group of professors spending two days in a room wrangling strategic priorities. I was excluded from this process, which annoyed the piss out of me, but never mind - I would probably have had a stroke if I had been included. What made me laugh was that there were 20 participants and they produced a list of 20 priorities.

More funding was a pretty difficult objective to argue with, and I think that central government had passed a pretty clear message that if the ATI couldn't attract considerable extra funding then it could expect a pretty short lifespan. As above, it was the only issue that anyone could agree on anyway. The problem, though, I think, was that the academics who were running it had very little idea of how to engage with industry. I was told that they were interested in collaborations that carried at least £10m funding a year, because it wasn't worthwhile to engage with organisations that would put in less than this. There were very few organisations around that were prepared to put £10m, with no strings attached and no commitments to anything other than funding research in Data Science, Big Data or AI, into a brand-new institution with no track record. The ATI folks seemed to think that they could just wait in the British Library for sponsors to show up, bringing contracts with them. In fact, it wasn't even that they didn't sell the ATI: when I tried to proactively get engagement, they were clear that the kind of collaboration in the six-figure range that I could stretch my organisation to consider was not of interest at all.

To be fair, I suspect that if I had been able to put the collaboration I wanted on the table, my organisation would have turned it down anyway, so maybe the ATI were right on that. Anyway, there was no viable strategy to get commercial sponsors in the numbers needed, both for money and to drive the institute. Hence the turn to the academic money mill.

So, fully rebuffed, chastened, and put in a box, but still as thick as mince and oblivious to the undoubted derision swirling about me (or just complete disinterest), I carried on and pushed two ideas, which no one was even slightly interested in. I also think there was a bloody obvious third idea, which they didn't pursue either; I didn't push it because it wasn't my bag.

The first idea I had was that the ATI should serve as a bridge between UK academia and UK vital industries. My logic was that the UK lacks national champions in software and hardware and therefore a bridging institution could or should support technology transfer from academia to the NHS, FS, Defence and pharma. To be fair I think that people at the ATI would point at the defence collaborations that you highlight and say that worked out, but in truth I don't really think it has at all.

The second idea was that the ATI should structure activities around the consortium model that MIT was using at the time, and which I had participated in very successfully. Basically, groups at MIT identify a topic and then advertise (through the institution's networks) the opportunity for others to participate. Participation requires a significant fee, but the fee is only a part of the consortium's overall funding, which means that you are buying into a research program many times larger than you could fund independently. The consortium then offers a range of ways for members to participate, including workshops, courses, meetups, placements and so on. I will be frank: the ATI people I pitched this to looked at me in much the same way that my dog looks at me when I try to get her to play chess, although I am happy that none of them bit me.

The blinding miss is that I think the startup community in London could have benefited significantly from the ATI facilitating and developing their work - but I am not a startup guy, so I didn't explore that. There was a lot of entanglement with CognitionX which I just couldn't understand. I don't think my networking autism did me any good in that conversation. Perhaps if I had acted differently I wouldn't have a mortgage now. The banner of the ATI website still doesn't feature startups or entrepreneurs. I find that shocking (clutches pearls).

Anyway, my fellowship was a total bust and I was quite pissed off about it. I had a relatively young family at the time and I spent quite a lot of hours messing about at the ATI when I could have been messing about in the garden with them. Writing all this down makes me feel a bit better now - but so has being out doing AI in industry for the last ten years and earning some cash while I've been doing it.

What a bloody shame though, it's been a massive miss and I will always regret that I lacked the skills, personality, and gravitas to influence what happened.


I had heard of Deepseek in Q3 2024 because I follow Steve Hsu on Twitter. It seems a bit parochial for senior people at the ATI not to be intensely following the state of the art.


The state of the art today is massive, people have jobs to do, and AI is a very broad field. Q3 2024 isn't so long ago either. I had come across V2 in June or July, but only as a curiosity that they were building models at all.


Also, I have now had an opportunity to get more information: Mike Wooldridge resigned as a director at the Turing in April 2024 and had packed up there completely last summer. So the thing about Deepseek is completely irrelevant, has nothing at all to do with the Turing, and I find it a deeply objectionable shot over nothing. I really think you should apologize to him. It's not very nice. It's not the sort of thing a person of good will would do.


Hi Simon, thanks for reading and sharing your experiences of the Turing. To answer a couple of your points quickly:

- We're going to have to agree to disagree on DeepSeek - I'm not an expert on Chinese AI labs by any stretch of the imagination, yet I wrote about the company last summer (https://press.airstreet.com/p/the-state-of-chinese-ai), and their model releases were covered in mainstream media outlets in 2024. I wouldn't expect a member of the public to have heard of them, but they were both discussed in AI research circles and on the government's radar.

- On Mike, I have no personal animus or axe to grind. When I wrote the piece, the Turing listed him as a current member of staff - I can see that this morning they've now amended the website to clarify that's not the case. I'll update the piece and add a note - thank you for flagging.


Alex - it was you going after Mike Wooldridge over his knowledge of Deepseek that I found upsetting. As I say, what he knew or didn't know in Q4 2024 and Q1 2025 has nothing at all to do with the ATI.

Happy to disagree with you about whether people should have heard of them or not. As I say, AI is a broad discipline - there is a lot of incredibly significant stuff out there to pay attention to.
