While this is an excellently written piece and really insightful into the state of higher education funding, what seems to be missing from the debate are concrete ideas of what should be done differently (either in 2014 or today). A lot of US innovation success comes from deep pockets of private venture capital, which is just missing in the UK. So if you're a politician/bureaucrat with a (let's face it) relatively small budget and much politics to deal with, the best strategy to take is not obvious (at least to me).
It's well written but completely unjustified in its criticism of UK universities or their role, given the resources required to train SOTA models. Are any US universities training SOTA models? No. Your point about the need for private venture capital is exactly correct. I think some kind of new funding stream needs to be identified for doing this. The US is forcing China to sell TikTok's US arm for national security reasons. We could try to do something similar in return for granting US Big Tech companies access to Europe - I guess the digital tax is a step in this direction. But it seems challenging to enforce that given the current power dynamics.
This is no different than what happened in Germany. Despite having tons of funding, tens if not hundreds of AI research institutes, an abundance of talent and enough cluster time to go around for everyone, they were also blindsided by LLMs and are still to this day completely out of the race.
Here in Germany, they tended to bet on "AI for science", because it sounds "sciency" and that somehow appears more substantive, even though a lot of these "AI for science" or ML4Science projects are utter bullshit. They also invested substantially in Quantum Computing, again because it sounds "sciency".
I have the feeling that the UK is almost nonexistent in the field of AI, despite its previous position in the tech sector with neobanking and crypto.
It probably doesn't help that software engineer (and, I'd guess, researcher) salaries there are shamefully low and probably don't attract talent.
For years I have had headhunters from the UK contacting me, living elsewhere in Europe, about positions there requiring relocation. And I pitied them, advertising jobs that look nice on paper but with salaries for senior developers equivalent to entry-level positions here...
I don't think Americans know the extent of missed opportunity.
- The phone in your pocket? ARM chip. Was a British company.
- Google DeepMind? British too, but they weren't the Google Research/Google Brain people who wrote the "Attention Is All You Need" paper that gave us today's LLMs. Still sucks, though.
Most successful British teams I know are located in Miami, or funded from Palo Alto.
What an utter bullshit take. As if the ATI could ever be competitive at LLMs, given the resource requirements of training foundation models. If we want UK universities to continue to make contributions in this sphere we either need to massively reduce the cost of training SOTA LLMs, create some national shared GPU infrastructure, or resource universities to access cloud infra. Unfortunately the latter just introduces further dependencies on overseas (and increasingly problematic) private clouds. If things continue as they are, then having the capability to train SOTA models will be a strategic imperative for every nation.
Matt Clifford just wants to hoover up the Turing’s funding for his buddies at Entrepreneur First. The same Matt Clifford whose interests weren’t declared [1] when he wrote the UK’s AI policy. Why clowns in the government let him is beyond my understanding.
[1] https://democracyforsale.substack.com/p/revealed-starmer-ai-...
Your understanding is quite limited in that case. There are strikingly few people in the uk who can provide a shred of legitimacy to the governments AI plans. It’s precisely because of his “interests” that he knows what he’s talking about.
How much funding from the Turing institute has gone to his buddies at EF?
Avoiding conflicts of interest is key, but lacking interests is worse imo. Having someone with experience in the role has value and I generally think his AI policy is excellent
This was inevitable and we'll see it playing out all over Europe.
You have a desire to be relevant in an important technological shift.
On one side, you have big tech companies laser-focused on attracting the best talent and putting them in a high-pressure cooker to deliver real business outcomes, under a leadership group that has consistently proven effective for the last XX years.
On the other side, you have universities, led by the remnants of that talent pool—those who were left behind in the acquisition race—full of principles and philosophical opinions but with little to no grounded experience in actual execution. Instead, you find a bunch of PhD students who either didn’t make the cut to be hired by the aforementioned tech companies or lack the DNA to thrive in them, actively avoiding that environment. Sprinkle on top several layers of governmental bureaucracy and diluted leadership, just to ensure everyone gets a fair slice of the extra funding.
I'm surprised anyone is surprised.
I don't think universities should become industry. I mean, that is exactly what we have industry for. If you want to be put in a pressure cooker under leadership focused on business outcomes, great, do industry.
The problem really is that universities are treated as if they have the same mandate as industry. Government people shouldn't tell a professor what kind of research is interesting. They should let the best people do what they want to do.
I remember an acquaintance becoming a professor, promoted from senior reader, and he was going to be associated with the Alan Turing Institute. I congratulated him, and asked him what he was going to do now with his freedom. He answered that there were certain expectations of what he would be doing attached to his promotion, so that would be his focus.
This way you don't get professors, you turn good people into bureaucrats.
Yes. The demand for increasing control, driven by the "taxpayer's money!" lot evident in this thread, strangles almost all state-funded research because it demands to know up front what the outcome will be. Which instantly forces everyone to pick only sure-bet research projects, while trying to sneak off to do actual blue-sky research in the background on "stolen" fractions of the funding. Like TBL inventing the WWW at CERN: that wasn't in his research brief, I'm sure it wasn't something that was funded in advance specifically for him to do.
Mind you, it was evident to me even twenty years ago, when briefly considering a PhD, that CS research not focused on applying itself to users would... not be applied, and would languish uselessly in a paper that nobody reads.
I don't have a good answer to this.
(also, there is no way universities are going to come up with something which requires LLM like levels of capital investment: you need $100M of GPUs? You're going to spend a decade getting that funding. $10bn? Forget it. OpenAI cost only about half of what the UK is spending on its nuclear weapons programme!)
That doesn't sound like a fair appraisal of university research at all. How much do we rely on day to day that came out of MIT alone? A lot of innovation does come from industry, but certain other innovation is impossible with a corporation breathing down your neck to increase next quarter's profits.
The person you replied to is talking about the UK and Europe. I suspect that funding for research works differently at MIT and in the US generally.
4 replies →
US universities (the usual suspects) have a substantially different approach to industry integration than European ones.
Yet European leaders have not got the memo, and expect the same level of output.
Your rhetorical question begs the question -- I can't think of anything more recent than the MIT license.
What DO we rely on that has come out of MIT this century? I'm having a real hard time thinking of examples.
3 replies →
The problem is the "desire to be relevant in an important technological shift".
There's loads of worthwhile research to do that has nothing to do with LLMs. A lot of it will not, or cannot, be done in an industrial environment because the time horizon is too long and uncertain. It stands to reason that people who thrive in a "high-pressure cooker" environment are not going to thrive when given a long-term, open-ended goal to pursue in relative solitude, one that requires "principles and philosophical opinions" that aren't grounded in "actual execution". That's what makes real (i.e. basic) research hard and different from applied research. There are lots of people in industry claiming to be researchers or scientists who are anything but.
"actual execution" in the business world seems to be more and more synonymous with recklessly and incompetently fucking things up. See also: doge.
3 replies →
Yes, this is so telling:
> For example, neither the key advance of transformers nor its application in LLMs were picked up by advisory mechanisms until ChatGPT was headline news. Even the most recent AI strategies of the Alan Turing Institute, University of Cambridge and UK government make little to no mention of AGI, LLMs or similar issues.
Almost any organisation struggles to stay on task unless there's a financial incentive or another driver, such as exceptional staff/management in place. Give them free money - the opposite of financial incentive - and the odds drop further.
I’m sorry to read this — it just doesn’t feel grounded in my own lived experience.
Many of the best Engineering and Computer Science departments, around the world, operate a revolving door for people to go in and out of industry and academia and foster the strongest of relationships bridging both worlds.
Look at Roger Needham’s Wikipedia page and follow his academic family tree up and down and you’ll see what I mean.
https://en.m.wikipedia.org/wiki/Roger_Needham
> remnants of that talent pool—those who were left behind in the acquisition race—full of principles and philosophical opinions but with little to no grounded experience in actual execution.
I do believe that these people at universities do have experience in the actual execution - of doing research. What they obviously have less experience in is building companies.
> Instead, you find a bunch of PhD students who either didn’t make the cut to be hired by the aforementioned tech companies
Or because they live in a country where big tech is not a thing. Or because these people simply love doing research (I am rather unwilling to call what these AI companies are doing "research").
Jesus… are you this judgmental about everyone in society? Some people just value the university environment. It doesn’t mean they’re incompetent and had no other options. Not everyone values money above all else, nor does choosing to opt out of the private sector mean people are “remnants”.
From my perspective it's almost exactly opposite. Almost all of the people I consider exceptionally talented are vying for positions in academia (I'm in mathematics), and the people who don't make it begrudgingly accept jobs at the software houses / research labs.
I'm frequently and sadly reminded when I visit this website that a lot of (smart) people can't seem to imagine any form of success that doesn't include common social praise and monetary gain.
Point 10:
https://www.ettf.land/p/30-reflections
2 replies →
Another point re: grounded experience, good professors/researchers make a point to take sabbaticals to work in industry for that purpose.
I have met lots of professors who are glorified managers doing no actual research, taking sabbaticals for a fat paycheck. I doubt very much they do any real work during these sabbaticals either. If I had to guess, I would bet that these sabbatical positions are frequently sinecures.
1 reply →
[dead]
"Considering how the UK treated Alan Turing while he was alive, he deserved a better institute to honour his memory."
Ouch. Almost certainly nothing like the insult his memory endured from "The imitation game" film though. Everyone associated with that film should feel a bit ashamed.[1]
I'm a UK-based software engineer, but I had almost never heard anything about this organization. Has anything useful come out of it?
[1] I could only bear to watch the first 20 minutes. But I can't imagine it got any better.
What's wrong with The Imitation Game?
From: https://en.wikipedia.org/wiki/The_Imitation_Game#Historical_...
GCHQ Departmental Historian Tony Comer went even further in his criticism of the film's inaccuracies, saying that "The Imitation Game [only] gets two things absolutely right. There was a Second World War and Turing's first name was Alan".
If you are making a fictional film, knock yourself out. But when you are using a real person's name, have some respect for them and their work.
Just throwing stuff at the wall here - Wikipedia has a pretty long list of historical inaccuracies https://en.wikipedia.org/wiki/The_Imitation_Game#Historical_...
The main complaint seems to be the minimisation of Turing's homosexuality and the focus instead on Joan Clarke. There's also a whole list of historical inaccuracies on Wikipedia[0], including changing the name of the Enigma-breaking machine from "Victory" to "Christopher" and stating that Turing invented "The Computer".
[0] https://en.wikipedia.org/wiki/The_Imitation_Game#Historical_...
The worst for me is the John Cairncross subplot that implied Turing might have committed some (light) treason because of his homosexuality. From Wikipedia[1]:
> Turing and Cairncross worked in different areas of Bletchley Park and there is no evidence they ever met. Alex Von Tunzelmann was angered by this subplot (which suggests that Turing was for a while blackmailed into not revealing Cairncross as a spy lest his homosexuality be revealed), writing that "creative licence is one thing, but slandering a great man's reputation – while buying into the nasty 1950s prejudice that gay men automatically constituted a security risk – is quite another."
[1] https://en.wikipedia.org/wiki/The_Imitation_Game#Personaliti...
[flagged]
You seem to be in the minority with that opinion...
https://www.rottentomatoes.com/m/the_imitation_game
I agree with the parent poster, and a ton of people do. Anyone who actually knows the story of the breaking of Enigma, and of Turing in general, would know that the film is terrible.
4 replies →
The majority of the people who watched the film probably knew very little about Turing or his work. And now they have a horribly skewed view.
1 reply →
I think one of the core observations in this article is that the ATI is essentially just a funding programme for existing university research organisations.
At least in places like Rutherford Appleton you have a distinct institutional identity and culture, in a place where it feels like stuff is going on (Harwell campus is a pretty cool place for both private and public research). There's an organisation that people work for with distinct aims and objectives. I don't really see how the ATI is much more than a funding brand under EPSRC.
If you set up a research institute, you can do good research in specific focus areas and take credit for it. If you just have a funding body, you're going to feel like a funding body, and very little is going to be attributable to your new institute.
I think the UK could do with setting up a few more proper national labs like the US has with LLNL, Oak Ridge and friends. That's not to say we don't have any, but if we're going to set up an institute, that's the model we should follow.
Having been involved in UK research labs before moving to industry, none of this surprises me. The government is more interested in short term headlines than long-term actually doing something.
They'd rather make capital investments (which aren't a recurring budget line, so the same money can make different headlines next year) than pay for the sort of ongoing spend on paying experts competitive salaries to actually build a real capacity.
Prior to the Turing Institute there was the Hartree Centre, which was hobbled from hiring anyone significant by being tied to a non-technical civil-service payspine that struggled to reach even "competent"-level salaries for these positions, as the posts didn't manage hundreds of minions.
The Turing was my first major experience of the UK research community. It was immediately apparent to me that it was a bit of a snakepit---resources were denied or otherwise withheld between teams as a part of internal competition, service to the organisation was routinely forgotten or ignored, and changes in direction seemed to occur every quarter, if not every month. It could have been a very cool thing, and the community that was fostered there during the stable periods was extremely productive. But, there's now so much ill will around the latest purges that I doubt it'll ever regain that vibe.
> resources were denied or otherwise withheld between teams as a part of internal competition, service to the organisation was routinely forgotten or ignored, and changes in direction seemed to occur every quarter, if not every month.
Government funded
Loosely defined goals
Little oversight
No accountability
Yup, given the inputs, your description checks out.
In my experience, this happens at bad private-sector organisations as well. An asp is an asp, even when it's taxpayer-funded.
6 replies →
I could be totally wrong (please correct me if so), but IMO there are two things happening that haven't really been discussed.
1) My understanding is that most of the "researchers" and even students at ATI have a main role at a home institution. Really the question should be something more like "did ATI funding help advance UK research generally?" not "was ATI able to claim its own successes?"
2) Nobody has really mentioned how PhDs are a little different in the UK. E.g. you propose a topic -> research for three years -> receive PhD. In the US it's more like research for 2+ years -> find a promising area -> do groundbreaking work over 1+ years -> receive PhD. In other words, it's significantly easier to change your research direction in the US; under the UK model, it's correspondingly easier to miss out on things like the LLM explosion.
Yes, I remember somebody doing a PhD in the US on theorem proving for a few years, but his last year he focused on doing deep learning work, did his PhD thesis on it, and got hired by OpenAI.
> really the question should be something more like "did ATI funding help advance UK research generally?" not "was ATI able to claim its own successes?"
This is a reasonable objection, but imho misses the point.
Any entity that's unable to account for its positive externalities will underestimate its value. This happened to the Turing. Universities did not want to lose their staff to secondments or buyouts, and Turing cancelled major independent programs like their PhD offerings. Leads fought for their universities, as the article notes---not necessarily to make Turing itself better!
While recent cuts/"realignments" are painful, they also have re-focused support for projects at scale---projects that are substantially more than an afternoon a week for two academics. It's still, in many cases, subsidized consulting for UK businesses w/ dubious scientific merit, but that's a broader issue with the UK's research culture.
I have no comment on the general whining about the ineffectiveness of academia and the public sector, but what struck me about this article is how tiny the funding amounts are. They're talking about a £40 million endowment like it's impressive? Maybe I'm just too American, but it's baffling that Cambridge or UCL cares about a £1 million grant.
While this is an excellently written piece and really insightful into the state of higher education funding, what seems to be missing from the debate is concrete ideas of what should be done differently (either in 2014 or today). A lot of US innovation success comes from deep pockets of private venture capital, which is just missing in the UK. So if you're a politician/bureaucrat with a (let's face it) relatively small budget and much politics to deal with, the best strategy to take is not obvious (at least to me).
It's well written but completely unjustified in its criticism of UK universities or their role given the resources required to train SOTA models. Are any US universities training SOTA models? No. Your point about the need for private venture capital is exactly correct. I think some kind of new funding stream needs to be identified for doing this. The US is forcing China to sell TikTok's US arm for national security reasons. We could try to do something similar in return for granting US Big Tech companies access to Europe - I guess the digital tax is a step in this direction. But it seems challenging to enforce that given the current power dynamics.
This is no different than what happened in Germany. Despite having tons of funding, tens if not hundreds of AI research institutes, an abundance of talent and enough cluster time to go around for everyone, they were also blindsided by LLMs and are still to this day completely out of the race. Here in Germany, they tended to bet on "AI for science", because it sounds "sciency" and that somehow appears more substantive, even though a lot of these "AI for science" or ML4Science projects are utter bullshit. They also invested substantially in Quantum Computing, again because it sounds "sciency".
In my experience it's not just being blindsided but also simply a preference for doing research in other areas.
TabPFN comes out of the German University of Freiburg: https://github.com/PriorLabs/TabPFN
Almost every organization (private or public) was blindsided by LLM, right? That’s… just the nature of a big new idea.
> Almost every organization (private or public) was blindsided by LLM, right? That’s… just the nature of a big new idea.
... or of a hype bubble. :-)
Stable diffusion was a productised version of work done at LMU. Not sure Germany is the best example of how AI funding goes wrong.
The purpose of these institutions is to fund research, which is not necessarily profitable.
I'm pretty sure Rosenblatt was criticized back then for his intangible "stupid" ideas, and asked why the hell his research was funded to begin with.
I have the feeling that the UK is almost nonexistent in the field of AI, despite its previous position in the tech sector with neobanking and crypto.
It probably doesn't help that software engineer salaries there (and I'd guess researcher salaries too) are shamefully low and probably don't attract talent.
For years I have had headhunters from the UK contacting me, living elsewhere in Europe, about positions there requiring relocation. And I pitied them, advertising jobs that look nice on paper but with senior-developer salaries equivalent to entry-level positions here...
I don't think Americans know the extent of missed opportunity.
- The phone in your pocket? ARM chip. Was a British company.
- Google DeepMind? British too, but they weren't the Google Research/Google Brain people who wrote the "Attention Is All You Need" paper that gave us today's LLMs. Still sucks though.
Most successful British teams I know are located in Miami, or funded from Palo Alto.
mournfully I call app Britain...
https://www.youtube.com/watch?v=Ei9iM_zzzQk
Ironic that the UK went so wrong on this, considering London-based DeepMind are arguably the leading AI research group in the world.
I was expecting ancient history, the story of the Turing Institute in Glasgow. Established 1983. Dissolved 1994. https://en.wikipedia.org/wiki/Turing_Institute
The article is about the second try, commencing 2014.
After skimming the article and a few of the comments, you could say that institute wasn't Turing complete! :D
When was the last time measures pushed heavily by bureaucrats with heavy government spending produced anything good?
daily life in Western Europe literally right now?
for subjective applications of the term 'good'
Do you ingenuously believe that life in Western Europe is quality life?
Covid vaccines?
So, a startup failed. Liquidate and try something else, as YC does.
TL;DR: government boondoggle with no accountability
What an utter bullshit take. As if the ATI could ever be competitive at LLMs given the resources needed to train foundation models. If we want UK universities to continue making contributions in this sphere we need to massively reduce the cost of training SOTA LLMs, create some national shared GPU infrastructure, or resource universities to access cloud infra. Unfortunately the latter just introduces further dependencies on overseas (and increasingly problematic) private clouds. If things continue as they are, having the capability to train SOTA models will be a strategic imperative for every nation.
What was the budget for DeepSeek's V3/R1 again?
Who actually knows? Far beyond what a UK university can afford.