> The EU trails the US not only in the absolute number of AI-related patents but also in AI specialisation – the share of AI patents relative to total patents.
E.U. patent law takes a very different attitude towards software patents than U.S. law. And even if that weren't the case: “Specialisation” means that no innovation unrelated to AI gets mind share, investment, or patent applications. And that's somehow a good thing? Not something you can just throw out there as a presupposition without explaining your reasoning.
> “Specialisation” means that no innovation unrelated to AI gets mind share, investment, or patent applications. And that's somehow a good thing?
I don’t think the authors claim we should have 100% specialisation. They just say that the fact that the EU has fewer AI-related patents as a proportion of the total (less specialisation) is evidence that it is behind in AI. That seems reasonable.
Or it's ahead in non-AI.
EU firms don't necessarily file EU patents; they file in whichever countries are relevant (including the US).
Makes me wonder how AI will influence the work of patent examiners.
Perhaps it will make patent trolling a bit harder because it is easier to look up existing work and to check if an idea is obvious?
> Perhaps it will make patent trolling a bit harder because it is easier to look up existing work and to check if an idea is obvious?
Haha, funny :)
No, it'll be like the rest of the industries that use more AI: they'll spend the same amount of effort (as little as possible), won't validate anything, and will provide worse service, not better. AI slop is everywhere, and it seems unavoidable that companies will use more and more of it to cut more corners.
I wonder if web searches used to be pretty productive, then declined as sponsored results and SEO degraded things.
Nowadays an AI assist with a web search usually eliminates the search altogether and gives you a clear answer right away.
For example, "how much does a ford f-150 cost" will give you something ballpark in a second, compared to annoying "research" to find the answer shrouded in corporate obfuscation.
The turning point was around when Google stopped honoring Boolean ops and quotation marks.
When did this happen? I do exact searches on Google almost every day and it seems to honor the quotation marks just fine for me.
The killer app for AI might just be unenshittifying search for a couple of years.
Then SEO will catch up and we'll have spam again, but now we'll be paying by the token for it. Probably right around the time hallucination drops off enough to have made this viable.
I kind of want to become Amish sometimes.
Allegedly the 'clear' answer is much easier to manipulate than gaming PageRank ever was:
https://x.com/thomasgermain/status/2024165514155536746
I don't think that is a fair point; the manipulation was done on a topic for which there are hardly any other sources (a hot dog eating competition winner). If you want to manipulate what an AI tells you the F-150 street price is, you will compete with hundreds of sources. The AI is unlikely to pick yours.
I used to be able to google a question like that and get an accurate answer within the top 3 results nearly every time about 20 years ago. Then it got worse and worse and became pretty much completely useless about 10 years ago.
Now AI will give me a confident answer that is outright wrong 20% of the time, or kind of right but not really 30% of the time. So now I ask something using an AI chatbot, carefully wording it so it doesn't get off topic and focuses on what I actually want to know, wait 30 seconds for its long-ass answer to finish, skim it for the relevant parts, then google the answer and try to see where the AI sourced it from and determine whether it misinterpreted/mixed up results or is accurate. What used to be a 10-second google search is now a 2-3 minute exercise.
I can very much see how people say AI has somehow led to productivity losses. It's shit like this, and it floods the internet and makes real info harder to find, making this cycle worse and worse and basic stuff take more and more time.
Web scraping for LLMs has almost completely ruined the search experience. In the past I could search for simple questions, and quickly get an answer without even having to click through to the links.
This was horrible for web traffic, but the utility level was off the charts. It was possible to get accurate results in milliseconds. It was faster than using an LLM.
Now sites put almost no info in the search result headers, to get people to click through. I think this will work on some users, but most will start using LLMs as search by default.
Search engines have gotten so bad that I almost feel forced to try running SearXNG or some other search engine locally. It's a pain to set up, but degooglefication is always worth it.
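If you do go down that road, querying a local instance is the easy part. Here's a hedged sketch in Python, assuming SearXNG runs on localhost:8888 and "json" has been added to search.formats in its settings.yml (both are setup choices, not defaults you can count on):

    # Query a self-hosted SearXNG instance; URL and port are whatever you configured.
    import requests

    def searx(query: str, n: int = 5) -> list[dict]:
        resp = requests.get(
            "http://localhost:8888/search",
            params={"q": query, "format": "json"},  # needs "json" enabled in settings.yml
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["results"][:n]

    for hit in searx("how much does a ford f-150 cost"):
        print(hit["title"], "-", hit["url"])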
Now Google has an AI answer at the top with links to sources. This streamlines the process.
My mother lost her phone so I asked her to search for "find my iphone" on Google.
The result started with 3 "sponsored links" which threw her down the rabbit hole.
This used to be easy.
I was just thinking exactly the same. Basic web search has become so horrible that AI is being used as its replacement.
I found it a sad condemnation of how far the tech industry has fallen into enshittification and is failing to provide tools that are actually useful.
> Basic web search has become so horrible
It is not horrible; it has reached the point of absolute excellence. Not for you, the user, but for making money for the creator. Remember, no one paid for web search, so you are the product. If you are the provider of the web search engine, the point of having web search is not to deliver the best search result to the user, but to maximize the amount of money you can make from the sum of the world population. And Google did very well at maximizing its profits without users turning away.
We always had the technology to do things better; it's the money-making part that has made things worse, technologically speaking. In the same way, I don't see how AI will resolve the problem - our productivity was never the goal, and that won't change any time soon.
And it'll happen again when AI models start resorting to ads once again.
Their tools are very useful. To their customers. Not to their users.
> then declined as sponsored results and SEO degraded things
It didn't decline because of this. It declined because of a general, decade-long trend of websites becoming paywalled and hidden behind logins. The best and most useful data is often inaccessible to crawlers.
In the 2000s, everything was open because of the ad driven model. Then ad blockers, mobile subscription model, and the dominance of a few apps such as Instagram and Youtube sucking up all the ad revenue made having an open web unsustainable.
How many Hacker News style open forums are left? Most open forums are dead because discussions happen on login platforms like Reddit, Facebook, Instagram, X, Discord, etc. The only reason HN is alive is that HN doesn't need to make money. It's an ad for Y Combinator.
SEO only became an issue when all that was left for crawlers was SEO content instead of true, genuine content.
> The best and most useful data is often inaccessible to crawlers.
Interesting point.
> Most open forums are dead because discussions happen on login platforms like Reddit, Facebook, Instagram, X, Discord, etc
Ironically, isn't one of the reasons some of those platforms started requiring logins that they could track users and better sell their information to ad people?
Obviously now there are other reasons as well - regulation, age verification etc.
Does this suggest that the AI/ad platforms need to tweak their economic model to share more of the revenue with content creators?
I seem to remember very few ads on the early web. Most sites I frequented were run by volunteers who paid out of their own pockets for webspace.
> I wonder if web searches used to be pretty productive, then declined as sponsored results and SEO degraded things.
Used to be.
> Nowadays an ai assist with a web search usually eliminates the search altogether and gives you a clear answer right away.
Now.
FWIW, these studies are too early. Large orgs have very sensitive data privacy considerations and they're only right now going through the evaluation cycles.
Case in point: this past week I learned Deloitte only recently gave approval to pick Gemini as their AI platform. Rollout hasn't even begun yet, which you can imagine is going to take a while.
To say "AI is failing to deliver" because of only a 4% efficiency increase is a premature conclusion.
> Rollout hasn't even begun yet, which you can
If rollout at Deloitte has not yet begun... How on earth did this clusterfuck [0] happen?
> Deloitte’s member firm in Australia will pay the government a partial refund for a $290,000 report that contained alleged AI-generated errors, including references to non-existent academic research papers and a fabricated quote from a federal court judgment.
[0] https://fortune.com/2025/10/07/deloitte-ai-australia-governm...
Because even if an organisation hasn't rolled out generative AI tools and policies centrally yet, individuals might just use their personal plans anyway (potentially in violation of their contract)? I believe that's called "shadow AI".
Haven't even read the source, but I like how it's "a partial refund". The chutzpah to deliver absolute nonsense[0] and then give a partial refund!
[0]: If it contains references to nonexistent papers and fabricated quotes, the conclusions of the report are highly doubtful at best.
Exactly. My company started carefully dipping their toes into org-wide AI in the middle of last year (IT had been experimenting earlier than that, but under pretty strict guidelines from infosec). There are so many compliance and data privacy considerations involved.
And for the record, I think they are absolutely right to be cautious; a mistake in my industry can be disastrous, so a considered approach to integrating this stuff is absolutely warranted. Most established companies outside of tech really can’t have the “move fast, break things” mindset.
Agreed. We've been on the agentic coding roller coaster for only about 9-10 months. It only got properly usable on larger repositories around 3-4 months ago. There are a lot of early adopters, grassroots adoption, etc. But it's really still very early days. Most large companies are still running exactly like they always have. Many smaller companies are worse off, years or decades behind on modernizing their operations.
We sell SaaS software to SMEs in Germany. Forget AI; these guys are stuck in the last century when it comes to software. A lot of paper-based processes. Cloud is mainly something that comes up in weather predictions for them. These companies don't have budget for a lot of things. The notion that they'll switch overnight to being AI-driven companies is arguably more than a bit naive. It indicates a lack of understanding of how the real world works.
There are a lot of highly specialized niche companies that manufacture things that are part of very complex supply chains. The transition will take decades, not months or weeks. They run on demand for the products they specialize in making. Their revenue is driven by demand for that stuff and their ability to make and ship it. There are a lot of aspects of how they operate that are definitely not optimal and could be optimized. And AI provides plenty of additional potential to do something about it. But it's not like they were short of opportunities to do so before. It takes more than shiny new tools to get these companies to move. Change is invasive and disruptive for these companies. And costly. They take a slow and careful approach to change.
There's a clean split between people who are AI clued-in and people working in these companies. The Venn diagram has almost no overlap. It's a huge business opportunity for people who are clued in: a rapidly growing number of people, mainly active in software development. Helping the people on the other side of the diagram is what they'll mostly be doing going forward. There's going to be huge demand for building AI-based stuff for these people. It's not a zero-sum game; the amount of new work will dwarf the amount of lost work.
Some of that change is going to be painful. We all have to rethink what we do and re-align our plans in life around that. I'm a programmer. Or I was one until recently. Now I'm a software builder. I still cause software to come into existence. A lot of software, actually. But I'm not artisanally coding most of it anymore.
I'm not sure this is even measuring LLMs in the first place! They say the definition is "big data analytics and AI".
Is putting Google Analytics onto your website and pulling a report 'big data analytics'...?
Meanwhile, "shadow" AI use is around 90%. And if you guess IT would lead the pack on that, you are wrong. It's actually sales and hr that are the most avid unsactioned AI tool users.
What do you mean? Deloitte has been all in on Microsoft AI offerings for quite some time, people have access to a lot of AI tools through MS.
Did they communicate this from the top or just turn a blind eye to it?
4% isn’t failure! A 4% increase in global GDP would be a big deal (more than what we get in a whole year of progress); and AI adoption is only just getting started.
OpenAI is buying up like half of the RAM production in the world, presumably on the basis of how great the productivity boost is, so from that perspective this doesn't seem any more premature than the OpenAI scaling plan. And the OpenAI scaling plan is like all the growth in the US economy...
Yeah. We are only just beginning to get the most out of the internet, and the WWW was invented almost 40 years ago - other parts of it even earlier. Adoption takes time, not to speak of the fact that the technology itself is still developing quickly and might see more and more use cases when it gets better.
> We are only just beginning to get the most out of the internet
The Internet has been getting worse pretty steadily for 20 years now
> We are only just beginning to get the most out of the internet
"The Internet" is completely dead. Both as an idea and as a practical implementation.
No, Google/Meta/Netflix is not the "world wide web", they're a new iteration of AOL and CompuServe.
Looking at the study, the +4% is the gain for firms that chose to adopt AI, not the overall effect.
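To make the distinction concrete, a back-of-envelope dilution; the adoption rate below is purely illustrative, not a number from the study:

    # Illustrative only: conditional vs. economy-wide effect.
    gain_for_adopters = 0.04  # the study's +4%, conditional on adopting
    adoption_rate = 0.45      # assumed share of adopting firms, for illustration

    aggregate_gain = gain_for_adopters * adoption_rate
    print(f"economy-wide effect: {aggregate_gain:.1%}")  # -> 1.8%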
As a counter-point, someone from SAP in Walldorf told me they have access to all models from all companies, at their choosing and at a more or less unlimited rate. Don't quote me on that, though; maybe I misunderstood him, it was a private conversation. Anyway, it sounded like they're using AI heavily.
Yes. I was recently talking to a person working as a BA who specializes in corporate AI adoption - they didn’t realize you could post screenshots to ChatGPT.
These are not the openclaw folks
What does it even mean to specialise in something and know so little about it? What exactly is this BA person doing?
Genuinely confused, I don't get it
It's depressing when people hear that managers are openly asking all employees to pitch in ideas for AI in order to reduce employee headcount.
For those hearing this at work, better prepare an exit plan.
I know at least two different companies in Italy that are very hard on shoving NotebookLM and Gemini down their employees' throats (not IT companies; talking banking/insurance/legal).
Which for the positions/roles involved does make some sense (drafting documents/research).
But it seems like most people are annoyed, because the people doing the shoving aren't even fully able to show how to leverage the tools. The attitude seems to be: "you need to keep doing what you do right now, under lots of pressure, but also find the time to understand how to use these tools in your own role".
Apropos, I once had a boss who said he was running a headcount reduction pilot and anyone who had the time and availability to help him should email him saying how much time they had to spare. I cannot deny this had a satisfying elegance.
Except it is a horrible metric to determine who is the least effective in an org and should be cut.
I've always asked the managers: can you kindly disclose all confidential business information? To which they obviously respond with condescending remarks. Then I respond: then how am I going to give you an answer without all the knowledge of how the business runs and operates? You can go away and figure out what is going to work for the business, then you can delegate what you want me to do; that is the reason why you pay me money.
Why is it depressing? Personally, unless the alternative is literally starving, I wouldn't want to do a job that a robot could do instead just so that I could be kept busy. That sounds like an insult to human dignity tbh.
You know what is an insult? The supermarket on my street putting sloppy ads on display, with a ramen bowl that has chopsticks of three different thicknesses and cartoon characters with scrambled faces. Now that is an insult, because there was a human being doing that job, and I am sure there was a great "productivity boost" related to that change.
I am a heavy AI user myself, and I am sure as hell not setting foot in that place again.
Dignity has no calories, though.
Is it an insult to human dignity? Let’s go through the thought process.
Commodities are used in an enterprise. Some of the commodities are labor. That labor commodity does work. Involving automation. Eventually (so we are told) those labor commodities manage to automate some forms of labor. Making those other labor commodities redundant.
The labor commodities are discarded. Because why (sigh) use a cart when you now have a car? And you don’t even own a horse.
All of the above is presumably not an insult to human dignity. No. The insult to human dignity is being “kept busy” instead of letting billionaires hoard automation made through human labor.
Of course the real solution is not busywork. But the part about busywork was not on the top of my mind with regards to dignity in this context.
> Personally, unless the alternative is literally starving,
To put a fine point on it, yeah? Ultimately.
That's how capitalism works. It doesn't matter whether your job is useful; if you don't do anything, you don't get money.
More people without jobs will be a heavy burden on social security systems, so in the end it is literally about starving.
Suggest replacing managers with AI
"Ideas for AI to help reduce headcount" sounds like the title everyone should start using on resignation letters.
If anyone still resigns that is. They seem to have automated that too.
Never seen it actually work, though... Incentives matter.
> Its depressing when people are hearing managers are openly asking all employees to pitch in ideals for AI in order to reduce employee headcount.
If the manager doesn’t have ideas, it is they who deserve the boot.
A manager is there to manage optimal delivery, not to come up with ideas.
I cannot read the paper that this article is based on, but it seems to refer to the use of big data analytics and AI in 2024, not LLMs. It concludes that the use of AI leads to a 4% increase in productivity. Nowadays the debate over AI productivity centers on LLMs, not big data analytics. This article does not seem to contradict more recent findings that LLMs do not (yet) provide any increased productivity at the company level.
I have a hard time understanding what "increased productivity by 4%" actually means and how this metric is measured. A single low digit does not seem high when put in the context of the promises, does it?
What stands out for me is that the productivity gains for small and medium-sized enterprises are actually negative. But in Germany, for example, these companies are the backbone of the entire economy. That means it would be interesting to know how the average was calculated, what method was used, what weighting was applied, etc.
All in all, it's an interesting study, but it leaves out a lot, such as long-term effects, new dependencies, loss of skills, employee motivation, and much more.
You know it's an EU study because they bring up "AI patents" in the first 2 minutes of it, as if those mean anything.
Of note, "AI adoption" here means using "technologies that intelligently automate tasks and provide insights that augment human decision making, like machine learning, robotic process automation, natural language processing (NLP), algorithms, neural networks" and not just LLMs.
As far as I can tell "robotic process automation" mostly seems to be the deeply unglamorous process of building stuff to drive old applications that can't be given API access?
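For illustration, a minimal sketch of that style of RPA using pyautogui; the reference screenshot and field offsets are made up, and commercial RPA suites wrap this same idea in nicer tooling:

    # Hedged sketch of screen-driving RPA: automating a legacy GUI with no API.
    # The screenshot asset and field positions are hypothetical.
    import time
    import pyautogui

    def enter_invoice(invoice_no: str, amount: str) -> None:
        # Locate the legacy app's input form via a reference screenshot.
        try:
            form = pyautogui.locateOnScreen("legacy_invoice_form.png")
        except pyautogui.ImageNotFoundException:
            form = None
        if form is None:
            raise RuntimeError("legacy app window not visible on screen")
        # Click into the first field and fill the form the way a human would.
        pyautogui.click(form.left + 120, form.top + 60)
        pyautogui.write(invoice_no, interval=0.05)  # type with small delays
        pyautogui.press("tab")
        pyautogui.write(amount, interval=0.05)
        pyautogui.press("enter")
        time.sleep(1.0)  # give the decades-old app time to process

    enter_invoice("INV-2024-001", "149.90")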
Is there a link to the actual paper anywhere? That seems like a rather large omission. Without the paper it's hard to tell what they are actually measuring.
The actual paper: https://www.eib.org/files/publications/20250383-130126-econo...
Use of AI is based on self-reported data. From the paper:
> The respondents to the interviews are senior managers or financial directors with responsibility for investment decisions and how investments are financed – for example, the owner, chief financial officer or chief executive officer
> Firms are asked the following question: “To what extent, if at all, are big data analytics and artificial intelligence used within your business? A. Not used in the business. B. Used in parts of the business. C. Entire business is organized around this technology.”
AI adoption is defined as the manager answering B or C.
I'm doubtful that this data is going to be very robust. Some senior tech managers are very keen to talk about AI, while at the same time knowing little about how much AI is actually being used by workers. At other companies you'll have people using free or personal ChatGPT accounts without the knowledge of management.
Also "big data" is not exactly AI.
The productivity information is robust, as it's based on company accounts, albeit from 2024, so a couple of years out of date now.
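For what it's worth, here is a hedged sketch of how an estimate like this could be reproduced from microdata of that shape. The column names, controls and estimator are my assumptions, not the paper's actual specification:

    # Code the survey answer into an adoption dummy and regress log productivity on it.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("firm_survey.csv")  # hypothetical survey-plus-accounts extract

    # "AI adoption" = manager answered B (parts of business) or C (entire business).
    df["ai_adopted"] = df["survey_answer"].isin(["B", "C"]).astype(int)

    model = smf.ols(
        "np.log(value_added_per_employee) ~ ai_adopted"
        " + np.log(employees) + C(sector) + C(country)",
        data=df,
    ).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

    # A coefficient of ~0.04 on ai_adopted would correspond to a reported +4%.
    print(model.params["ai_adopted"])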
This is tongue-in-cheek, but my point is that the behavior of these companies, their relentless PR, and the looming liquidity crisis they are causing seem like a coordinated plan. Consumer confidence is certainly being crystallized by rumors of all kinds, and businesses are made up of consumers. If the fact-checkers are LLMs themselves, how does one even begin to figure out the truth?
This is just a little Wikipedia ad-lib I did to illustrate my point. (Double posted.)
"The Phoebus.AI cartel was an international cartel that controlled the manufacture and sale of computer components in much of Europe and North America between 2025 and 2039. The cartel took over market territories and lowered the useful supply and life of such computer components, which is commonly cited as an example of planned obsolescence of general computing technology in favor of 6G ubiquitous computing. The Phoebus.AI cartel's compact was intended to expire in 2055, but it was instead nullified in 2040 after World War III made coordination among the members impossible."
That's not what the article said, not even close. Not sure why you need to push this emotional and wrong framing.
You trust these stochastic text/slot machines for scheduling and follow-ups? Human intention is important for both of these. Triage and reminders I can see, but if you send me an LLM-generated follow-up, I'm just going to assume you don't care.
> if you send me an LLM-generated follow-up, I'm just going to assume you don't care.
Ironically you just replied to an automated message on a forum and didn't realise :) (hint: click on the user, go to their comment history, you'll see the pattern)
Yes. Other humans are generally accepting of mistakes below some frequency threshold, and frontier models are very robust in my experience
One process redesign that may be considered a moat for AI: employees who intend to communicate a sentence or two first pass the text into their AI of choice and ask it to elaborate. On the other end, the colleague uses their AI to summarize the email back into a bullet point or two. It's challenging for those who don't use AI to keep up.
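A minimal sketch of that round trip, assuming the OpenAI Python client and an arbitrary model name (any chat-completion API would do the same job):

    # Sketch of the inflate-then-deflate email loop. Model name is an assumption;
    # swap in whatever your provider offers.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def chat(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    bullet = "need the Q3 numbers by Friday"
    # Sender: inflate one line of intent into a polished email.
    email = chat(f"Expand this note into a polite, professional email: {bullet}")
    # Recipient: deflate it straight back.
    summary = chat(f"Summarize this email in one bullet point: {email}")
    print(summary)  # with luck, roughly the original sentence, minus some fidelity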
Imagine explaining AI to 1997 you.
"It's like PKZIP, but backwards"
AI is affecting everything the same as Covid, as we've been in one single-topic hysteria since 2020. With one short break for attaching bottle caps to their bottles.
Not even the Russian invasion or the collapse of their automotive industry rattled them.
Hey! Don't make fun of us! You'd get used to the bottle caps too. Not really that annoying, except for smoothies, where a bit of the smoothie drips down the bottleneck and makes everything sticky.
And about the reaction time: politics is, in a way, an expression of the will of the masses. And that depends on how they are informed. They are maybe not yet on point, but they are getting there.
Nowadays people are slowly realizing that Merkel's "all refugees welcome" idea was stupid and can't work. It is ineffective both as a means of helping people - that's cheapest and most effective closer to their homes - and as immigration policy: getting more hands to work doesn't work with people who refuse to work and refuse to integrate. Part of that refusal comes from locals who are "pro-immigrant" on social media but refuse to live in the same neighbourhoods as immigrants or to hire them.
More and more people are also realizing that carbon neutrality was often greenwashing. People are waking up to the fact that executing well-meaning ideas with disregard for economic circumstances, or just reality in general, doesn't have good results.
Only the Russian threat is not being recognized soon enough. It's not like during the Cold War; Russia doesn't have the conventional army to conquer even the westernmost tip of Spain.
Or do we just think it doesn't? Look at how two Ukrainian drone squads owned two NATO battalions of combined tanks and mechanized infantry in the latest war exercises, with no losses on their side. We can infer that Russian capabilities are more or less similar.
But that drone event aside, the perception of the threat in Europe is not uniform. Russia would likely only take everything east of Germany (at least at first, before rebuilding and attacking again). So Italy or Spain won't reduce their social spending to buy or invest in defence to protect Poland, Czechia or Romania.
There is also a slowly forming pro-Russian coalition of Slovaks and Hungarians, who still, to this day, keep buying Russian oil. Yeah, when Putin laughs that Europe is still buying Russian oil, he's not mentioning that it's just pro-Russian Hungary - whose prime minister, Orban, is now at risk of losing the next elections. But US support is already there; Rubio is helping Orban campaign for the 12th of April elections.
[sarcasm]So to sum up: yeah, Europeans are wasting energy on bottle caps. But the USA is funding pro-Russian parties and helping pro-Russian politicians. We in Europe can only hope that if Russia attacks, the USA will not join the war, because it's becoming quite evident on whose side the USA would fight.[/sarcasm]
> And about the reaction time - politics is in a way expression of the will of the masses.
Then you go on to list all the astroturfing that people are “waking up to”. You just contradict yourself. Politics writ large is astroturfing that you get gaslit into thinking is da will of da masses.
> And that depends on how they are informed. They are maybe not yet on point, but they are getting there.
But you have the uncorrupted view from God.
Big companies are surprisingly nimble when it comes to AI.
They typically white label Azure LLM offerings or use Github Copilot Enterprise and sign everyone up wholesale.
Some with a competent IT department wrote their own router and offer multiple models from multiple vendors, presented as "<company name> chat".
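If every vendor is reachable over the OpenAI-compatible wire format (many are, directly or via proxies), such a router can be tiny. A hedged sketch; the internal base URLs, tokens, and the "<vendor>/<model>" naming convention are invented for illustration:

    # Sketch of a minimal multi-vendor LLM router behind a company chat UI.
    from openai import OpenAI

    VENDORS = {
        "openai": OpenAI(),  # default api.openai.com, key from env
        "azure-proxy": OpenAI(
            base_url="https://llm.internal.example.com/azure/v1",  # hypothetical
            api_key="internal-token",
        ),
        "anthropic-proxy": OpenAI(
            base_url="https://llm.internal.example.com/anthropic/v1",  # hypothetical
            api_key="internal-token",
        ),
    }

    def route(model: str):
        # Convention: "<vendor>/<model>", e.g. "azure-proxy/gpt-4o".
        vendor, _, name = model.partition("/")
        return VENDORS[vendor], name

    def company_chat(model: str, prompt: str) -> str:
        client, name = route(model)
        resp = client.chat.completions.create(
            model=name, messages=[{"role": "user", "content": prompt}]
        )
        return resp.choices[0].message.content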
Not in EU. There is a sacred process that has to be followed that can take months even to flip a switch.
I work in a big corporation in Europe. Officially we're only allowed to use Copilot, but a lot of people just have their own subscriptions. Management either turns a blind eye or actively encourages investigating other AI solutions. Of course, people need to take care of confidentiality, data protection and all that, but a lot of work is just not affected by those concerns.
Workers' councils...
care to elaborate?
> fintec startup in Berlin,
> This was mostly due to second line pushing back because of data protection, data privacy and all other regulatory requirement and bureaucratic paperwork
Fintec startup. Fintech. Handling people's money. Handling a lot of extremely sensitive data. Complaining that they have to deal with some "bullshit bureaucracy about things like privacy and data regulations or something".
Really? Really?!