
Comment by 827a

21 hours ago

My deepest concern at this time isn't that AI eventually gets written down to nothing, because I don't think it will. It's that these companies are so scared of being out-competed by an AI-first competitor that they're willing to make deep sacrifices to their core businesses just to effectively virtue-signal that they're AI-first and can't be out-competed.

It is deeply concerning, because everything points to reality shaking out with irony. None of these big tech companies have leveraged AI to build anything remotely interesting from a product perspective. It's truly astounding how bad they are at it. Apple has nothing. Microsoft wants to put spyware on every Windows computer, and builds the worst coding agent on the market despite having privileged access to every line of source code ever written. Meta put a chatbot in WhatsApp, then decided paying researchers ten mil would solve their problems. Google has world-class research teams that have produced unbelievable models, with no plan at all for how those make it into their products beyond forcing a chat window into Google Drive.

Their fear is going to lose them everything. It's a fascinating inversion of the early internet problem, where companies that were unwilling to innovate got out-competed. Everyone learned that lesson and decided "we'll never be unwilling to innovate ever again"; but now their core product stable undergoes constant churn that is pissing off customers and driving competitors to eat their lunch.

There is long-term, durable beauty in investing the majority of your effort into making Github the single best place to host and organize code. That need is never going away. There is also necessity in ensuring it has an AI strategy in a post-AI world; no one doubts that. But it's a matter of proportion and humility. Microsoft/Github will never build AI products that lead the market. It's not a technology problem; it's an organizational and political one. But that's ok, because they could dominate the market with the world's best code hosting platform, an average AI strategy, and a library of integrations with the rest of the frontier world.

> Google has world-class research teams that have produced unbelievable models, without any plan at all on how those make it into their products beyond forcing a chat window into Google Drive.

NotebookLM is a genuinely novel AI-first product.

YouTube gaining an “ask a question about this video” button, this is a perfect example of how to sprinkle AI on an existing product.

Extremely slow, but the obvious incremental addition of Gemini to Docs is another example.

I think folks around here sleep on Google. They are slow, but they have so many compelling iterative AI use cases that even a big-tech org can manage them eventually.

Apple and Microsoft are rightly getting panned; Apple in particular is inexcusable (though I think they will have a unique offering when they finally execute on the blindingly obvious strategic play they are naturally positioned for).

  • Google was the absolute king of AI (previously "ML") for at least 10 years of the last 20. They are also an absolute behemoth of tech and have consistently ranked among the most valuable companies in the world for multiple years, valued at trillions of dollars today. Hell, they're on version 7 and production year 10 of their custom AI ASIC family.

    When considering the above, is the best showing of non-force-fed "modern AI" adoption they've been able to drive really a question button on YouTube and some incremental layering of Gemini onto Docs? What does that leave the companies without the decade head start, the custom AI hardware, and the trillions to spend to actually do worth a damn in their products with the tech?

    I'm (cautiously) optimistic AI will have another round or two of fast gains again in the next 5 years. Without it I don't think it leaves the realm of niche/limited uses in products in that time frame. At least certainly not enough that building AI into your product is expected to make sense most of the time yet.

  • > YouTube gaining an “ask a question about this video” button, this is a perfect example of how to sprinkle AI on an existing product.

    lol if this is the perfect example, "AI" in general is in a sad place. I've tried to use it a handful of times and each time it confidently produced wrong results in a way that derailed my quest for an answer. In my experience it's an anti-feature in that it seems to make things worse.

  • The best and latest Gemini Pro model is not SOTA. The only good things it has are the huge context and the low API price. But I had to stop using it because it kept contradicting itself in the walls of text it produces. (My paid account was hit with a price hike to cover AI, so I tried for a couple of months to see if I could make it work with prompt engineering; no luck.)

    Google researchers are great, but Engineering is dropping like a stone, and management is a complete disaster. Starting with their Indian McKinsey CEO moving core engineering teams to India.

    https://www.cnbc.com/2024/05/01/google-cuts-hundreds-of-core...

    • It was the best model according to almost every benchmark until recently. It’s definitely SOTA.

    • There are problems with every model; none of them are perfect. I've found Gemini to be very good, but it occasionally gets stuck in loops; it does, however, seem to detect the loop and stop. It's more cost-effective than the Claude models, and Gemini has regular preview releases. I would rate it between Sonnet and Opus, except it's cheaper and faster than both.

      For whatever reason there are tasks that work better on one model compared to another, which can be quite perplexing.

    • No amount of context window can stop the model from context poisoning. So in a sense, the huge context is a gimmick once you get a feel for how bad the output is.

  • > when they finally execute on the blindingly obvious strategic play that they are naturally positioned for

    What's that? It's not obvious to me, anyway.

  • The biggest counterexample would be that dead AI auto-translated voice sucking every gram of joy out of watching your favourite creators, with no ability to turn it off.

  • > YouTube gaining an “ask a question about this video” button, this is a perfect example of how to sprinkle AI on an existing product.

    I remember when I was trying to find a YouTube video: I remembered the contents but not the name. I tried Google search and existing LLMs, including Gemini, and none could find it.

    It would also be useful for security: give the AI a recording and ask when the suspicious person shows up, the item is stolen, the event happens, etc. But unfortunately also useful for tyranny…

  • Yeah, to be clear, I think Google is the strongest in AI product development of the FAANG companies. I included them in the list because most of the complaints I see about AI product integration among FAANG come from Google products: the incessant bundling of Gemini chatboxes into every Workspace product.

  • Those examples are interesting and novel, but don't anywhere near live up to the promise of the next great technological revolution, greater than even the internet. I'm fairly sure if an all-knowing genie were to tell Google that this is the best AI gets, their interest in it would drop pretty quickly.

    I think for most people, if NotebookLM were to disappear overnight it'd be a shame but something you can live with. There'll be a few who do heavily rely on it, but then I wouldn't be surprised to hear that at least one person heavily relies on the "I'm feeling lucky" button, or in other words, xkcd 1172

  • > Apple in particular is inexcusable

    This isn't me defending apple, but, let me play out a little scenario:

    "hey siri, book me tickets to see tonight's game"

    "sure thing, champ"

    <<time passes>>

    "I have booked the tickets, they are now in your apple wallet"

    <<opens up wallet, sees that there is 1x £350 ticket to see "the game", an interactive lesson in pickup artistry>>

    You buy Apple because "it works" (yes, most of that is hype, but the vertical integration is actually good, though not great for devs/tinkerers). AI just adds a 10-30% chance of breaking what seems to be a simple workflow.

    You don't notice with ChatGPT, because you expect it to be the dipshit in your pocket. You don't expect Apple to be shit. (Although if you've tried to ask for a specific track whilst driving, you know how shit that is.)

  • > YouTube gaining an “ask a question about this video” button, this is a perfect example of how to sprinkle AI on an existing product.

    > Extremely slow, but the obvious incremental addition of Gemini to Docs is another example.

    These are great examples of insulting and invasive introductions of LLMs into already functional workflows. These are anti-features.

    • The Ask button in YouTube is a game changer for the use case of "what timestamp in this hour-long video talks about topic x?".

      What's the existing functional workflow for that? Downloading the captions and querying with a local LLM or a very fuzzy keyword search?

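For comparison, the "download the captions and fuzzy-search them" workflow mentioned above can be sketched in a few lines. This is a hedged sketch: the caption data and timestamps are made up, and real subtitles would come from a .vtt/.srt export rather than an inline list.

```python
# Sketch: fuzzy-search downloaded captions for a topic and return timestamps.
# Caption data below is an invented stand-in for a real subtitle file.
import difflib

captions = [
    ("00:03:10", "so let's talk about memory allocation"),
    ("00:17:42", "now the garbage collector kicks in"),
    ("00:41:05", "benchmarking the allocator under load"),
]

def find_timestamps(topic: str, cutoff: float = 0.3):
    """Return (timestamp, line) pairs whose text loosely matches the topic."""
    hits = []
    for ts, line in captions:
        # Exact substring match, or a loose similarity score as a fallback.
        score = difflib.SequenceMatcher(None, topic.lower(), line.lower()).ratio()
        if topic.lower() in line.lower() or score >= cutoff:
            hits.append((ts, line))
    return hits

print(find_timestamps("garbage collector"))
```

It's crude next to a built-in Ask button, but it is a workflow that existed before the feature did.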

What you're describing would seem to be a borderline miraculously positive thing. Every single generation of tech companies starts off absolutely amazing. Then they get big and, in surprisingly rapid order, enter into the abyss from which they never return.

But in modern times, the particular level of big, the scaling back of anti-competitive law enforcement, and a government increasingly obsessed with making the [economic] number go up, regardless of the cost, have all created a situation where the current batch is dying a lot slower than it probably otherwise would.

If 'AI' is the Pandora's box of self-destruction that can move the show along to the next batch of companies, then it'll have been worth the trillions of dollars in investment after all!

  • I tend to feel that a lack of government intervention isn't a significant piece of this puzzle. When Standard Oil held a monopoly on the oil world, it was mostly possible because they were monopolizing a discrete set of natural resources. Tech isn't that: Especially with AI lowering the barrier of entry to learning and generating code, tech is extremely resource-unconstrained. The main resource we fight over is just humans who have the ability and desire to spend money.

    I also don't feel it will happen in "rapid order". These companies are too big. It's happening business unit by business unit. In the far future, these companies will still exist, just heavily optimized down to the much smaller handful of units that still generate profit.

    • > I tend to feel that a lack of government intervention isn't a significant piece of this puzzle.

      Depends if you agree with somenameforme's theory that tech companies start off amazing, get big, then become awful.

      You may have noticed that in recent decades we haven't bothered with enforcing antitrust law. If Facebook wants to buy Instagram and WhatsApp, they can. If Microsoft wants to buy Github and Activision, they can. If Google wants to buy YouTube, DoubleClick, and Nest, they can.

      If we accept the premise that FAANG is where innovation goes to die, going 25 years without any antitrust enforcement might not have been the smartest move.

> None of these big tech companies have leveraged AI to build anything remotely interesting from a product perspective.

Exactly, but this is just the nature of this technology. It can sort of fake human intelligence, but not really. You can't count on it to do human work without supervising it, so what's the point?

intel.com's <title> says "Simplify Your AI Journey - Intel". Their description meta tag says "Deliver AI at scale across cloud, data center, edge, and client with comprehensive hardware and software solutions." Their frontpage mentions "AI" 9 times, but has only 3 mentions of "processor" and zero of "CPU".

I know they make processors, but they sure don't make it seem that way.

  • They realized they can't compete on processors, so they're moving on to greener pastures. Like Kodak back then.

    • Intel has traditionally been behind in software quality and discrete GPUs; I wonder if they are making this move out of desperation, because nobody thinks "yay, Intel!" when either topic comes up.

Yes, I find it greatly satisfying that these mega companies are turning away their most important asset: super qualified people capable of creating new products. They're basically betting on their own extinction.

> Its a fascinating inversion of the early internet problem, where companies who were unwilling to innovate got out-competed.

Is it though? There's a reason why Microsoft's JVM competitor is called ".NET". They were planning Windows .NET Server 2003, Office.NET, etc.

I don't think it's an inversion of the hype cycle; it's just another hype cycle. In fact, I think it's extremely comparable. I remember people joking about Pets.com -- just imagine buying your pet food online?!? Crazy stuff. AI is the same. It's hyped up massively, there will eventually be some kind of correction, and then it'll become the new normal.

> None of these big tech companies have leveraged AI to build anything remotely interesting from a product perspective.

Not true. Ironically, the first exception I can think of is Github Copilot.

It is true these companies haven't recouped anywhere near the trillion dollars they've invested in AI.

  • Only a sentence later do I explicitly reference Github Copilot. Yet they belong on the list, because despite having every advantage a company could have (the resources of a megacorporation, all the source code in the world, the semi-independence of a smaller team), they still managed to produce a mediocre and uninteresting product.

    But, again: I think that state for Copilot is totally fine for Github. That product state of "it's there, it's built in, and it's fine" is a fantastic and extremely efficient market to service.

> There is also necessity in ensuring it has an AI strategy in a post-AI world,

I find it necessary to ask AI what that sentence even means.

> Apple has nothing

I always hear this but people use Siri all the time, and I think outside of talking to programmers, a lot of consumers probably consider that the level of AI they care about using. "is Siri really AI" seems like a real "is a hotdog a sandwich" question. Who cares? People eat hot dogs and talk to Siri.

It seems what Apple has less of is LLM products that cost enormous sums of money to make and that people don't like using. Sure, they have a little of it: they fell flat on their faces with their news summaries thing last year, and the Vision Pro was a nothingburger. But when it comes to "sinking huge amounts of money into deeply unpopular ventures", it seems to me that Apple's reluctance to deploy its largesse here might be prudent. It seems like they're less exposed on the hype.

  • I do wish Siri was a little more intelligent to be honest.

    I use Siri when I need a fast, distraction-free action, which makes it perfect when driving or performing other tasks where my hands are busy and/or I cannot put my attention on my phone's screen.

    The way Apple paired with ChatGPT is awkward. You get prompted if you want to use Siri or ChatGPT. Which creates a distraction.

    I'd love it if Siri was smart enough to differentiate between:

    - an automation request, e.g. setting an alarm or ringing a contact. This is the kind of interaction you wouldn't want to offload to a 3rd party, but also the kind that doesn't require vast stores of training data.

    - and an open-ended question, e.g. "What time are Oasis playing in London tonight?", "Who was the 23rd President of Germany?", "What are the rules of dodgeball?" These sorts of questions are less confidential and don't require handing control of your phone to a 3rd party.

    And I'd love it if Siri automatically offloaded from the local AI to ChatGPT (or whatever) when the latter kind of request was identified. That should be opt-in, but once opted in, it should be automatic. I shouldn't have to consent each time after I've opted in.

  • I'm not sure if you're in a country that has already received some upgrade, but over here in Europe Siri is seen as a funny Tamagotchi that sometimes misunderstands, thinks it's needed, and is then quickly told to shut up.

    I think the last time I talked to anyone about siri we were wondering why it was still so bad, now that we have LLMs.

    • I've never seen people in Europe regularly using Siri except to bash how bad it is. I would be really interested in taking a look at the secret usage stats of Siri in Europe compared to other regions.

Do I have any fellow Duolingo users here?

I know they've gotten shit for years, it's not gonna make you fluent, etc etc

But I've defended them because it's at the very least a good starting point and something to keep you consistent every day. As long as you're trying to be mindful about learning, I've found it to be a great tool to assist in improving my Spanish.

That is, until a month or two ago, when they completely overhauled their curriculum with AI slop. The stories are bland at best and confusing at worst, the questions are brain-dead simple, it has sentences and questions that I've confirmed with native speakers are confusing or incorrect, it's riddled with mistakes, and somehow they even broke the TTS so it pronounces things wrong. One of the character voices consistently can't say a couple of letters, like pronouncing all the 'd's as 'v's or something. I can't believe they actually shipped it in this state; they completely broke it overnight. At this rate, if it's not fixed by the time my annual subscription is up for renewal, I will be cancelling.

It's absolutely the worst AI slopification of any product I use, and the CEO and everyone who pushed to ship it needs to be fired.

  • Yes, I've been chronicling the enshittification of Duolingo here for several years (below). But unlike Github/CoreAI, Duolingo is tied to a single (and imperilled) revenue stream from a single product, plus they had a 7/2021 IPO in the heady days of Covid, so they started out in a subscriber market awash with cash. Also, like other sites with a formerly vibrant community and forums, they rug-pulled the user community: they extracted value from users' posts, copyright-washed it through AI, then turned around and tried to remarket it back to those same users ('Duolingo Max = Super Duolingo + features like AI-powered "Explain My Answer" and "Roleplay" options for more advanced practice'). All while laying off thousands of their contractors and translators.

    https://news.ycombinator.com/item?id=35679783

  • Going to shout out ClozeMaster here, since I first found out about it on Hacker News. Always hated Duolingo; its gamification triggered too many alarm bells for me.

    Clozemaster is much more rudimentary, but I do like how they use AI: there's a single button that gives you an AI grammatical summary of the translation and calls out any idioms or grammatical conventions in the target language compared to your native one.

    I bought the lifetime license, but it's free to use; you just get a limited number of flashcards a day. If you wait until Christmas, there's generally a big discount on the lifetime license.

    • > going to shout-out ClozeMaster here since I first found out about it on hacker news. Always hated duolingo - it's the gamification triggered to many alarm bells to me.

      Duolingo was always aiming at the casual app user (not serious language learners; think getting casual 14-30-year-old users to spend 10 minutes a day on it instead of playing casual games or consuming social media), and openly admitted they crafted the product and their metrics around gamification and socially acquiring new (paying, non-freemium) users. So judge their behavior by that. Also, you can turn off some, but not all, of the default gamification and social features.

> None of these big tech companies have leveraged AI to build anything remotely interesting from a product perspective

The coding agents (CC, Cursor, etc.) are quite good and useful.

> None of these big tech companies have leveraged AI to build anything remotely interesting from a product perspective. Its truly astounding how bad they are at it.

Oh my God, tell me about it. Our C levels are being fed bullshit by all of our vendors about how AI is going to transform their business. Every few weeks I have to ask "what the fuck does that mean exactly?" "Oh, well, agentic AI and workflows blah blah."

Ok? You want a chatbot? Fine, we're still building a state machine. At best, the LLM is doing expensive NLP to classify the choices.

Something something classify support tickets? Alright, but we're still just doing keyword search, LLMs literally aren't even needed.

I love LLMs and get a lot of use out of them for coding, but I still don't see anywhere that they're going to fit in for core business functions. Anything that is proposed can and should be done without LLMs. I'm just not seeing where they can be useful until they are truly AGI. Until then, it's just expensive NLP.
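The "we're still building a state machine" point above can be made concrete. In this hedged sketch (all names hypothetical, and the LLM call replaced by a trivial keyword stub so it runs standalone), the model's only job is the expensive NLP: mapping free text onto one of a fixed set of intents. Every transition and action stays in deterministic code:

```python
# Sketch: a "chatbot" where the LLM only classifies intent;
# the business logic is a plain deterministic dispatch table.
from typing import Callable

INTENTS = {"check_balance", "reset_password", "escalate_to_human"}

def classify_intent(user_text: str) -> str:
    """Stand-in for an LLM call that must return one of INTENTS.
    Here: a trivial keyword fallback so the sketch is runnable."""
    text = user_text.lower()
    if "password" in text:
        return "reset_password"
    if "balance" in text:
        return "check_balance"
    return "escalate_to_human"

# The actual behaviour lives here, not in the model.
HANDLERS: dict[str, Callable[[], str]] = {
    "check_balance": lambda: "Your balance is ...",
    "reset_password": lambda: "Sent a reset link.",
    "escalate_to_human": lambda: "Routing you to an agent.",
}

def handle(user_text: str) -> str:
    intent = classify_intent(user_text)
    if intent not in INTENTS:  # guard against the model drifting off-menu
        intent = "escalate_to_human"
    return HANDLERS[intent]()
```

Swap the stub for a real model call and nothing else changes, which is exactly the point: the LLM is a classifier bolted onto a state machine, not the product.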

  • It's very funny that for pretty much any use case of LLMs, they're either too expensive or too incapable or both! There may be a few uses that make sense, but it seems to be incredibly hard to find the balance.

    • It blows my mind how many computing professionals truly think this is the case. It doesn't take a tech blogger to draw a trend line through the advancements of the past 2.5 years and see where we're headed. The fact that grifters abound on the edges of the industry is a sign of the radical importance of this unexpected breakthrough, not an indication that it's all a grift.

      To engage in some armchair psychology, I think this is in large part due to a natural human tendency for stability (which is all the stronger for those in relatively powerful positions like us SWEs). Knowing that believing A would imply that your mortgage is in jeopardy, your retirement plan up-ended, and your entire career completely obscured beyond a figurative singularity point makes believing ~A a very appealing option...


  • The difference is that I can’t sell elasticsearch in my company, but I can sell an LLM.

    Yeah, don’t ask..

    • Why doesn't your company get the use case for Elasticsearch?

      Is it because you're trying to pitch it with CTO arguments on capabilities, not COO/CFO arguments like "will permanently replace N humans"?


  • I think there are a lot of really interesting (and profitable) AI products out there. And there are so many more that can be built. We're only scratching the surface of what the industry's existing inventions can do. Not in an "AGI inevitable" capacity; with what we have, today, plus more context engineering, better user interfaces, and better products with deeper AI-first thinking.

    My point was more so that FAANG isn't even scratching the surface; they're punching it bloody with their fists while yelling "look at all this AI we have; see, dad, we can't be disrupted, we're the disrupters, we're the disrupters".

    It reminds me a lot of Xbox over the past six years, so much so that I think Xbox is a canary for how many business units in these companies will look in five more years.

    • There's a lot of "promising" and "interesting" stuff, but I'm not seeing anything yet that actually works reliably.

      Sooner or later (mostly sooner) it becomes apparent that it's all just a chatbot hastily slapped on top of an existing API, and the integration barely works.

      A tech demo shows your AI coding agent can write a whole web app in one prompt. In reality, a file with 7 tab characters in a row completely breaks it.