This time is different

13 hours ago (shkspr.mobi)

> 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.

Agreed, these things all failed to live up to the hype.

But these didn't:

Electricity, cheap computing, calculators, photography, the internet, the steam engine, the printing press, tv, cars, gps, bicycles...

So you can't really start an article by picking inventions that fit your narrative and ignoring everything else.

  • Exactly my thoughts. Selective whinging indeed.

    Also meta-platitude whinging like

    > The ideology of "winner takes all" is unsustainable and not supported by reality.

    Sometimes the winner deserves to win, AND that's a good thing even at scale. It kind of depends.

    • The winner that deserved to win might turn into the complacent monopoly of tomorrow. It might vow to Not Be Evil for a while, but the investors will demand that it does whatever it takes to grow.

  • The first few paragraphs are all you need to see that the author is writing a propaganda piece. It's not meant to be truthful, it's meant to convince.

    I think this is what is meant by "bullshit".

    • “Bullshit” is:

      + statement of dubious correctness

      + and that serves the author’s interest

      + and which the author does not care whether or not you believe.

      When the author wants you to believe it, that’s horseshit.

  • The article is trash. The only reason it got voted to the front page is because the author is salty about AI.

  • Electricity bros want to put a socket on every wall. That is such a non-starter from a safety POV. It's a fundamentally unsafe technology and it can never be made safe.

  • OP here! Thanks for replying.

    To take, for example, calculators. I can't find any evidence of a massive influx of hyperbolic articles talking about how the calculator will change everything. With bikes, there were plenty of articles decrying how women would get "bicycle face" but very little in terms of endless coverage about them being miracle technology.

    People adopted bikes and calculators and electricity because they were useful. Car manufacturers didn't have to force GPS into vehicles - customers demanded it.

    The narrative I'm describing is how hype sometimes (possibly often) fizzles out. My contention is the more a technology is hyped, the less useful it will turn out to be.

    Now, excuse me while I ride my Segway into the sunset while drinking a nice can of Prime.

    • You have got to stop cherry-picking. The massive influx of hyperbolic articles about how electricity would change everything started in the 19th century. It became a common theme in fiction (including classics like Frankenstein) and grew into an enormous media hype war, which historians call the War of the Currents.

      Yes, electricity was useful. And it had hyperbolic articles talking about how transformative it would be. Like all prognostication, some of those articles were overblown, but, in some ways, they understated the transformative effect electricity would have on human history.

      And cars? Did you somehow miss the influx of hyperbolic articles about how cars will change everything? Like, the whole 20th century?

      What was your approach to researching the history of media hype? You somehow overlooked the hype around air travel, refrigeration, and antibiotics…?

      3 replies →

    • You're still cherry picking. What about the internet? Can you find hyperbolic articles about that?

      Things like bicycles and calculators came out in a different era, so I don't think a lack of articles about them proves anything.

      My point here is not about the hype cycle, which is a modern symptom. It's that there have been loads of world changing inventions and the fact that you listed some inventions which were hyped up and didn't change the world is irrelevant to whether this one will.

    • Calculators are a particularly bad example for your case. There was absolutely hyperbole against calculators when they were introduced. [1]

      With similar sentiment as well: "They make us dumb"; "Machines are doing the thinking for us."

      Cars were definitely seen as a fad -- more accurately, as a worse version of a horse. [2]

      If you looked through your other examples, you'd see the same for those as well.

      Some things start as fads, and only time will tell whether they gain a place in society. Truthfully, it's too early to tell for AI, but the arguments you're making, calling it a fad already, don't stand up to reason.

      [1]: https://www.newspapers.com/article/the-item/160697182/ [2]: https://www.saturdayeveningpost.com/2017/01/get-horse-americ...

    • The personal computer, laptops, web browsers, cell phones, smartphones, AJAX/DHTML, digital cameras, SSDs, WiFi, LCD displays, LED lightbulbs. At some point, all of these things were "overhyped" and "didn't live up to the promise." And then they did.

To my mind at least, it is different. I lean heavily on AI for both admin and coding tasks. I just filled out a multipage form to determine my alimony payments in Germany. Gemini was an absolute godsend: helping answer questions, translating to English, drafting explanations, and drafting emails requesting time extensions to the Jugendamt case worker.

This is super scary stuff for an ADHDer like me.

I have an idea for a programming language based on asymmetric multimethods and whitespace sensitive, Pratt-parsing powered syntax extensibility. Gemini and Claude are going to be instrumental in getting that done in a reasonable amount of time.

My daily todos are now being handled by NanoClaw.

These are already real products, it's not mere hype. Simply no comparison to blockchain or NFTs or the other tech mentioned. Is some of the press on AI overly optimistic? Sure.

But especially for someone who suffers from ADHD (and a lot of debilitating trauma and depression), and can't rely on their (transphobic) family for support -- it's literally the only source of help, however imperfect, which doesn't degrade me for having this affliction. It makes things much less scary and overwhelming, and I honestly don't know where I'd be without it.

  • "This time is different" has been correct for every major technological shift in history. Electricity was different. Antibiotics were different. Semiconductors were different.

    Gen AI reached 39% adoption in two years (internet took 5, PCs took 12). Enterprise spend went from $1.7B to $37B since 2023. Hyperscalers are spending $650B this year on AI infra and are supply-constrained, not demand-constrained. There is no technology in history with these curves.

    The real debate isn't whether AI is transformative. It's whether current investment levels are proportionate to the transformation. That's a much harder and more interesting question than reflexively citing a phrase that pattern-matches to past bubbles.

    • > The real debate isn't whether AI is transformative.

      No, the debate is very much whether AI is transformative. You don't get to smuggle your viewpoint as an assumption as if there was consensus on this point. There isn't consensus at all.

      1 reply →

    • The problem is that in the middle of such a change, it's hard to recognize whether this is a real change or another Wankel motor.

      Plenty of visual programming languages have tooted their own horns as the next transformative change in everything, and they are mostly just obscure DSLs at this point.

      The other issue is that nobody knows what the future will actually look like, and predictions are often wrong. For example, with the rise of robotics, plenty of 1950s sci-fi assumed androids and smart mechanical arms would be developed any year now. You can find cartoons where people envisioned smart hands giving people a clean shave. (Sounds like the makings of a sci-fi horror novel :D Sweeney Todd sci-fi redux.)

      I think AI is here to stay. At the very least, it seems to have practical value in software development, and that won't be erased anytime soon. Claims beyond that, though, need a lot more evidence to support them. Right now it feels like people are just shoving AI into 1000 places, hoping to find a new industry like software development.

      4 replies →

    • > 39% adoption in two years (internet took 5, PCs took 12).

      Adjust for connectivity and see whether it is different (from pure hype) this time.

    • > Gen AI reached 39% adoption in two years (internet took 5, PCs took 12)

      You're comparing a service that mostly costs a free account registration and is harder to avoid than to use, with devices that cost thousands of dollars in the early days.

    • There's another perspective you can see in the comparison with the dot-com boom. The web is here to stay, but a lot of ideas from the beginning didn't work out, and a lot of companies went bankrupt.

      1 reply →

    • The four technologies I look at are 3D televisions, VR, tablets, and the electric car. 3D televisions and VR have yet to find their moment. If you judge tablets by the Apple Newton and electric cars by the EV1, "this time is different" turns out to be the correct model for the iPad and the Tesla, but not (yet) for 3D televisions or VR. So it could be, but my time machine is as good as yours (mine goes 1 minute per minute, and only forwards; reverse is broken right now), so unless you've got money on it, we'll just have to wait and see where it goes.

  • Can you elaborate on your choice of asymmetric multimethods? I also tinker with my own PL and would like to hear your reasoning and ideas.

    • Sure! First, here are references, in case you want to deep dive:

      1. http://lucacardelli.name/Papers/Binary.pdf

      2. https://www.researchgate.net/publication/221321423_Parasitic...

      Second, asymmetric multimethods give something up: symmetry is a desirable property -- it's more faithful to mathematics, for instance. There's a priori no reason to privilege the first argument over the second.

      So why do I think they are promising?

      1. You're not giving up that much. These are still real multimethods. The papers above show how these can still easily express things like multiplication of a band diagonal matrix with a sparse matrix. The first paper (which focuses purely on binary operators) points out it can handle set membership for arbitrary elements and sets.

      2. Fidelity to mathematics is a fine thing, but it behooves us to remember we are designing a programming language. Programmers are already familiar with the notion that the receiver is special -- we even have a nice notation, UFCS, which makes this idea clear. (My language will certainly have UFCS.) So you're not asking the programmer to make a big conceptual leap to understand the mechanics of asymmetric multimethods.

      3. The type checking of asymmetric multimethods is vastly simpler than that of symmetric multimethods. Your algorithm is essentially a sort among the candidate multimethod instances. For symmetric multimethods, choosing which candidate "wins" requires PhD-level techniques, and the algorithms can explode exponentially with the arity of the function. Not so with asymmetric multimethods: a "winner" can be determined argument by argument, from left to right. It's literally a lexicographical sort, with each step being totally trivial -- which multimethod has a more specific argument at that position (having eliminated all the candidates ruled out by the prior argument positions). So type checking now has two desirable properties. First, it follows a design principle espoused by Bjarne Stroustrup (my personal language designer "hero"): the compiler implementation should use well-known, straightforward techniques. (This is listed as a reason for choosing a nominal type system in The Design and Evolution of C++ -- an excellent and depressing book to read. [Depressing because anything you thought of, Bjarne already thought of in the 80s and 90s.]) Second, this algorithm has no polynomial or exponential explosion: it's fast as hell.

      4. Aside from being faster and easier to implement, the asymmetry also "settles" ambiguities which would exist if you adopted symmetric multimethods. This is a real problem in languages with symmetric multimethods, like Julia, whose implementers resort to heuristics both to avoid undesired ambiguities and to avoid explosions in compile times. I anticipate that library implementers will be able to leverage this facility for disambiguation, in a manner similar to (but not quite the same as) how C++ distinguishes between forward and random access iterators using empty marker types as the last argument. So while technically a disadvantage, I think it will actually be a useful device -- precisely because the type checking mechanism is so predictable.

      5. This predictability also makes the programmer's job easier: they can form an intuition of which candidate method will be selected much more readily than with symmetric multimethods. You already know the trick the compiler is using: it's just double dispatch, the trick used for "hit tests" of shapes against each other. Only here it can be extended to more than two arguments, and of course the compiler writes the overloads for you. (And it won't literally write overloads; it will do what I said above: form a lexicographical sort over the set of multimethods and lower this into a set of tables which can be traversed dynamically. When the types are concrete, the compiler can monomorphize -- the series of "if arg1 extends Tk" checks is done in the compiler instead of at runtime, but it's the same data structure.)

      6. It's basically impossible to do separate compilation with symmetric multimethods. With asymmetric multimethods, it's trivial. To form an intuition, simply remember that double dispatch can easily be done under separate compilation. Separate compilation is mentioned as a feature in both of the cited papers. This is, in my view, a huge advantage. I admit I haven't quite figured out how generics will fit into this -- at least if you follow C++'s approach, you'll have to give up some aspects of separate compilation. My bet is that this won't matter much; the type checking ought to be so much faster that even when a template needs to be instantiated at a call site, the simpler algorithm will mean the user experience is still very good -- certainly faster than C++ (which uses a symmetric algorithm for type checking of function overloads).
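      The lexicographical dispatch described in point 3 is simple enough to sketch in a handful of lines. A minimal Python illustration (my own sketch with invented names -- not code from the cited papers): candidate signatures are filtered position by position, left to right, and at each position only the most specific applicable parameter type survives.

```python
# A minimal sketch of asymmetric multimethod dispatch as a left-to-right
# "lexicographical sort" over candidate signatures. Illustrative only;
# the representation (tuples of parameter classes) is invented.

def dispatch(candidates, arg_types):
    """candidates: list of signatures, each a tuple of parameter classes.
    arg_types: tuple of the classes of the actual arguments."""
    for pos, arg in enumerate(arg_types):
        # Keep candidates whose parameter at this position accepts the argument.
        applicable = [c for c in candidates if issubclass(arg, c[pos])]
        # Of those, keep only the most specific parameter type(s) here;
        # earlier positions have already eliminated the rest.
        candidates = [c for c in applicable
                      if not any(issubclass(o[pos], c[pos]) and o[pos] is not c[pos]
                                 for o in applicable)]
    if len(candidates) != 1:
        raise TypeError("no unique applicable method")
    return candidates[0]

class Matrix: pass
class Sparse(Matrix): pass

# For (Sparse, Sparse) arguments, (Sparse, Matrix) beats (Matrix, Sparse):
# the first position settles it, so the ambiguity a symmetric scheme would
# have to report never arises.
assert dispatch([(Matrix, Matrix), (Sparse, Matrix), (Matrix, Sparse)],
                (Sparse, Sparse)) == (Sparse, Matrix)
```

      Note how the asymmetry does the disambiguation for free: once an argument position picks a winner, later positions never get the chance to create a tie.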

      To go a bit more into my "vision" -- the papers were written during a time when object orientation was the dominant paradigm. I'd like to relax this somewhat: instead of classes, there will only be structs. And there won't be instance methods; everything will be a multimethod. So instead of multimethods being "encapsulated" in their classes, they'll be encapsulated in the module in which they're defined. I'll adopt the Python approach where everything is public, so you don't need to worry about accessibility. Together with UFCS, this means there is no "privileging" of the writer of a library. It's not like in C++ or Java, where only the writer of the library can leverage the succinct dot notation for frequently used methods. An extension can import a library, write a multimethod providing new functionality, and that can be used with the exact same notation as the methods of the library itself. (I always sigh when I see languages that, having made the mistake of distinguishing between free functions and instance methods, "fix" the problem that you can only extend a library from the outside using free functions -- which have a less convenient syntax -- by adding yet another type of function, an "extension function". In my language, there are only structs and functions -- it has the same simplicity as Zig and C in this sense, only my functions are multimethods.)

      Together with my ideas for how the parser will work, I think this language will offer -- much like Julia -- attractive opportunities to extend libraries -- and compose libraries that weren't designed to work "together".

      And yeah, Claude Code and Gemini are going to implement it. Probably in Python first, just for initial testing, and then they'll port it to C++ (or possibly self-host).

When I look at LLMs as an interface, I'm reminded of back when speech-to-text first became mainstream. So many promises about how this is the interface for how we'll talk to computers forevermore.

Here we are a few decades later, and we don't see business units using Word's built-in dictation feature to write documents, right? Funny how that tech seems to have barely improved in all that time. And despite dictation being far faster than typing, it's not used all that often because the error rate is still too high for it to be useful: errors in speech-to-text are fundamentally an unsolvable problem (you can only get so far with background-noise filtering, accounting for accents, etc.).

I see the parallel in how LLM hallucinations are fundamentally an unsolvable component of transformer-based models, and I suspect LLM usage in 20 years will be around the level of speech-to-text today: ubiquitous in the background, used here and there to set a timer or talk to a device, but ultimately not useful for any serious work.

  • I think there is a second reason people still type, and it's relevant to LLMs. Typing forces you to slow down and choose your words. When you want to edit, you are already typing, so it doesn't break the flow. In short, it has a fit to the work that speech-to-text doesn't.

    LLMs create a new workflow wherever they are employed. Even if capable, that is not always a more desirable/efficient experience.

  • I type faster than I think, and being able to edit gives typing the edge over speech-to-text. I don't believe the analogy is fundamentally comparable.

  • I'd say speech-to-text is unsolvable for a more fundamental reason: it's hard to actually speak out an entire document flawlessly in one take.

    Spoken language is very different to written language, which is why for example you can easily tell when an article is transcribing a spoken interview.

  • Yeah this is exactly my view. We've had several years of work on the tech, and LLMs are just as prone to randomly spitting out garbage as they were the first day. They are not a tool which is fit for any serious work, because you need to be able to rely on your tools. A tool which is sometimes good and sometimes bad is worse than having no tool at all.

  • The completely different way people are experiencing AI is fascinating.

    In my world AI is already far more influential than text to speech.

    People on here act like we don’t know if AI will be useful. And I’m sitting over here puzzled because of how fucking useful it is.

    Very strange.

  • I'm curious about the statement that hallucinations are "fundamentally unsolvable". I don't think an AI agent has left a hallucination in my code - by which I mean a reference to something which doesn't exist at all - in many months. I have had great luck driving hallucinations to effectively 0% by using a language with static typechecking, telling LLMs to iterate on type errors until there are none left, and of course having a robust unit and e2e test suite. I mean, sure, I run into other problems -- it does make logic errors at some rate, but those I would hardly categorize the same as hallucinations.

    • So type errors are not hallucinations in your book, but "a reference to something which doesn't exist at all" is?

      In the context of AI most people I know tend to mean wrong output, not just hallucinations in the literal sense of the word or things you cannot catch in an automated way.

      2 replies →

    • Maybe you're lucky. I had Opus 4.6 hallucinate a non-existing configuration key in a well known framework literally a few hours ago.

      Granted, it fixed the problem in the very next prompt.

      1 reply →

    • ChatGPT 5.2 kept gaslighting me yesterday, telling me that LLMs were explainable with Shapley values, and it kept referencing papers which mention LLMs and SHAP -- but which are actually about LLMs being used to explain the SHAP values of other ML models.

      I encounter stuff like this every week, I don't know how you don't. I suppose a well-structured codebase in a statically typed language might not provide as much of a surface for hallucinations to present themselves? But like you say, logical problems of course still occur.

      2 replies →

    • >> I don't think an AI agent has left a hallucination in my code

      I literally just went on Gemini, latest and best model and asked it "hey can you give me the best prices for 12TB hard drives available with the British retailer CeX?" and it went "sure, I just checked their live stock and here they are:". Every single one was made up. I pointed it out, it said sorry, I just checked again, here they are, definitely 100% correct now. Again, all of them were made up. This repeated a few times, I accused it of lying, then it went "you're right, I don't actually have the ability to check, so I just used products and values closest to what they should have in stock".

      So yeah, hallucinations are still very much there and still very much feeding people garbage.

      Not to mention I'm a part of multiple FB groups for car enthusiasts and the amount of AI misinformation that we have to correct daily is just staggering. I'm not talking political stuff - just people copy pasting responses from AI which confidently state that feature X exists or works in a certain way, where in reality it has never existed at all.

      1 reply →

The hype around AI is admittedly annoying - especially from the Wall St crowd who don't know how to pronounce 'Nvidia' correctly, and who haven't managed to internalize the fact that the chatbots they use hallucinate.

It really is 'different', though, in the same way the Internet was.

It took about 20 years (i.e., since The World ISP) for the Internet to work its way into every facet of life. And the dot-com bubble popped halfway through that period of time.

AI might 'underwhelm' for another five or ten years. And then it won't. Whether that's good or bad, I don't know.

    • The only people underwhelmed by AI in February 2026 are people who have formed an identity around being AI skeptics over the last couple of years and are struggling to shed it. I haven't met anyone who has seriously used the new models who isn't at least a bit awed and disturbed.

    • That's very true in terms of how capable these chatbots clearly are, but I believe the author was using 'underwhelming' to refer to the societal impact.

      So far, life goes on roughly the same as it did five years ago. This can feel 'underwhelming' in contrast to the onslaught of public discussion about, and huge investments in, AI.

      Most of us here on HN are programmers, and we all know how radically LLMs have changed our code projects. Even so, the change to our everyday lives (aside from work or hobby projects) is not, just yet, glaringly obvious. This year, it's mainly... every website shoving in an AI box that nobody seems to want!

      1 reply →

    • Not true. I'm a really heavy user of AI, and it's improved my productivity dramatically as a developer, but it doesn't work in every situation, even in programming. I see it as an indispensable tool, but it's not, right now, a tool that will replace me as a programmer, product manager, salesperson, marketer, or (in my case) owner and investor.

      Will that happen in the future? Maybe. But I don't have enough insight into how AI is evolving in the labs to make a judgment on that.

    • This statement is really annoying and getting boring. There are A LOT of us who have built careers evaluating technology with healthy skepticism, finding where it works and where it doesn't, excited to share and learn -- and we've heard "this time it's different" many times. Now, because we refuse to jump in without that same nuance and thought and proclaim "everything's different overnight!", we're branded as Luddites when we're really trying to find a balance.

      I don't hear people saying "nothing is going to change", but I do hear questions about the timeline and if the current levels of investment match returns. Branding these people as stuck in some sort of negative identity is bullshit.

      3 replies →

    • You’re creating a false dichotomy to alienate perceived opponents. Frankly, it’s really annoying and close-minded, and you haven’t contributed anything to the conversation.

    • You're likely to find more nuance in opposing views than your "underwhelmed by AI" generalisation could represent.

  • "AI is a bubble!"

    "AI will change everything!"

    Few seem to understand that both of the above can be true. The parallel you draw to the internet revolution is apt; dot-coms were both a bubble and changed everything.

    • It literally describes the Gartner hype cycle. This article is pointless; the only thing that matters is what survives the cycle with over 1M users. AI will have billions of users once the hype cycle is on its back end.

  • What world are you living in where AI is underwhelming currently? I can’t even comprehend this. Are you just not using it or something?

  • I think a good analogy is the way word processors changed printing. Suddenly anyone with access to a computer had the ability to do professional-level editing and layout. Most of them didn't have the taste or skills to use the tools to the fullest, but it still opened up a ton of possibilities that weren't available before, because it was never practical to hire an actual professional to do a poster for a dinky church bake sale. Now, church bake sales can have pretty slick-looking posters (and websites), depending on whether any of the volunteers cares enough to make one.

    The stuff LLMs will democratize will be a lot more impactful than nice posters for car-wash fundraisers, though. So in that sense it will be different, but I don't think it will crack the market for proficient experts in the field, in the same way Photoshop didn't destroy graphic design and CAD didn't destroy drafting. It may get rid of the market for a lot of second-tier bootcamp-grad talent, though, so I wouldn't be getting into that right now if I could help it.

    • I think this is exactly right. I've been thinking of "this time" as similar to the advent of digital spreadsheets. Spreadsheets existed for thousands of years, but spreadsheet programs transformed spreadsheet work that took hours or weeks into seconds. You still had to know what you were doing, and if you did, you were easily 10x faster than those who didn't.

      I think we are in a similar situation with code generation now, then only difference in my mind is that LLMs come with a massive platform risk. Who's to say that one day anthropic decides my company is too much of a competitor to use their tool (like they've already done with openai) or what if they decide that instead of pulling their product from my use they just make it generate worse code, or even insert malicious payloads. A dependence on these tools is wildly more risky than dependency on a word processor or a spreadsheet program. It reminds me of the arguments around net neutrality and I cannot fathom how people building on top of, and with, these tools do not see the mountain of risks around them.

> Blockchain... NFTs
> The problem is, the same dudes who were pumped for all of that bollocks now won't stop wanging on about Artificial Intelligence.

I was firmly in the camp that blockchain was not a viable solution to any problem, and that NFTs sound stupid. I think AI is much different than that list. So, there goes your argument?

  • > I was firmly in the camp that blockchain was not a viable solution to any problem, and that NFTs sound stupid. I think AI is much different than that list. So, there goes your argument?

    Squares are rectangles. The existence of rectangles that aren't squares doesn't negate that.

  • You need to re-evaluate your logic here: if you were a blockchain/NFT booster who doesn't believe AI is different, you could argue you've disproved their argument. You have not.

  • Yeah that comparison doesn't pass the smell test. Blockchain/crypto were purely financial instruments and for better or worse, a new financial instrument is very different than a new tech innovation; tbh there was a thin veneer of tech when it comes to crypto/blockchain, but the magic was because of the money, not because of the tech.

    AI is different because the magic clearly is because of the tech. The fact that we get this emergent behavior out of (what essentially amounts to) polynomial fitting is pretty surprising even for the most skeptical of critics.

  • I think the author is saying that a specific crowd, which happened to be very vocal and excited about web3 and NFTs, is also very vocal and excited about AI. In my personal experience they are right, a lot of the hustler types around me who were trying to get everyone to "invest" in digital land are now doomposting about AI.

    It's not a very legible situation for people outside of the profession, and a lot of them believe it's just another grift that will blow up in a few years.

By the looks of it, 2026 might be the year where reality and fiction will finally collide with AI and we'll be able to see if all the hype was warranted.

But like all the previous hype cycles, most of the people who were the loudest won't say they were wrong; they'll move on to the next thing, pretending they were never the ones who portrayed AI as the Holy Grail.

  • There are all sorts of algorithms in use that were once thought of as AI, but transitioned to being mere algorithms well before they entered public awareness, if they did that at all. Some are still useful and used everywhere, but they have never been thought of as AI by the public. For them, AI is a term that has long been reserved for some far-off, sci-fi future.

    LLMs are not artificial general intelligence (i.e. not sci-fi AI). Why haven't they transitioned to being mere algorithms by now? Why is the public being told AI is finally arriving when it's really just another algorithm?

    We have some truly slick and shady corporations involved in the bubble right now, and they're marketing LLMs like tobacco. LLMs have been pushed out, at immense cost, to the public in a way that makes them more directly accessible to average people than any past algorithm. Young children can ask an LLM to do their homework for them. Middle managers can ask an LLM to create a (shitty) ad campaign for them. Corporations have gone to tremendous expense to make that widely available and, for the moment, mostly free. They seem to be following the Joe Camel school of marketing: get them hooked while they're young so they come to you first when they're older! The only difference is that nobody is stepping in to stop the new Joe Camel from handing out free samples to kids.

    Then there's the "go big" aspect of the bubble. The major competitors are trying to out-spend each other to dominance, but the sums are so colossally big that their bubble is affecting global GPU, memory, and storage prices. This bubble is going to stress power grids wherever it operates and do considerable environmental harm. The financial games being played behind the bubble are absolutely stupid. The results, so far, are tantalizing for billionaires: LLMs offer the promise of being able to fire all their pesky and annoying human workers. It won't deliver on that, and none of these companies is ever going to make enough to pay their debts. There might be "too big to fail" government bailouts, but there are going to be some big bankruptcies too.

    Useful algorithms will come out of all this, a lot of tears too, but not "AI".

  • AI is real, but the socio-political environment is far from conducive to some form of productive use of it - as opposed to using it as a war machine - AI isn't going to fail in that role, but very few will be happy about it.

    I mean, disillusionment is the least of my worries.

  • > and we'll be able to see if all the hype was warranted.

    Umm, what? For the past 3 years, every year I've said something along the lines of "even if models stop improving now, we'll be working on this for years, finding new ways to use it and make cool stuff happen". The hype is already warranted. To have used these tools and not be hyped is simply denial at this point.

    • Maybe AI is useful to you, but the US economy is currently buoyed by promises of AI replacing the workforce across the board.

      Most of the Mag-7 are planning to spend over $500B on capex this year alone building out datacenters for AI pipelines that have yet to prove they can generate a sustainable profit. Yes, AI is useful in some environments, but the current pricing is heavily subsidized. So my point stands: the hype is not warranted.

      7 replies →

    • I think the point is that AI has to go much further and faster than it has in the past 3 years to justify the investments being made from the hype. The hype did its job; now the AI industry has to execute and create the returns they promised. That is still very much up in the air, and if they can't, then the tech was overhyped.

      2 replies →

  • > most of the people that were the loudest won't say they were wrong

    I was so expecting to find this wind-up aimed at those peddling the "AI is hype" laziness.

    It's laziness because they have few CS fundamentals to base such claims on; the deductions can be made, just not clearly by people who need to study a lot more.

    It's like watching an invisible train (visible to those with strong CS) rolling down the tracks at a leisurely pace. Those sitting in their stalled car on the tracks are busy tweeting about "AI HPY PE TRAIN." Until it wrecks their car, the gimmick is free oxygen. It's a lot easier to write articles than it is to build GPUs and write programs.

    • > It's laziness because they have little CS fundamentals to base such claims on

      So, what CS fundamentals do you need to evaluate whether AI is the real thing or will disappoint in the future? Coding agents were met with skepticism until a few months ago, when Anthropic introduced its new model and, with it, a hype train that cannot be rationally justified. Look, SOTA LLMs, and coding agents in particular, are impressive. However, current predictions about the future of software development (and the world in general) are speculative. There is little to no data showing whether AI can deliver on its promises. How could there be in this short time frame? No one knows what the future will hold, no one knows how coding agents will be integrated into our work life and everyday life in the long run, or what hard limitations they will reveal. No one can tell you how professions will change in the coming years; every prediction is purely speculative, and anyone making prophecies is either trying to cope with the uncertainty themselves or has some stake in the AI bet. It would be nice if people were actually humble enough to admit that they have no idea what will happen in the future, instead of writing the hundredth doom and gloom post.

      2 replies →

I’m doing enterprise coding tasks that used to take a month of whole-team coordination, from mockups through development and testing, in 3 days now. It’s all test-driven development: Codex 5.3 and a small team of two people who know how to hold it right orchestrating the agents. There’s no reason not to work this way. The sociotechnical engineering aspects of this change are fascinating and rewarding to solve.

  • I work for an old enterprise, so far rather conservative with LLM/AI usage. However, Copilot CLI adoption in the last 2 weeks is spreading like wildfire. Codex 5.3, a good instructions file, and it works. Features are getting done and delivered in days, proper test coverage is done, proper documentation is in place. Onboarding to it is also very fast.

  • Many of my industry friends and I were skeptics about all the things the OP mentions, and still are. And yet, I am able to push 30-40K lines of nearly perfect code a day now.

    It's different just like the steam engine was different, except technology moves 100x faster now than it did then. It's different and the same.

    • The 40k lines of code a day crowd are amusing. In solving any problem solvable by code, there's a ratio of non-coding work to coding work, and Codex et al. help immensely with the coding work but help less with the non-coding work.

      Non-coding work is thinking about the system architecture, thinking about how data should flow, thinking about the problem to be solved, talking with people who will use it, discovering what their objectives are.

      Producing 40k lines of code per day simply means you're not doing any of that work: the work that ensures you're building something worth building.

      Which is why the result is massive, pointless things that don't do the things people actually need, because you've not taken any time to actually identify the problems worth solving or how to solve them.

      It's a form of mania that recalls Kafka's The Burrow, where an underground creature builds and builds an endless series of catacombs without much purpose or coherence. When building becomes so easy after being so hard -- and when it becomes more fun to build and watch Codex's streams of diffs fly by than to plan -- we forget the purpose of building, and building becomes its own purpose. Which is why we usually see so little actual productive impact on the world from the "40k lines of code a day" cohort.

    • > "I am able to push 30-40K lines of nearly perfect code a day now."

      It is physically and physiologically impossible for anyone to be reviewing "30-40K lines of nearly perfect code a day" to the extent needed to push it with confidence in a sensible development process.

    • Why do you and many of your industry friends conveniently never actually post your 'perfect code' when asked for proof? I've asked like five different people now who make these claims and they just vanish into the ether.

      1 reply →

>3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.

For what it’s worth, not a single other technology in the list made any sort of impact on my work. For better or worse, LLMs did.

Well, okay, quantum computing actually affected me a lot because I worked at a quantum hardware manufacturer, but that’s different.

LLMs have not radically transformed the world yet because the number of people capable of solving problems by typing into a blinking cursor on a blank screen is actually quite small. Take that subset of the population and reduce it to those who can effectively write communicative prose, and it's even smaller still.

It's just an interface problem. The VT100 didn't change the world overnight either.

  • There's another point, too. Detractors say LLMs will never advance to whatever threshold they consider meaningful. Fine. We're working on other paradigms, too, though. Just because a lot of people are productizing LLMs doesn't mean the state of the art isn't advancing in parallel and AGI isn't in the cards.

  • Agree, LLMs are just another tool. Treating them as chatbots is a very basic way of using them. The future is intelligent engineers embedding them in traditional systems and having them perform specific roles.

Author forgot Segway. Remember when it was going to fundamentally change humanity?

  • Their Ninebot escooters are pretty damn good, far better than most random brands.

    I spent most of Covid in VRChat and met my current live-in gf, so the metaverse was real for me too.

    I also made decent money selling crypto, so that part was real for me too.

    And AI coding, for as dumb as even the best models are, still enabled me to create things that I wanted to, but wouldn't have had time or gotten nearly as far without.

    I dunno if the author realizes, but all the things they mentioned did materialize in one way or another, just not exactly how the hype described it.

    Maybe if they could let go of some of the cynicism, they could find something to be optimistic about. Nothing ever goes exactly as planned, but that doesn't mean nothing is good.

    • > I dunno if the author realizes, but all the things they mentioned did materialize in one way or another, just not exactly how the hype described it.

      From the post, which is not a very long one: "All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists"

      1 reply →

    • But there's a spectrum of responses to these technologies, from knee-jerk cynicism to genuine moral disgust. "Useful" and "good for people/society/humanity" don't always go hand-in-hand, particularly if you take origins and power into account.

  • Heh - that went right off the cliff, when... well, I will let the reader research that themselves...

    • The guy who died on one was Jimi Heselden, the British entrepreneur who bought the company from its American inventor, Dean Kamen. Dean is alive; however, he was recently found to have hung out with the "disgraced financier".

  • I see hoverboards everywhere, which are the self balancing scooter tech from the Segway. Many little ebikes as well making deliveries.

    75% of restaurant orders are delivery now due to widespread personal electric transportation. It already has fundamentally changed humanity.

    https://youtu.be/KOSUEFqszK8

The post nicely lists a bunch of failed hyped tech:

> 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.

...conveniently doesn't list a bunch of hyped tech that hasn't failed:

> microchips, PCs, the internet, ecommerce, cloud, EVs, 5G

...and presents this as evidence that the current hyped tech (AI) will fail:

> Seems like you say that about every passing fancy - and they all end up being utterly underwhelming.

When the article needs to construct disingenuous arguments, I'm not interested in its conclusion.

But wait! If you actually read to the end, there's a plot twist!

> The ideology of "winner takes all" is unsustainable and not supported by reality.

Who said anything about winner takes all? You just burned a "this time is different" straw man and then concluded that "winner takes all" is not realistic?

At this moment I'm wondering if the article was in fact written by a quantized 8B LLM. Surely people don't do such non-sequiturs and then expect to be taken seriously.

But of course not. This is not an argument. This is preaching to the choir.

Preach, brother, preach.

  • Exactly! If this post had been written 20 years ago it would have started with

    Internet, handheld computers, electric cars... The problem is the same, dudes.

    Putting beanie babies in with Quantum Computing and Nuclear Power completely ignores the potential life changing elements of some technologies, even if they don't work.

    Oh, and he put smart glasses in there, so he'll be eating his words in 2 years.

> 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.

I’ve never heard of half of these things, and the other half are mostly consumer electronics or specific product names. The closest example here is Quantum Computing, which is also a serious technology in development. I think for the OP these are all tech buzzwords that he invests in without understanding what they really are. That’s why he thinks all these unrelated things are the same.

  • I'd say AR & VR were hyped to be as big as AI is now, but just haven't fully delivered on the promise yet. 3D printing was similarly hyped for a time. Same with blockchain. Nuclear power in the 50s was hyped to be the future of energy.

    The point is to take the hype with a grain of salt and knowledge that not all hyped technologies transformed the world as promised. Maybe AI is like the internet or electricity. But maybe the claims about AGI/ASI and full automation are just hype.

I always figured AI would be a big deal from childhood onwards and wrote about it for my college entrance exam in 1980 or so. That doesn't apply to any of

>3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX

It's quite a different thing, more on the level of the evolution of life on earth and quite unlike all that junk.

For me, this captures it:

"All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists. I don't doubt that AI will be a part of the future - but it is obviously just going to be one of many technology which are in use.

> No enemies had ever taken Ankh-Morpork. Well technically they had, quite often; the city welcomed free-spending barbarian invaders, but somehow the puzzled raiders found, after a few days, that they didn't own their horses any more, and within a couple of months they were just another minority group with its own graffiti and food shops.

- Terry Pratchett's Faust Eric"

I get that everyone has a strong opinion on whats-going-to-happen-with-AI, but I really think nobody knows.

We're in that part of turbulence where we don't know if the floating leaf is going to go left or right.

The people who will have the hardest time with this transition are those who go all in on a specific prediction and then discover they were wrong.

If you want to avoid that, you can try very very hard to just not be wrong, but as I said, I don't think that's possible.

Instead, we need to be flexible and surf the wave as it comes. Maybe AI fades away like VR. Or maybe it reshapes the world like the internet/smartphones. The hardest thing to do right now, when everyone is yelling, is to just wait and see what happens. But maybe that's the right thing to do.

[p.s.: None of this means don't try to influence events. If you've got a frontier model you've been working on, please try to steer us safely.]

I got my first tech job in 2001. I've been doing this a while and ridden all the waves.

There are two kinds of waves. The ones that don't require collective belief in them to succeed, and those that do.

The latter are things like crypto and social media. The former is mobile... and AI.

If no one else in the world had access to AI except me, I would appear superhuman to everyone in the world. People would see my level of output and be utterly shocked at how I can do so much so quickly. It doesn't matter if others don't use AI for me to appreciate AI. In fact, the more other people don't use AI, the better it works out for me.

I'm sympathetic to people who feel like they are against it on principle because scummy influencers are talking about it, but I don't think they're doing themselves any favors.

  • > If no one else in the world had access to AI except me, I would appear superhuman to everyone in the world.

    You really wouldn't. AI simply isn't that useful because it is so unreliable.

To me, AI seems likely to become a new user interface, just as the GUI did relative to the CLI.

It abstracts away a lot of the mechanics of working with data/information.

Helpful, when literacy seems to be trending in a downward direction.

Perhaps this is the failure to understand the distinction between a technology and a meta-technology. Upgrading the factory that builds the robots is much different than upgrading the robots.

  • A technology is a set of methods and tools for achieving the desired results (generally in a reliable and reproducible way). Or, in a broader sense of the word, it's the idea of applying scientific knowledge to solving practical problems, and the process of such application.

    What is meta-technology?

  • Or (taking the other side) failure to notice the distinction between a technology and a pump-and-dump. The technology (attention/diffusion) is awesome. The hype is unbelievable. Literally.

Everything is the same until it's not; good luck predicting when "until it's not" is on the horizon, though. Isn't technology innovation a power-law thing? Everything hums along fairly regularly and then, out of the blue, there's a massive impact. Personally, I think AI has made a pretty large impact in software dev and the overall tech industry, but I don't see AGI any time soon (and that hype has died down), and therefore I don't see the economics working out. The coding tools, API integrations, chatbots, those are great, but I don't see them producing the returns required to keep companies like OpenAI running unless OpenAI takes all the customers and all the ad clicks from everyone else (Anthropic, Alphabet, X, Amazon, Meta, even Microsoft). I just don't see that happening.

Said elsewhere on this post... "AI is a bubble!" "AI will change everything!"

Is just propaganda...

"Iran is 2 weeks from a nuclear weapon." "We obliterated Iran's nuclear dreams."

"Russia is fighting with shovels." "Russia is on the verge of swarming Europe."

What would Joost Meerloo say about it, I wonder.

A very cynical article.

Actually IT IS different. If they manage to create viable small nuclear reactors or quantum computers, the world will change like it changed with the Watt steam engine.

Why isn't he talking about the Internet, trains, electricity, nuclear bombs, rockets, aviation, or engines? Because they worked, like AI works today.

All of them were bubbles at the time and they changed the world forever. AI is changing the world AND it is a bubble.

AI is here to stay. It will improve and it will have consequences. The fact that a robot can do things with its hands is actually significant, whether you like it or not.

  • > Why isn't he talking about the Internet, trains, electricity, nuclear bombs, rockets, aviation, or engines? Because they worked, like AI works today.

    Except for the minor bit that AI doesn't work today, and it is not yet clear if it ever will.

This Andrew Klavan interview on AI is worth your time, if not an independent submission:

https://www.youtube.com/watch?v=SZFhFGpDWGw

"Today, I'm speaking with Stephen C. Meyer, Director of The Discovery Institute's Center for Science and Culture, and George D. Montañez, Director of the AMISTAD Lab at Harvey Mudd College–both of whom are extremely knowledgeable on the topic of artificial intelligence. During the course of our conversation, they discuss the asymmetry between human intelligence & AI, the inability of AI to ascribe meaning to raw data, and the limitations of large language models. The real question though is: are we screwed? Let's find out."

Honestly, the remixes this generation suck compared to priors.

"This time will be different," they said about the Metaverse, ignoring the vast tranches of MUCKs, MUDs, MMOs, LSGs, and repeated digital real estate gold rushes of the past half-century. Billions burned on something anyone who played Second Life, Entropia, FFXIV, EQ2, VRChat, or fucking Furcadia could've told you wasn't going to succeed, because it wasn't different, it just had more money behind it this time.

"NFTs are different", as collectors of trading cards, art prints, coins, postage stamps, and an infinite glut of collectibles looked at each other with that knowing, "oh lord, here we go" glance.

"Crypto is different", as those who paid attention to history, remembering corporate scrip, gift cards, hedge funds, the S&L crisis, Enron, the MBS crisis, and the multitude of prior currency-related crises and grifts, bristled at the impending glut of fraud and abuse by those too risky to engage in traditional commerce.

And thus, here we are again. "This time is different," as those of us who remember the code generators of yore polluting our floppy drives, and the sales grifters convincing our bosses that their program could replace those expensive programmers, roll our eyes at the obvious bullshit on naked display, then vomit from stress as over a trillion dollars is diverted from anything of value into their modern equivalent - with all the same problems as before.

I truly hate how stupidly people with money actually behave.

This lazy kind of post annoys me because it groups any of us saying that this technology is profoundly different in with all the town criers who have said this kind of thing before, even if we have never said it before and were even skeptical of past declarations.

Effectively, it’s a statement saying nothing can ever be profoundly different, because people have said it before and been wrong.

Lazy.

I enjoyed Dave Cridland's comment more than the article. The article is dismissive of AI and other technologies in an unsubstantiated way.

New things are happening and it's exciting. "AI bad" statements without examples feel very head-in-sand.

  • OP here. Unless you're still watching Quibi on your curved TV, delivered via WiMAX, then yeah, I'd say it was pretty bloody substantiated.

    I like technology. I made a decent living from it. But if I had chased every hyped fad that was promised as the next big thing, I doubt I'd be as happy as I am now.

    • You claim to cite 'technologies' but include a few brands and companies for some reason.

      The one you keep citing, here and in the article, Quibi, lives on in technology-form (the spirit of your article we must presume) as an 8 billion dollar business in China and is rapidly upending every Hollywood film studio.

      So, arguments about substantiation or even 'this time' fall flat in the face of not even understanding your own message.

    • Just chiming in to say thanks for the Pratchett quote! I dare say he's about to beat out Douglas Adams for my top author. Feet of Clay and Hogfather should be must-reads for people dealing with AI right now, imo.

    • You're not really saying anything, though. For every tech hype that has failed, there is another that's changed the world. This IS changing the world and our industry, regardless of whether it reaches the heights of the hypers.

      I mean, you're just stating that sometimes tech doesn't meet its hype. What's insightful about that? It's a given; cherry-picking examples doesn't prove your case.

      12 replies →

  • It's not unsubstantiated though. The claim is "People frequently assert that 'this time is different' and they are almost always wrong" and it proceeded to provide a reasonable list of analogous manias.

    This only doesn't feel like substantiation if you reject the notion that these cases are analogous.

    "You shouldn't eat that."

    "Why not?"

    "Everyone else who's eaten it has either died or gotten really sick."

    "But I'm different! Why should I listen to your unsubstantiated claims?"

    "(lists names of prior victims)"

    "That doesn't mean anything. I'm different. You're just making vague and dismissive unsubstantiated claims."

    The claim isn't "AI bad" the claim is more along the lines of "there's a lot of money changing hands and this has all the earmarks of a classic hype cycle; while attention/diffusion models may amount to something the claims of their societal impacts are almost certainly being exaggerated by people with a financial stake in keeping the bubble inflated as long as possible, to pull in as many suckers as possible."

    If you want another example (which you won't find analogous if you've already drunk the koolaid):

    https://theblundervault.substack.com/p/the-segway-delusion-w...

LLMs are really a marvel; GPT-2 actually inspired me to go back to college (not directly — rather, I needed to understand how it worked).

I have unlimited derision for morally spineless worms who disingenuously make it out to be more than it is -- looking at Dario, Sam, and the silly CEO of Control AI. Also, I hate to say it, but Andrej Karpathy on twitter -- he's a worthless follow now. I can't blame them, but I am daily exasperated by media figures who can't help but go with what they hear prominent individuals in the field say.

If I were a junior now, and less confident, I would be abandoning my career in this climate.

LLMs are not going away. They will get a little better than they are now, and new model paradigms will come around at some point. But this tale of massive redundancy and skyrocketing unemployment is not going to come from LLMs.

This is the only reason why I cannot wait for a pop, and pray to God that it comes sooner than later. I just want to feel good about technology again. I want to tinker, to feel positivity, to know how sustainable the tools I'm using actually are.

I don't want to be reminded daily of the disgusting reality of unbridled capitalism.

What is the point being made here? Some past technologies were overhyped, therefore AI is overhyped? Well, some past consumer technologies did change the world (smartphones, texting, video streaming, dating apps, online shopping, etc), so where's the argument that AI doesn't belong to this second group?

Also, every single close friend of mine makes some use of LLMs, while none of them used any of the overhyped technologies listed. So you need an especially strong argument to group them together.

For all of those, there is a Gartner hype cycle. The thing that matters is, when it comes out the back end, is it 1M, 1B, or 6B people using it?

For all the things you listed, fewer than 1,000 people are using them. With AI we're clearly not finished with the Gartner hype cycle, but the back end is going to be over a billion users.

If you can't distinguish the actual utility and progress of AI from its annoying hype-men, then it's hard to take your dismissal of AI seriously.

Failure to appreciate changes in AI will have left you calling every shot wrong over the past 5 years. While AI models continue to improve at an exponential rate, you'll cling to your facile maxims like "dude it's just predicting the next token it isn't real intelligence".

Nuclear weapons - this time is different

Internet - this time is different

iPhone - this time is different

This just looks like someone hearing about tons of hyped things from people across the internet (which, almost by definition, is full of false signals and grifters), imagining they all come from the same person, and then arguing with how wrong that person always is. How is that interesting?

I invested in Tesla extremely early (2011) because electric cars, if built correctly, would obviously make great cars, and Elon was one of the few people I actually thought had a shot at doing it.

I was right that blockchain was BS and all the "not sure about Bitcoin, but blockchain will be big" people were idiots.

I've been right for the last couple of years on AI, and that people were vastly underestimating its coding potential. And I put my money where my mouth was here. In 2021 when GPT-3 came out, I decided almost immediately I needed to invest a significant amount of my net worth in Google simply as a hedge against AI destroying knowledge-work jobs. Which at the time I thought was probably going to happen around 2030, not realising how far LLMs could go with reasoning.

I'm not particularly intelligent ("only" top 1-2% IQ), but my ability to predict the future is very good. If you have a skill you're unusually good at, you might relate to how strange it is that other people find so hard that thing you find kinda easy. For me that's predicting things, and computers.

Since I was a young teen I have been worrying about AI. Most of my IRL best friends I have made from talking about AI risk in 2010s when I was studying AI.

Admittedly I got some of the details wrong back then. In 2010 I thought a lot of manual labour jobs would be easier to automate first – warehouse work, mail, taxis, buses, trains, etc. I worried primarily about the economic and political ramifications, and much less about ASI scenario (at least in this half of the century). But I think still I got the general timeframes and direction right. This was the decade I was concerned about.

I'm so scared right now... My whole life I've had nightmares about AI. I know there are some people who talk about how AI is an existential risk, but it feels like they don't internalise it like I do. They're not prepping like me for one, not that you really can prep for what's coming. If they're concerned why don't they have the nightmares of the omnipresent AI which you can't out think or punch to protect those you love? AI is so powerful in the scariest ways. Super viruses, mass surveillance and control, mind reading, unimaginable sci-fi weapons. It's like a horror story, but suddenly real.

I am an OG AI doomer, but until the last few months I've at least always had some doubt in my mind about whether I'm right, perhaps not about the risk of AI broadly, but about whether we'd actually be able to develop highly capable AIs while I still have a lot of my life ahead of me.

In my opinion this time is different, and what I've been worrying about for the last couple of decades is now here.

We are collectively the indigenous peoples of America and the Europeans have just arrived in the new world. The risk vectors are now endless and how this all plays out is hard to know exactly. What we do know is that the majority of ways this will play out are bad, and some are incomprehensibly bad. Some may achieve status and wealth in the near-term, but longer-term we're all dead, or worse.

I always worry these comments make me sound like a lunatic, I think I am, but I hope I am. I hope you will all forgive me, but I just need to shout about this tonight while I still can. We need to stop this insanity. Data centers need to be nuked. You may doubt me now, but in time you will understand. Hopefully I won't be around to say I told you so. Please make the best of the time we have left.

  • > I needed to invest a significant amount of my net worth in Google simply as hedge against AI destroying knowledge work jobs

    I felt similarly, and did similarly, with both GOOGL and MSFT. I'm not an AI "doomer" in the Yudkowsky/LolzWrong sense, but I do think it's quite sad that generative AI is the first branch of the AI "tech tree" we raced up. AI art, especially, is tragic.

I hoped the article would be a meta-discussion of "time" and perhaps relativity or some other phenomenon. Sigh, it's an investment thesis saying "This Time is Different" is a risky bet.

  • That sounds like an interesting article. You should write it.

    • Or have an LLM write it and then we can judge whether the OP is wrong about whether "this time is different".

I would suggest editing the title to "This Time is Different". I think that captures the essence much better.

Love the Sir Terry reference.