
Comment by seertaak

8 hours ago

To my mind at least, it is different. I lean heavily on AI for both admin and coding tasks. I just filled out a multipage form to determine my alimony payments in Germany. Gemini was an absolute godsend, helping me answer questions, translate to English, draft explanations, and write emails requesting time extensions to the Jugendamt case worker.

This is super scary stuff for an ADHDer like me.

I have an idea for a programming language based on asymmetric multimethods and whitespace-sensitive, Pratt-parsing-powered syntax extensibility. Gemini and Claude are going to be instrumental in getting that done in a reasonable amount of time.

My daily todos are now being handled by NanoClaw.

These are already real products; it's not mere hype. There's simply no comparison to blockchain or NFTs or the other tech mentioned. Is some of the press on AI overly optimistic? Sure.

But especially for someone who suffers from ADHD (and a lot of debilitating trauma and depression), and can't rely on their (transphobic) family for support -- it's literally the only source of help, however imperfect, which doesn't degrade me for having this affliction. It makes things much less scary and overwhelming, and I honestly don't know where I'd be without it.

My empirical experience is that people with ADHD are more vulnerable to getting addicted to LLMs due to the feeling of instant gratification. But when PRs take ages and 3 different people are reviewing, you are just making prompting a group effort. If you think meetings are a time-waste multiplier, you should watch LLM PRs.

For that reason, and from my own experience with AI users being unaware of how bad a job the LLM is doing (I've had to confront multiple people about their code quality suddenly dropping), if someone says they can rely on LLMs, I've learned not to trust them.

When I was younger, if I had an idea for a project I would spend time thinking of a cool project name, creating a git repo, and designing a UI for my surely badass project. All easy stuff that gave me the feeling of progress. Then I would immediately lose interest when I realized the actual project idea was harder than that, and quit. This is the vibe I get from LLM use.

I pray you do not become the next HN user to be screwed over by over-trusting an LLM when you have it fill out legal documents for you.

"This time is different" has been correct for every major technological shift in history. Electricity was different. Antibiotics were different. Semiconductors were different.

Gen AI reached 39% adoption in two years (internet took 5, PCs took 12). Enterprise spend went from $1.7B to $37B since 2023. Hyperscalers are spending $650B this year on AI infra and are supply-constrained, not demand-constrained. There is no technology in history with these curves.

The real debate isn't whether AI is transformative. It's whether current investment levels are proportionate to the transformation. That's a much harder and more interesting question than reflexively citing a phrase that pattern-matches to past bubbles.

  • > The real debate isn't whether AI is transformative.

    No, the debate is very much whether AI is transformative. You don't get to smuggle your viewpoint in as an assumption, as if there were consensus on this point. There isn't consensus at all.

    • No one is smuggling this in. The debate is over. It's transformative. We're in the midst of transformation.

  • The problem is that, in the middle of such a change, it's hard to recognize whether this is a real change or another Wankel motor.

    Many a visual programming language has tried to toot its own horn as the next transformative change in everything, and they are mostly just obscure DSLs at this point.

    The other issue is that nobody knows what the future will actually look like, and predictions are often wrong. For example, with the rise of robotics, plenty of 1950s sci-fi thought it was just logical that androids and smart mechanical arms would be developed next year. I mean, you can find cartoons where people envisioned smart hands giving people a clean shave. (Sounds like the makings of a sci-fi horror novel :D Sweeney Todd sci-fi redux.)

    I think AI is here to stay. At the very least it seems to have practical value in software development. That won't be erased anytime soon. Claims beyond that, though, need a lot more evidence to support them. Right now it feels like people are just shoving AI into 1000 places hoping they can find a new industry like software dev.

    • > Plenty a visual programming language has tried to toot their own horns as being the next transformative change in everything, and they are mostly just obscure DSLs at this point.

      But how many of your non-nerdy friends were talking about them, let alone using them daily?

    • The practical value is there, if they manage to keep the price at the current levels or lower.

      But if they don't, and if I have to think twice about how much every request is going to cost, the cost-benefit analysis will look different fast.


    • I once owned a Mazda RX2 ... my second car, IIRC. The Wankel motor wasn't revolutionary, but it was pretty good.

  • > Gen AI reached 39% adoption in two years (internet took 5, PCs took 12)

    You're comparing a service that mostly costs nothing beyond a free account registration and is harder to avoid than to use, with devices that cost thousands of dollars in their early days.

    • That is a fair point. You could look at enterprise adoption, though; it's also very high, and not cheap at all.

  • > 39% adoption in two years (internet took 5, PCs took 12).

    Adjust for connectivity and see whether it is different (from pure hype) this time.

  • There's another perspective you can see in the comparison with the dot com boom. The web is here to stay, but a lot of ideas from the beginning didn't work out and a lot of companies turned bankrupt.

    • The original concept of the web, hyperlinked documents originating from high-quality institutions, is pretty much dead. Now we have an application platform that happens to have adopted some similar protocols and is 99% slop.

  • The four technologies I look at are 3D televisions, VR, tablets, and the electric car. 3D televisions and VR have yet to find their moment. Judging tablets by the Apple Newton and electric cars by the EV1, "this time is different" turns out to be the correct model for the iPad and Tesla, but not (yet) for 3D televisions or VR. So it could be, but my time machine is as good as yours (mine goes 1 minute per minute, and only forwards; reverse is broken right now), so unless you've got money on it, we'll just have to wait and see where it goes.

Can you elaborate on your choice of asymmetric multimethods? I also tinker with my own PL and wanted to hear your reasoning and ideas.

  • Sure! First, here are references, in case you want to deep dive:

    1. http://lucacardelli.name/Papers/Binary.pdf

    2. https://www.researchgate.net/publication/221321423_Parasitic...

    Second, asymmetric multimethods give something up: symmetry is a desirable property -- it's more faithful to mathematics, for instance. There's a priori no reason to prefer the first argument over the second.

    So why do I think they are promising?

    1. You're not giving up that much. These are still real multimethods. The papers above show how these can still easily express things like multiplication of a band diagonal matrix with a sparse matrix. The first paper (which focuses purely on binary operators) points out it can handle set membership for arbitrary elements and sets.

    2. Fidelity to mathematics is a fine thing, but it behooves us to remember we are designing a programming language. Programmers are already familiar with the notion that the receiver is special -- we even have a nice notation, UFCS, which makes this idea clear. (My language will certainly have UFCS.) So you're not asking the programmer to make a big conceptual leap to understand the mechanics of asymmetric multimethods.

    3. The type checking of asymmetric multimethods is vastly simpler than that of symmetric multimethods. Your algorithm is essentially a sort among the various candidate multimethod instances. For symmetric multimethods, choosing which candidate "wins" requires PhD-level techniques, and the algorithms can explode exponentially with the arity of the function. Not so with asymmetric multimethods: a "winner" can be determined argument by argument, from left to right. It's literally a lexicographical sort, with each step being totally trivial -- which multimethod has the more specific argument at that position, having eliminated the candidates ruled out at prior argument positions. So type checking now has two desirable properties. First, it satisfies a design principle espoused by Bjarne Stroustrup (my personal language-designer "hero"): the compiler implementation should use well-known, straightforward techniques. (This is listed as a reason for choosing a nominal type system in The Design and Evolution of C++ -- an excellent and depressing book to read. [Because anything you thought of, Bjarne already thought of in the 80s and 90s.]) Second, this algorithm has no polynomial or exponential explosion: it's fast as hell.

    4. Aside from being faster and easier to implement, the asymmetry also "settles" ambiguities which would exist if you adopted symmetric multimethods. This is a real problem in languages with symmetric multimethods, like Julia. The implementers of that language resort to heuristics, both to avoid undesired ambiguities and to avoid explosions in compile times. I anticipate that library implementers will be able to leverage this facility for disambiguation, in a manner similar to (but not quite the same as) the way C++ distinguishes between forward and random-access iterators using empty marker types as the last argument. So while technically a disadvantage, I think it will actually be a useful device -- precisely because the type checking mechanism is so predictable.

    5. This predictability also makes the job of the programmer easier: they can form an intuition of which candidate method will be selected much more readily with asymmetric multimethods than with symmetric ones. You already know the trick the compiler is using: it's just double dispatch, the trick used for "hit tests" of shapes against each other. Only here, it can be extended to more than two arguments, and of course, the compiler writes the overloads for you. (And it won't actually write overloads; it will do what I said above: form a lexicographical sort over the set of multimethods, and lower this into a set of tables which can be traversed dynamically -- or, when the types are concrete, the compiler can monomorphize, so the series of "if arg1 extends Tk" checks is done in the compiler instead of at runtime. But it's the same data structure.)

    6. It's basically impossible to do separate compilation using symmetric multimethods. With asymmetric multimethods, it's trivial. To form an intuition, simply remember that double dispatch can easily be done with separate compilation. Separate compilation is mentioned as a feature in both of the cited papers. This is, in my view, a huge advantage. I admit I haven't quite figured out how generics will fit into this -- at least if you follow C++'s approach, you'll have to give up some aspects of separate compilation. My bet is that this won't matter much; the type checking ought to be so much faster that even when a template needs to be instantiated at a call site, the faster and simpler algorithm will mean the user experience is still very good -- certainly faster than C++ (which uses a symmetric algorithm for type checking of function overloads).
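
    To make point 3 concrete, here is a minimal, hypothetical Python sketch of the left-to-right resolution it describes. All the names (Matrix, BandDiagonal, resolve, the implementation strings) are invented for illustration; this is not the actual language, just the selection algorithm in miniature:

```python
# Hypothetical sketch of asymmetric multimethod resolution.
# All class and function names here are invented for illustration.

class Matrix: pass
class BandDiagonal(Matrix): pass
class Sparse(Matrix): pass

# Candidate multimethod instances: (parameter types, implementation name).
CANDIDATES = [
    ((Matrix, Matrix), "generic_mul"),
    ((BandDiagonal, Matrix), "band_mul_any"),
    ((BandDiagonal, Sparse), "band_mul_sparse"),
]

def resolve(args):
    # Keep only the applicable candidates...
    live = [(sig, name) for sig, name in CANDIDATES
            if all(isinstance(a, t) for t, a in zip(sig, args))]
    # ...then settle the winner one argument position at a time,
    # left to right: at each position keep the candidates whose
    # parameter type is most specific. Each step is a plain
    # subclass check -- no global comparison of whole signatures.
    for pos in range(len(args)):
        best = [c for c in live
                if all(issubclass(c[0][pos], other[0][pos]) for other in live)]
        if best:
            live = best
    assert len(live) == 1, "no match or ambiguous"
    return live[0][1]

print(resolve((BandDiagonal(), Sparse())))  # band_mul_sparse
print(resolve((BandDiagonal(), Matrix())))  # band_mul_any
print(resolve((Sparse(), Sparse())))        # generic_mul
```

    Because each position is decided by ordinary subclass checks, there is no combinatorial blow-up as arity grows.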
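
    And the table lowering from point 5 is just classic double dispatch generalized: one lookup per argument, nested. Again a hypothetical Python sketch with invented names:

```python
# Hypothetical sketch of asymmetric dispatch lowered to nested tables,
# i.e. classic double dispatch. All names are invented for illustration.

class Shape: pass
class Circle(Shape): pass
class Rect(Shape): pass

# TABLE[type of arg 1][type of arg 2] -> implementation.
TABLE = {
    Circle: {Circle: lambda a, b: "circle/circle",
             Rect:   lambda a, b: "circle/rect"},
    Rect:   {Circle: lambda a, b: "rect/circle",
             Rect:   lambda a, b: "rect/rect"},
}

def hit_test(a, b):
    # Two cheap lookups, one per argument, instead of a global search
    # over all candidate pairs; more arguments mean more nesting levels.
    return TABLE[type(a)][type(b)](a, b)

print(hit_test(Circle(), Rect()))  # circle/rect
```

    When the argument types are concrete, the same table walk can be resolved at compile time instead of at runtime.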

    To go a bit more into my "vision" -- the papers were written during a time when object orientation was the dominant paradigm. I'd like to relax this somewhat: instead of classes, there will only be structs. And there won't be instance methods; everything will be a multimethod. So instead of multimethods being "encapsulated" in their classes, they'll be encapsulated in the module in which they're defined. I'll adopt the Python approach where everything is public, so you don't need to worry about accessibility. Together with UFCS, this means there is no "privileging" of the writer of a library. It's not like C++ or Java, where only the writer of the library can leverage the succinct dot notation to access frequently used methods. An extension can import a library, write a multimethod providing new functionality, and that can be used with the exact same notation as the methods of the library itself. (I always sigh when I see languages that, having made the mistake of distinguishing between free functions and instance methods, "fix" the problem that you can only extend a library from the outside using free functions -- which have a less convenient syntax -- by adding yet another kind of function, an "extension function". In my language, there are only structs and functions -- it has the same simplicity as Zig and C in this sense, only my functions are multimethods.)
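
    For a real-world taste of the "anyone can extend a library with the same notation" idea, Python's functools.singledispatch is an existing example of dispatch that is asymmetric in the extreme (it dispatches on the first argument only), and it lets outside code register new behaviour without touching the library. The types below are invented for illustration:

```python
# functools.singledispatch: real, existing asymmetric dispatch on the
# first argument. The Vec/MyMatrix types are invented for illustration.
from functools import singledispatch

# "Library" code:
class Vec: ...

@singledispatch
def describe(x):
    return "something"

@describe.register
def _(x: Vec):
    return "a vector"

# "User" code elsewhere can register behaviour for its own type
# without modifying the library:
class MyMatrix: ...

@describe.register
def _(x: MyMatrix):
    return "my matrix"

print(describe(Vec()))       # a vector
print(describe(MyMatrix()))  # my matrix
print(describe(42))          # something
```

    The language sketched above would generalize this to all argument positions, with UFCS making the extensions read like built-in methods.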

    Together with my ideas for how the parser will work, I think this language will offer -- much like Julia -- attractive opportunities to extend libraries -- and compose libraries that weren't designed to work "together".

    And yeah, Claude Code and Gemini are going to implement it. Probably in Python first, just for initial testing, and then they'll port it to C++ (or possibly self-host).

This comment is scary. You don’t control these technologies, you are growing dependent on stilts that could disappear any moment.