Comment by DoctorOetker

4 days ago

Beginners on the Linux command line so frequently complain about the irregularity or redundancy in command-line tool conventions (sometimes it's the actual command parameters: -h, --help, or /h?; other times: man vs. info; etc...)

When the first transformers that could do more than poetry or rough translation appeared, everybody noticed their flaws, but I observed that even a dumb-enough (or smart-enough-to-be-dangerous?) LLM could be useful in regularizing parameter conventions. I would ask an LLM how to do this or that, and it would "helpfully" generate non-functional command invocations that otherwise appeared very 'conformant', to the point that sometimes my opinion was that, even though the invocation was wrong given the current calling convention for that specific tool, it would actually improve the tool if it accepted that human-machine ABI or calling convention.

Now take the example of man vs. info. I am not proposing to let AI decide that we should all settle on man, nor that we should all use info instead; but with AI we could have each tool's documentation made whole in its missing half, and then it's up to the user whether they prefer man or info to fetch the documentation for that tool.

Similarly for calling conventions: we could ask LLMs to collect parameter styles, analyze command calling conventions and parameters, and then find one or more canonical ways to express them, perhaps consulting an environment variable to figure out which calling convention the user declares they want to use.
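A minimal sketch of that idea. Everything here is made up for illustration: the ARG_STYLE environment variable, the style names, and the per-style translation rules are hypothetical, not any existing standard. A thin wrapper renders one canonical flag spec in whatever convention the user declares:

```python
import os

# Hypothetical canonical flag spec: long names only, True = boolean switch.
CANONICAL = {"output": "out.mp4", "verbose": True}

# Illustrative per-style translation rules (not any real tool's conventions).
STYLES = {
    "gnu":   lambda k, v: [f"--{k}"] if v is True else [f"--{k}={v}"],
    "short": lambda k, v: [f"-{k[0]}"] if v is True else [f"-{k[0]}", str(v)],
    "dos":   lambda k, v: [f"/{k}"] if v is True else [f"/{k}:{v}"],
}

def render_args(spec, style=None):
    """Render a canonical flag spec in the convention the user declares
    via the (hypothetical) ARG_STYLE environment variable."""
    style = style or os.environ.get("ARG_STYLE", "gnu")
    out = []
    for key, val in spec.items():
        out += STYLES[style](key, val)
    return out

print(render_args(CANONICAL, "gnu"))  # ['--output=out.mp4', '--verbose']
print(render_args(CANONICAL, "dos"))  # ['/output:out.mp4', '/verbose']
```

The point is only that the translation layer is mechanical once a canonical spec exists; the hard part the comment describes is getting tools (or an LLM shim) to agree on the canonical side.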

Similarly, law professor Rob Anderson joked on X that LLM-hallucinated cases are good law:

https://x.com/ProfRobAnderson/status/2019078989348774129

> Indeed hallucinated cases are "better law." Drawing on Ronald Dworkin's theory of law as integrity, which posits that ideal legal decisions must "fit" existing precedents while advancing principled justice, this article argues that these hallucinations represent emergent normative ideals. AI models, trained on vast corpora of real case law, synthesize patterns to produce rulings that optimally align with underlying legal principles, filling gaps in the doctrinal landscape. Rather than errors, they embody the "cases that should exist," reflecting a Hercules-like judge's holistic interpretation.

  • Seems naive. You can get an LLM to agree with almost anything if you say the right things to it, and it will hallucinate citations to back you up without skipping a beat. You can probably get it to hallucinate case law to legalize murder on Mondays.

    • You’re talking about manipulated/malicious/intentionally steered hallucination, but the parent is referring to trained, emergent hallucination (even if sycophantic). These are two different things and both can occur, but the latter is what’s being tongue-in-cheek referred to by the professor.

It has long been a pet peeve of mine that the *nix world has no standard, reliable convention for interrogating a program for its available flags. Instead there are at least a dozen ways it can be done, and you can't rely on any one of them.
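That "dozen ways" problem can be made concrete with a rough probe. The list of attempts and the probe_help helper below are my own sketch, not any standard; it just tries the common conventions in order until one exits cleanly with output:

```python
import subprocess
import sys

# Common but mutually inconsistent conventions for asking a tool about
# its flags. Order is arbitrary; none is guaranteed to work.
HELP_ATTEMPTS = [
    ["--help"], ["-h"], ["-help"], ["help"], ["-?"], ["/?"],
]

def probe_help(tool):
    """Try each known help convention until one produces non-empty
    output and a zero exit status. Returns (args_used, text) or None."""
    for args in HELP_ATTEMPTS:
        try:
            r = subprocess.run([tool, *args], capture_output=True,
                               text=True, timeout=5)
        except (FileNotFoundError, subprocess.TimeoutExpired):
            continue
        text = r.stdout or r.stderr
        if r.returncode == 0 and text.strip():
            return args, text
    return None

# e.g. probing the running Python interpreter typically succeeds on the
# first attempt, since `python --help` prints usage text and exits 0:
# probe_help(sys.executable)
```

Even this crude probe has to special-case tools that print help to stderr, tools that exit non-zero on --help, and tools that interpret an unknown flag as a filename, which is exactly the complaint.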

Ah yes, the vaunted ffmpeg-llm --"take these jpegs and turn them into an mp4 and use music.mp3 as the soundtrack" command.

  • Ngl.. I can see the merit and simultaneously recoil in horror, as I am starting to understand what Linux greybeards hate about the Windows-ification of Linux (and now the proposed LLM-ification of it :D).