
Comment by margalabargala

16 hours ago

Sure, I suppose that's something that someone who doesn't understand the discussion might say.

A person who better comprehends what they read might properly contextualize it within the larger conversation, where the point that stands is that LLMs and ceilings are both useful, that neither is doomed such that no one should use them, and that individual instances of failure are uncommon enough that they are not a reason for others to avoid the category.

> Sure, I suppose that's something that someone who doesn't understand the discussion might say.

I'm going to be frank: you are the one misunderstanding here (and being rather rude about it). You are responding to an argument no one is making.

To put a fine point on it, you said this:

> Entropy may mean all ceilings collapse eventually, but that doesn't mean we aren't able to make useful ceilings.

But you were responding to a comment saying this:

> Except your ceiling can and will fall on you *unless you take preventative measures*, entirely due to molecular interactions within the material.

Emphasis added. They are saying maintenance is necessary, not that a safe ceiling is unachievable. It's obviously achievable, we've all seen it achieved.

They further say:

> It boggles the mind to let an LLM have access to a production database *without having explicit preventative measures and contingency plans* for it deleting it.

Emphasis added. When they say it boggles the mind to deploy an LLM without the proper measures, the implication is that it does make sense to deploy it with the proper measures.
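For what it's worth, such preventative measures needn't be exotic. A minimal sketch of one such layer (the names here are hypothetical, not anything from the article): refuse to forward anything but single read-only SQL statements from an agent to production.

```python
import re

# Hypothetical guard: only read-only statements reach production.
READ_ONLY = re.compile(r"^\s*(SELECT|EXPLAIN|SHOW)\b", re.IGNORECASE)

def execute_guarded(sql: str, run):
    """Run `sql` via `run` only if it is a single read-only statement.

    `run` is whatever callable actually talks to the database.
    Raises PermissionError instead of forwarding anything else.
    """
    # Reject multi-statement strings; a trailing semicolon is fine.
    statements = [s for s in sql.split(";") if s.strip()]
    if len(statements) != 1 or not READ_ONLY.match(statements[0]):
        raise PermissionError(f"blocked non-read-only SQL: {sql!r}")
    return run(sql)
```

A database-level read-only role is of course the stronger measure; an application-side check like this is just defense in depth, which is exactly the "contingency plans" framing above.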

> ...the point that stands is that LLMs and ceilings are both useful, neither are doomed such that no one should use them, ...

I have not seen a single person in this subthread say that LLMs aren't useful or that they are doomed. Some people do say that. But the people you're talking to haven't.

I try to avoid these petty "I brought the receipts" comments, but I don't like the way you're being snarky toward people whose only crime is engaging with the premises you set up. The faults you are finding are faults you introduced. I'd appreciate it if you would avoid that in the future.

  • If that's what you got out of the above conversation, that is about as fundamental a misunderstanding as the one at the top of this thread, which says "It is fundamental to language modeling that every sequence of tokens is possible". I could say something rude here about both mistakes being made by the same person, but since you brought it up, I won't.

    If you want to take a comb to it, the comment saying this:

    > Except your ceiling can and will fall on you unless you take preventative measures, entirely due to molecular interactions within the material

    was already off the plot. What was being discussed wasn't some specific molecular process, it was the false premise "oh molecules move around randomly so your ceiling might just collapse of its own accord because the beam decided to randomly disintegrate". That's not something that happens.

    You said "The sequence of tokens that would destroy your production environment can be produced by your agent, no matter how much prompting you use". This is analogous to "the ceiling could just collapse on you due to random molecular motion, no matter how much maintenance you do or what materials you use".

    Make sense now?

    Your edit at the bottom of your top comment does better than your original statement.

    • > What was being discussed wasn't some specific molecular process, it was the false premise "oh molecules move around randomly so your ceiling might just collapse of its own accord because the beam decided to randomly disintegrate". That's not something that happens.

      Except it does happen. That's why buildings get condemned and eventually turn to rubble.

      To the exact point: I have a product from a couple of years ago using an old model from OpenAI. It's still running, and all it does is write a personality report based on scores from the test. I can't update the model without seriously rewriting the entire prompt system, but the model has degraded over the years as well. Ergo, my product has degraded of its own accord, and there is nearly nothing I can do about it. My only choice is to finagle newer models into giving the correct output, but they hallucinate at much higher rates than older models.

    • > I could say something rude here about both mistakes being made by the same person, but since you brought it up I won't.

      I'd encourage you to desist from rudeness, not just when people point it out to you, but at all times.

      > You said "The sequence of tokens that would destroy your production environment can be produced by your agent, no matter how much prompting you use". This is analogous to "the ceiling could just collapse on you due to random molecular motion, no matter how much maintenance you do or what materials you use".

      If prompt engineering is effective (analogous to performing the necessary maintenance and selecting the correct materials), what is your explanation for the incident in the article?
