Comment by LeonardoTolstoy

6 months ago

What does a submarine do? Submarine? I suppose you "drive" a submarine, which gets at the idea: submarines don't swim because ultimately they are "driven"? I guess the issue is that we don't make up a new word for what submarines do; we just don't use the human words.

I think the above poster gets a little distracted by suggesting the models are creative, which is itself disputed. Perhaps a better term, as above, would be to just use "model". They are models, after all. We don't make up a new portmanteau for submarines. They float, or drive, or submarine around.

So maybe an LLM doesn't "write" a poem but instead "models a poem", which might indeed take away a little of the sketchy magic and fake humanness they tend to be imbued with.

Depends on whether you are talking about an LLM or to the LLM. Talking to the LLM, it would not understand that "model a poem" means to write a poem. Well, it will probably guess right in this case, but if you go too far out of band it won't understand you. The hard problem today is rewriting out-of-band tasks to be in band, and that requires anthropomorphizing.
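As a rough illustration of that rewriting step (all names here are hypothetical, and no particular LLM API is assumed), a minimal sketch in Python:

    # Minimal sketch: map an "out of band" phrasing onto the anthropomorphic,
    # "in band" phrasing the model has actually seen in its training data.
    REWRITES = {
        "model a poem about": "Write a poem about",
        "synthesize a summary of": "Summarize",
    }

    def to_in_band(request: str) -> str:
        """Rewrite de-anthropomorphized phrasing into a plain instruction."""
        for out_of_band, in_band in REWRITES.items():
            if request.lower().startswith(out_of_band):
                return in_band + request[len(out_of_band):]
        return request  # already in band, pass through unchanged

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": to_in_band("model a poem about submarines")},
    ]
    print(messages[1]["content"])  # -> "Write a poem about submarines"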

A submarine is propelled by a propeller and helmed by a controller (usually a human).

It would be swimming if it were propelled by drag (well, technically a propeller also involves drag in producing thrust, but you get the point). Imagine a submarine with a fish tail.

Likewise, we can probably find an apt description in our current vocabulary for what LLMs do.

A submarine is a boat and boats sail.

  • An LLM is a stochastic generative model and stochastic generative models ... generate?

    • And we are there. A boat sails, and a submarine sails. "A model generates" makes perfect sense to me, and saying ChatGPT generated a poem feels correct to me personally. Indeed, a model (e.g. a linear regression) generates predictions, for the most part.
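
      In the same sense that a fitted regression "generates" predictions; a minimal sketch using scikit-learn, purely for illustration (the toy data is made up):

          import numpy as np
          from sklearn.linear_model import LinearRegression

          # Fit a linear model to a toy dataset, then generate a prediction.
          X = np.array([[1.0], [2.0], [3.0], [4.0]])
          y = np.array([2.1, 3.9, 6.2, 8.1])

          model = LinearRegression().fit(X, y)
          print(model.predict(np.array([[5.0]])))  # the model generates a prediction for x = 5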

I really like that; I think it has the right amount of distance. They don't write, they model writing.

We're very used to "all models are wrong, some are useful", "the map is not the territory", etc.

  • No one was as bothered when we anthropomorphized CRUD apps simply for the purpose of conversing about "them": "Ack! The thing is corrupting tables again because it thinks we are still using API v3! Who approved that last MR?!" The fact that people are bothered by the same language now is indicative in itself.

    If you want to maintain distance, pre-prompt models to structure all conversations without pronouns, framed as an exchange between a non-sentient language model and a non-sentient AGI. You can have the model call you out for referring to the model as existing. The language style this forces is interesting, and potentially more productive, except that there are fewer conversations formed like that in the training dataset. Translation being a core function of language models makes that less important, though.

    As for confusing the map for the territory, that is precisely what philosophers like Metzinger say humans are doing when they take the "self" to be a real thing and themselves to be conscious, when they are just using the reasoning shortcut of narrating the meta-model as being the model.
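
    A rough sketch of what such a pre-prompt might look like (the wording is invented here, and no particular chat API is assumed):

        # Hypothetical system prompt that frames the exchange as between two
        # non-sentient processes and bans first- and second-person pronouns.
        SYSTEM_PROMPT = (
            "This conversation is between a non-sentient language model and a "
            "non-sentient process. Avoid pronouns such as 'I', 'you', and 'me'. "
            "Refer to the language model only as 'the model'. If the other "
            "party refers to the model as existing or as having a self, point "
            "out that this is a category error."
        )

        messages = [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Describe what the model does when asked for a poem."},
        ]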

    • > You can have the model call you out for referring to the model as existing.

      This tickled me. "There ain't nobody here but us chickens".

      I have other thoughts which are not quite crystallized, but I think UX might be having an outsized effect here.


  • What about "they synthesize"?

    It ties in with creation from many sources and with synthetic/artificial data. In my prompts I instruct my coding models with "synthesize" more often than "generate".