Comment by anthonyrstevens

14 hours ago

> I'm starting to think that many of the "But the AIs tell me I should drive my car off a cliff!!" posters are just making stuff up.

I've seen enough weird output from some models that I don't think quite so negatively of the naysayers.

If "stupid response" happens 1% of the time, and the first attempt to use a model has four rounds of prompt-and-response, then I'd expect 1 in 25 people to anchor on them being extremely dumb and/or "just autocomplete on steroids" — the first time I tried a local model (IIRC it was Phi-2), I asked for a single page Tetris web app, which started off bad and half way in became a python machine learning script; the first time I used NotebookLM, I had it summarise one of my own blog posts and it missed half and made up clichés about half the rest.

And driving off, if not a cliff then a collapsed bridge, has made the news even with AI of the Dijkstra's-algorithm era, i.e. GPS routing: https://edition.cnn.com/2023/09/21/us/father-death-google-gp...

No! A friend of a friend asked an AI and the AI said they were real. Honest. But it was the other AIs. Not the one the friend asked.