Comment by wavefrontbakc
13 hours ago
I've only found useful answers when I've begun something from completely zero. With things I have even a reasonable grasp on, I start to spot large, dangerous holes in what a model very confidently claims is the right solution. Stuff like bringing a camping stove inside a tent to avoid rainy weather, never mind that doing so is suicidal.
>I web search maybe 10% of what I did six months ago.
Me too, though this is more driven by the total cliff-fall of web search result quality
I'm not sure what you used for it to tell you that?
what should I do with my camping stove if I want to avoid the rain
ChatGPT said: If you want to avoid the rain while using your camping stove, here are your best options:
Shelter Options: Use a Tarp: Set up a tarp high above your stove area with good airflow. Make sure it's not enclosed and water runs off away from you.
Camp Kitchen Tent: Lightweight pop-up canopies or dedicated camp kitchen shelters are perfect. Just make sure it’s ventilated.
Picnic Table Shelter: Many campgrounds have covered areas you can cook under — check for those.
Vestibule (Cautiously): If you have a tent with a large vestibule, you can use the stove there, but only if it's very well-ventilated and you're extremely cautious (fire and carbon monoxide risks are real).
Important Safety Tips: Never use a stove inside a closed tent — risk of carbon monoxide poisoning and fire.
Keep the stove on a stable, flat surface.
Avoid cooking right under trees with low-hanging branches.
Ensure your shelter is tall enough — heat rises.
Would you like a quick sketch or image of a tarp setup that works well in rain?
I'm starting to think that many of the "But the AIs tell me I should drive my car off a cliff!!" posters are just making stuff up.
I've seen enough weird output from some models not to think quite so negatively of the naysayers.
If a "stupid response" happens 1% of the time, and a first session with a model runs four rounds of prompt-and-response, then I'd expect roughly 1 in 25 people to hit one and anchor on models being extremely dumb and/or "just autocomplete on steroids". The first time I tried a local model (IIRC it was Phi-2), I asked for a single-page Tetris web app; it started off bad and halfway through turned into a Python machine-learning script. The first time I used NotebookLM, I had it summarise one of my own blog posts; it missed half and made up clichés about half the rest.
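The arithmetic behind that 1-in-25 figure is just the chance of seeing at least one bad response across independent rounds; a quick sketch (the 1% per-response failure rate and four rounds are the assumptions stated above):

```python
# Probability that a first-time user sees at least one "stupid response",
# assuming a 1% failure rate per response and four prompt-and-response rounds.
p_bad = 0.01   # assumed per-response failure rate
rounds = 4     # rounds in a first session

p_at_least_one = 1 - (1 - p_bad) ** rounds
print(f"{p_at_least_one:.4f}")  # 0.0394, i.e. roughly 1 in 25
```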
And driving off, if not a cliff then a collapsed bridge, has made the news even with AI of the Dijkstra era: https://edition.cnn.com/2023/09/21/us/father-death-google-gp...
No! A friend of a friend asked an AI and the AI said they were real. Honest. But it was the other AIs. Not the one the friend asked.
The problem I have with this conclusion is that "trust but verify" long predates AI models. People can post, and have been posting, total bullshit on the internet since time immemorial. You have never _not_ needed to actually validate the things you are reading.