So, do you want to actually have a conversation comparing ChatGPT to Google and Wikipedia, or do you just want to strawman typical AI astroturfing arguments with no regard to the context above?
Ironic, as you're answering someone who talked about correcting a human who had blindly pasted an answer to their question with no human verification.
> So, do you want to actually have a conversation comparing ChatGPT to Google and Wikipedia, or do you just want to strawman typical AI astroturfing arguments with no regard to the context above?
Dunno about the person you're replying to (especially given the irony re that linked reddit thread), but I would like to actually have a conversation (or even just a link to someone else's results) comparing ChatGPT to Google and Wikipedia.
I've met people who were proudly, and literally, astroturfing Wikipedia for SEO reasons. Wikipedia took a very long time to get close to reliable (editors now require citations for claims, etc.), and I still sometimes notice pairs of pages making mutually incompatible claims about the same thing, but I don't have the time to find out which is correct.
Google was pretty reliable for a bit, but for a while now the reliability of its results has been the butt of jokes.
That doesn't mean any criticisms of LLMs are incorrect! Many sources can be wrong at once, and indeed are, including microfilm, books, and newspapers of record. But I think it is fair to compare them: even though they're all very different, they're similar enough for the comparison to be worth making.
Ah yes, the thing that told people to administer insulin to someone experiencing hypoglycemia (likely fatal, BTW) is nothing like a library or a Google search: people blindly believe the output because of the breathless hype.
Yeah, this is why Wikipedia is not a good resource and nobody should use it. Also why Google is not a good resource; anybody can make a website.
You should only trust going into a library and reading stuff from microfilm. That's the only real way people should be learning.
/s
See Dunning-Kruger.
See 4chan during the "crowd wisdom" hype era.