Comment by caseyy

1 year ago

In my experience, Perplexity and OpenAI's deep research tools are so misleading that they are almost worthless in any area worth researching. This becomes evident if one searches for something they already know, or tries to verify the facts the models produce. In my area of expertise, video game software engineering, about 80% of the insights are factually wrong, cocktail-party-level takes.

The "deep research" features were much more effective at getting me to pay for both subscriptions than in any valuable data collection. The former, I suspect, was the goal anyway.

It is very concerning that people will use these tools. They will be harmed as a result.

> “They will be harmed as a result.”

Compared to what exactly? The ad-fueled, SEO-optimized nightmare that is modern web search? Or perhaps the rampant propaganda and blatant falsehoods on social media?

Whoever is blindly trusting what ChatGPT is spitting out is also falling for whatever garbage they’re finding online. ChatGPT is not very smart, but at least it isn’t intentionally deceptive.

I think it’s an incredible improvement for the low information user over any current alternatives.

  • It’s deceptive by design: there is no reasoning behind it, and the humans who created it know this.

    • Clearly it is able to solve various logical problems, and can therefore at least imitate logical thought. Is that not reasoning?

      And there are plenty of logical problems that many humans can’t solve. Does that mean they’re not capable of reasoning?

      At what point would you say something has reasoning? I’d argue that it’s more about how good something is at reasoning, rather than saying it is or isn’t capable of reasoning in absolute terms.

  • OpenAI knows the tool it markets as “research” does not pass muster. It hallucinates, misquotes sources, and does not follow the formal inference logic used in research.

    AI slop already produces many plausible-sounding articles used as infotainment and in academia. We already know this slop adds considerable noise to the signal, and that poor signal slows actual research in both cases. But until now, the slop wasn't masquerading specifically as research! It was presented as an assistant, which implies no accuracy guarantees. “Research,” by the word’s common meaning, does.

    This is why it will do harm. There is no doubt in my mind. And I believe OpenAI knows it. They have quite smart engineers, certainly clever enough to figure it out.

    • If your concern is primarily about researchers in academia using this and believing what it says without skepticism, then higher education has failed them.

      And if you think that all published “research” was guaranteed to be accurate before AI tools became available, then I think you should start looking more critically at sources yourself.
