Comment by xmprt

1 year ago

On the flip side, you might end up getting scammed even worse because of incorrect analysis. For example, if ChatGPT hallucinates some data/features through faulty research, then you might be surprised when you actually make the decision.

While this will undoubtedly happen, I don't understand why it's treated as a new phenomenon; the internet is filled with data of questionable accuracy. One should always be validating/verifying information, even if Deep Research put it together.

  • I think the difference with Deep Research – and other hallucination- and extrapolation-prone research agents – is that, without assistance, verifying synthesized information is much more of a slog than, say, doing your own research and judging the quality of sources as you go, which "deduplicates" querying and verifying.

    Of course there are straightforward ways, in terms of UX, to make verification orders of magnitude easier – e.g. inline citations – but TFA argues that OpenAI isn't quite there yet.

    Ultimately, if a research agent requires us to verify significant AI-synthesized conclusions, as TFA argues, then I'd say research agents haven't actually automated the tricky but routine work; they keep us thinking about our research at a lower level than we would like.

    • From my experience (having hit the Deep Research quota), I wouldn't use it to build data tables like the article did, but for qualitative or text-based research, it's incredibly useful. Useful enough that I'd justify multiple accounts just to increase my quota. People keep citing hallucinations, but in the reports I've built I haven't noticed them being a problem; then again, I'm not doing quantitative analysis with it.

Yeah, probably true. But if it includes links and sources, at the very least it'll save me some time. I can cross-check faster than I can start the research from scratch.