Comment by aprilthird2021
1 year ago
I'm just realizing this might finally be something that helps me get past analysis paralysis I have before committing to so many decisions online. I always feel like without doing my research, I'll get scammed. Maybe this will help give me a bit more confidence
On the flipside, you might end up getting scammed even worse because of incorrect analysis. For example if ChatGPT hallucinates some data/features through faulty research then you might be surprised when you actually make the decision.
While this will undoubtedly happen, I don't understand why it's treated as a new phenomenon; the internet is filled with data of questionable accuracy. One should always be validating/verifying information, even if Deep Research put it together.
I think the difference with Deep Research – and other hallucination- and extrapolation-prone research agents – is that, without assistance, verifying synthesized information is much more of a slog than, say, doing your own research and judging the quality of sources as you go, which "deduplicates" querying and verifying.
Of course there are straightforward ways, in terms of UX, to make verification orders of magnitude easier – e.g. inline citations – but TFA argues that OpenAI isn't quite there yet.
Ultimately, if a research agent requires us to verify significant AI-synthesized conclusions, as TFA argues, then research agents haven't actually automated the tricky and routine work, and they keep us thinking about our research at a lower level than we would like.
Yeah, probably true. But if it includes links and sources, at the very least it'll save me some time. I can cross-check faster than I can start the research from scratch.
I have found it to be exactly this in a lot of cases. It helps answer, or synthesize the data that answers, questions that are good to know the answers to but not critical for me to understand.