Comment by submeta

1 year ago

Deep Research is in its "ChatGPT 2.0" phase. It will improve, dramatically. And to the naysayers: when OpenAI released its first models, many doubted that they would be good at coding. Now, two years later, look at Cursor, aider, and all the LLMs powering them, and at what you can do with a few prompts and iterations.

Deep research will dramatically improve as it’s a process that can be replicated and automated.

This is like saying: y = e^(-x) + 1 will soon be 0, because look at how fast it dropped through y = 2!
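The point of the analogy can be checked numerically: the curve starts at y = 2 and falls fast, but it asymptotes at y = 1 and never gets anywhere near 0. A quick sketch:

```python
import math

# y = e^(-x) + 1: starts at 2, drops quickly, but flattens out at 1.
def y(x: float) -> float:
    return math.exp(-x) + 1

for x in (0, 1, 5, 10):
    print(x, y(x))
# The early drop (2 -> ~1.37 by x = 1) looks dramatic, yet the floor is 1,
# so extrapolating "it'll soon be 0" from the initial slope is wrong.
```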

  • Many past technologies have defied "it's flattening out" predictions. Look at personal computing, the internet, and smartphone technology.

    By conflating technology's evolving development path with a basic exponential decay function, the analogy overlooks the crucial differences in how innovation actually happens.

    • > Many past technologies have defied “it’s flattening out” predictions.

      And many haven't.

    • Everything you listed was subject to the effects of Moore's Law, which explains their trajectories, but Moore's Law doesn't apply to AI in any way. And it's dead.

  • Tony Tromba (my math advisor at UCSC) used to tell a low-key infuriating, sexist, and inappropriate story about a physicist, a mathematician, and a naked woman. It ended with the mathematician giving up in despair and a happy physicist yelling "close enough."

    • (from a sibling's link)

      > A mathematician and a physicist agree to a psychological experiment. The mathematician is put in a chair in a large empty room and a beautiful naked woman is placed on a bed at the other end of the room. The psychologist explains, "You are to remain in your chair. Every five minutes, I will move your chair to a position halfway between its current location and the woman on the bed." The mathematician looks at the psychologist in disgust. "What? I'm not going to go through this. You know I'll never reach the bed!" And he gets up and storms out. The psychologist makes a note on his clipboard and ushers the physicist in. He explains the situation, and the physicist's eyes light up and he starts drooling. The psychologist is a bit confused. "Don't you realize that you'll never reach her?" The physicist smiles and replies, "Of course! But I'll get close enough for all practical purposes!"

      Is that it? Is it sexist because the physicist and mathematician are attracted to the naked woman?


Disagree - I actually think all the problems the author lays out about Deep Research apply just as well to GPT-4o / o3-mini-whatever. These things are just absolutely terrible at precision and recall of information.

  • I think Deep Research shows that these things can be very good at precision and recall of information if you give them access to the right tools... but that's not enough, because of source quality. A model that has great precision and recall but uses flawed reports from Statista and Statcounter is still going to give you bad information.

    • Deep Research doesn't give the numbers that are in Statcounter and Statista. It's choosing the wrong sources, but it's also failing to represent them accurately.


Unfortunately that's not how trust works. If someone comes into your life and steals $1,000, and then the next time they steal $500, you don't trust them more, do you?

Code is one thing, but if I have to spend hours checking the output, then I'd be better off doing it myself in the first place, perhaps with the help of some tooling created by AI, and then feeding that into ChatGPT to assemble into a report. After it showed off a report about smartphones that was total crap, I can't remotely trust the output of Deep Research.

> Now after two years look at Cursor, aider, and all the llms powering them, what you can do with a few prompts and iterations.

I don't share this enthusiasm. Things are better now because of better integrations and better UX, but the LLM improvements themselves have been incremental lately, with most of the gains coming from the layers around them (e.g. you can easily improve code generation if you add an LSP to the loop / ensure the code actually compiles, instead of trusting the output of the LLM blindly).
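The "check instead of trust" loop described above can be sketched in a few lines. This is a hypothetical illustration, not any real tool's implementation: `generate` stands in for an LLM call, and the check here is only Python's built-in `compile()` syntax check rather than a full LSP.

```python
# Minimal sketch: validate LLM output before accepting it, and feed any
# error back into the next prompt. `generate` is a hypothetical stand-in
# for an actual LLM call (prompt in, code string out).
def accept_or_retry(generate, prompt: str, max_iters: int = 3):
    feedback = prompt
    for _ in range(max_iters):
        code = generate(feedback)
        try:
            compile(code, "<llm-output>", "exec")  # syntax check only
            return code  # it compiles: accept
        except SyntaxError as err:
            # Loop the error back, the way a compiler/LSP in the loop would.
            feedback = f"{prompt}\nFix this error: {err.msg} (line {err.lineno})"
    return None  # give up after max_iters attempts
```

A real setup would go further (type-check, run tests, query an LSP for diagnostics), but even this cheap syntax gate catches a whole class of blind-trust failures.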

I agree, they are only starting the data flywheel there. And at the same time making users pay $200/month for it, while the competition is only charging $20/month.

And note, the system is now directly competing with "interns". Once the accuracy is competitive (is it already?) with an average "intern", there'd be fewer reasons to hire paid "interns" (more expensive than $200/month). Which is maybe a good thing? Fewer kids wasting their time/eyes looking at the computer screens?