Comment by manmal
2 days ago
> Today, in ~5 minutes I can do a literature review that would have taken me easily 10+ hours five years ago.
And it will not yield the same outcome you would have had. The taste you exercise in clicking links and pre-filtering as you do your own research is no longer applied if you outsource this. I'm guilty of it myself. But let's not kid ourselves.
I've had GPT Pro think for 40 minutes about the ideal reverse osmosis setup for my home. It came up with something that could have supplied 10 houses and would have cost 20k, even though I told it exactly what my water consumers are and asked it to research their peak usage. It simply failed to observe that you can buffer water in a tank.
There's a reason they let you steer GPT Pro as it goes, now.
I don't claim using AI is the same as doing it yourself. My point is that AI capabilities are much more extensive than "fancy search". By giving a metric and an example I hoped to make that point without getting into hair-splitting.
I wouldn't call that hair-splitting. I'm saying it's not a real literature review; it's just even fancier search.
Words hint at concept space, which is messy and interconnected. I think a charitable reading can grasp the difference between "powerful search, kind of like Google as of 2020, or LexisNexis" and LLM chatbot interfaces... I would hope. But I've been developing software since the 1980s, so I can't speak for newer generations who might not have a four-decade view. I was at meetups in San Francisco around 2018 where people were excited to find multimodal reasoning in early proto-language models. There have been qualitatively noticeable historical shifts. We don't have to agree on the exact labels, but what LLMs enable is different enough from, say, ElasticSearch circa 2020 to be worth calling out.