
Comment by dlachausse

1 day ago

AI tools absolutely can deliver value for certain users and use cases. The problem is that they’re not magic; they’re tools with particular capabilities and limitations. A screwdriver isn’t a bad tool just because it sucks at opening beer bottles.

So what use cases are those?

It seems to me that the limitations of this particular tool make it suitable only for cases where it doesn't matter whether the result is wrong or dangerous, as long as it's convincing. That description seems to cover exclusively various forms of forgery and fraud: spam, phishing, cheating on homework, falsifying research data, lying about current events, etc.

  • Extracting structured data from unstructured text at runtime. Some models are really good at that and it’s immensely useful for many businesses.

    • Except when they "extract" something that wasn't in the source. And now what, assuming you can even detect the tainted data at all?

      How do you fix that, when the process is literally "we throw an illegible blob at it and data comes out"? This is not even GIGO; this is "anything in, synthetic garbage out".

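      For concreteness, about the cheapest detection you can bolt on is a naive grounding check: flag any extracted value that doesn't appear verbatim in the source. A Python sketch (the invoice and field names are made up):

          # Flag extracted string values that don't appear verbatim in the
          # source text. The extraction step itself has already happened,
          # via whatever model or API you use.
          def find_ungrounded_fields(source: str, extracted: dict) -> list[str]:
              haystack = source.lower()
              return [key for key, value in extracted.items()
                      if isinstance(value, str) and value.lower() not in haystack]

          # Made-up example data.
          invoice = "Invoice #4821 from Acme Corp, total due: $1,250.00"
          fields = {"invoice_id": "4821", "vendor": "Acme Corp",
                    "po_number": "PO-7734"}  # "PO-7734" is nowhere in the source

          print(find_ungrounded_fields(invoice, fields))  # ['po_number']

      And even that only catches verbatim fabrications; a paraphrase like "1250 USD" for the total sails straight through, which is rather the point.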

  • > So what use cases are those?

    I think that as software/data people, we tend to underestimate the number of business processes that are repetitive but require natural language parsing. Supply chain is a good example: it basically runs on Excel sheets and email. Traditionally these processes were almost impossible to automate, because reading free-text emails and updating some system based on them was incredibly hard. LLMs make this much, much easier (a sketch of what that can look like is at the end of this comment). This is a big opportunity for lots of companies in normal industries, and there’s plenty of it in tech too.

    More generally, LLMs are pretty good at document summarisation and question answering, so with some guardrails (proper context, maybe multiple LLM calls involved) this can save people a bunch of time.

    Finally, they can be helpful for broad search queries, but this is much, much trickier, as you’d need to build decent context offline and use that, which (to put it mildly) is a non-trivial problem.

    In the tech world, they are really helpful for writing one to throw away. If you have a few ideas, you can now spec them out and get sort-of-working code from an LLM, which lowers the bar to getting feedback and seeing whether the idea works. You really do have to throw it away, though, which is now much, much cheaper with LLMs.

    I do think that if we could figure out context management better (which is basically decent internal search for a company) then there's a bunch of useful stuff that could be built, but context management is a really, really hard problem so that's not gonna happen any time soon.
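
    To make the email-parsing point above concrete, here's a minimal sketch of the shape of that automation (assuming the openai Python SDK's JSON mode; the model name, field names, and email are all made up for illustration):

        import json
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Made-up supplier email of the kind these processes run on.
        email_body = ("Hi, quick update on PO-1182: the container slipped a "
                      "week, new ETA is 14 March. Qty unchanged. Thanks, Raj")

        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content":
                    "Extract the order update as JSON with keys po_number, "
                    "new_eta, quantity_change. Use null for anything not stated."},
                {"role": "user", "content": email_body},
            ],
        )
        update = json.loads(resp.choices[0].message.content)
        # e.g. {"po_number": "PO-1182", "new_eta": "14 March",
        #       "quantity_change": None} -> validate, then update the order
        # system instead of someone hand-keying it from their inbox.

    The hard part, as the sub-thread above points out, is validating the output before it touches your system of record, not making the call itself.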

  • I started a new job recently, and used ChatGPT tons to learn how to use the new tools: Python, OpenCV, FastAPI. I had questions that were too complex for a web search, which ChatGPT answered very coherently! I found it a very good tool to use alongside web search, documentation, and trawling through Stack Overflow.
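
    To give a flavour of the kind of thing I was asking about, here's a hypothetical example (not actual work code; assumes fastapi, python-multipart, opencv-python, and numpy are installed) gluing those tools together:

        import cv2
        import numpy as np
        from fastapi import FastAPI, File, UploadFile

        app = FastAPI()

        @app.post("/brightness")
        async def brightness(file: UploadFile = File(...)):
            # Decode the uploaded bytes into a grayscale OpenCV image.
            buf = np.frombuffer(await file.read(), dtype=np.uint8)
            img = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)
            if img is None:
                return {"error": "could not decode image"}
            return {"width": int(img.shape[1]), "height": int(img.shape[0]),
                    "mean_brightness": float(img.mean())}

    Exactly the sort of cross-library glue that's awkward to piece together from three sets of docs, but quick to ask about.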

  • I personally use it as a starting point for research and for summarizing very long articles.

    I’m a mostly self-taught hobbyist programmer, so take this with a grain of salt, but it’s also been great for giving me a small snippet of code to use as a starting point for my projects. I wouldn’t just check whatever it generates directly into version control without testing it and figuring out how it works first. It’s not a replacement for my coding skills, but an augmentation of them.
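
    As a made-up example of what "testing it first" means for me: if the model hands me a small helper like this, I run a few quick asserts on the edge cases before it gets anywhere near a commit:

        import re

        # Model-generated helper (hypothetical): turn a title into a URL slug.
        def slugify(title: str) -> str:
            slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
            return slug.strip("-")

        # Quick sanity checks: edge cases are where generated code falls over.
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  --weird   input--  ") == "weird-input"
        assert slugify("") == ""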