Comment by dartos

21 hours ago

> LLM costs

Inference costs, not training costs.

> The fact that you can replace programmers

You can’t… not for any real project. For quick mockups they’re serviceable.

> That’s sort of like asking a horse and buggy driver whether automobiles

Kind of an insult to OP, no? Horse and buggy drivers were not highly educated experts in their field.

Maybe take the word of domain experts rather than AI company marketing teams.

> Inference costs, not training costs.

Why does training cost matter if you have a general intelligence that can do the task for you, that’s getting cheaper to run the task on?

> for quick mockups they’re serviceable

I know multiple startups that use LLMs as their core bread-and-butter intelligence platform instead of tuned but traditional NLP models.

> take the word of domain experts

I guess? I wouldn’t call myself an expert by any means but I’ve been working on NLP problems for about 5 years. Most people I know in NLP-adjacent fields have converged around LLMs being good for most (but obviously not all) problems.

> kind of an insult

Depends on whether you think OP intended to offend, I guess.

  • > Why does training cost matter if you have a general intelligence that can do the task for you, that’s getting cheaper to run the task on?

    Assuming we didn’t need to train it ever again, it wouldn’t. But we don’t have that, so…

    > I know multiple startups that use LLMs as their core bread-and-butter intelligence platform instead of tuned but traditional NLP models

    Okay? Did that system write itself entirely? Did it replace the programmers that actually made it?

    If so, they should pivot into a Devin competitor.

    > Most people I know in NLP-adjacent fields have converged around LLMs being good for most (but obviously not all) problems.

    Yeah, LLMs are quite good at common NLP tasks, but AFAIK are not SOTA at any specific task.

    Either way, LLMs obviously don’t kill the need for the NLP field.

The reply didn’t say that the expert is uneducated, just that their tool is obsolete. Better to look at facts as they are; sugar-coating doesn’t serve anyone.

> Maybe take the word of domain experts rather than AI company marketing teams.

Appeal to authority is a well known logical fallacy.

I know how dead NLP is personally because I’ve never been able to get NLP working, but once ChatGPT came around I was able to classify texts extremely easily. It’s transformational.

I was able to get ChatGPT to classify posts based on how political they were on a scale of 1 to 10 and which political leaning they had, and then classify the person’s likely political affiliation.

All of this without needing to learn any APIs or anything about NLPs. Sorry, but given my experience, NLPs are dead in the water right now, except in terms of cost. And costs will go down exponentially, as they always do. Right now I’m waiting for the RTX 5090 so I can just do it myself with an open source LLM.
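For what it's worth, the workflow described above is easy to sketch. The prompt wording, JSON schema, and helper names below are my own illustration, not the commenter's actual setup, and the model call itself is omitted:

```python
import json
import re

def build_prompt(post: str) -> str:
    """Ask the model to rate how political a post is (1-10) and label its leaning."""
    return (
        "Rate the following post for political content on a scale of 1 to 10, "
        "and label its leaning as left, right, or neutral. "
        'Answer as JSON: {"score": <int>, "leaning": "<label>"}\n\n'
        f"Post: {post}"
    )

def parse_reply(reply: str) -> tuple[int, str]:
    """Pull the JSON object out of the model's reply, tolerating extra prose."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    data = json.loads(match.group(0))
    return int(data["score"]), data["leaning"]

# Example with a canned model reply (no API call is made here):
score, leaning = parse_reply('Sure! {"score": 7, "leaning": "left"}')
print(score, leaning)  # 7 left
```

In practice you'd send `build_prompt(...)` to whatever chat model you use and feed the reply text to `parse_reply`; the parsing step matters because models often wrap the JSON in extra prose.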

  • > NLPs are dead in the water right now, except in terms of cost.

    False.

    With all due respect, the fact that you're referring to natural language parsing as "NLPs" makes me question whether you have any experience or modest knowledge around this topic, so it's rather bold of you to make such sweeping generalizations.

    It works for your use case because you're just one person running it on your home computer with consumer hardware. Some of us have to run NLP related processing (POS taggers, keyword extraction, etc) in a professional environment at tremendous scale, and reaching for an LLM would absolutely kill our performance.
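To make the "tremendous scale" point concrete: classic extraction passes run in microseconds per document on a single CPU core, versus a network round trip and GPU inference per document for an LLM. The toy frequency-based keyword extractor below is my own illustration, not the commenter's actual pipeline:

```python
import re
from collections import Counter

# A tiny stopword list for illustration; real pipelines use curated lists.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "for", "on"}

def keywords(text: str, k: int = 3) -> list[str]:
    """Crude keyword extraction: tokenize, drop stopwords, rank by frequency."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]

doc = "the parser parses the grammar and the grammar drives the parser"
print(keywords(doc, 2))  # ['parser', 'grammar']
```

Something this cheap can be run over millions of documents an hour on commodity hardware, which is the trade-off the comment is pointing at.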

    • My understanding is that inference models can absolutely scale down; we are only at the beginning of these getting minimized, and they are trivial to parallelize. That’s not a good combo to bet against: their price will quickly drop while their performance and efficiency grow.

  • Performance and cost are trade-offs though. You could just as well say that LLMs are dead in the water, except in terms of performance.

    It does seem likely we’ll soon have cheap enough LLM inference to displace traditional NLP entirely, although not quite yet.

  • > Appeal to authority is a well known logical fallacy.

    I did not make an appeal to authority. I made an appeal to expertise.

    It’s why you’d trust a doctor’s medical opinion over a child’s.

    I’m not saying “listen to this guy because they’re the captain of NLP”; I’m saying listen because experts have years of hands-on experience with things like getting NLP working at all.

    > I know how dead NLP is personally because I’ve never been able to get NLP working

    So you’re not an expert in the field. You barely know anything about it, but you’re okay hand-waving away expertise because you got a toy NLP demo working…

    That’s great, dude.

    > I was able to get ChatGPT to classify posts based on how political it was from a scale of 1 to 10

    And I know you didn’t compare the results against classic NLP to see if there were any improvements, because you don’t know how…

    • > I did not make an appeal to authority. I made an appeal to expertise.

      Lol

      > I’m saying listen because experts have spent years of hands on experience with things like getting NLP working at all.

      “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

      Upton Sinclair

      > Barely know anything about it, but you’re okay hand waving away expertise bc you got a toy NLP Demo working…

      Yes that’s my point. I don’t know anything about implementing an NLP but got something that works pretty well using an LLM extremely quickly and easily.

      > And I know you didn’t compare the results against classic NLP to see if there were any improvements because you don’t know NLP…

      Do you cross reference all your Google searches to make sure they are giving you the best results vs Bing and DDG?

      Do you cross reference the results from your NLP with LLMs to see if there were any improvements?

  • I haven’t understood these types of uses. How do you validate the score that the LLM gives?

    • The same way you validate scores given by NLPs, I assume. You run various tests, look at the results, and see if they match what you would expect.
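Concretely, one common way to do that is to hand-label a small held-out sample and measure agreement between the LLM's scores and the human labels. A minimal sketch; the metric choices, function name, and numbers here are illustrative assumptions, not anyone's actual evaluation:

```python
def evaluate_scores(predicted: list[int], labeled: list[int]) -> dict:
    """Compare model scores against human labels on a held-out sample."""
    assert len(predicted) == len(labeled) and labeled
    n = len(labeled)
    # Mean absolute error: how far off the scores are on average.
    mae = sum(abs(p, ) if False else abs(p - l) for p, l in zip(predicted, labeled)) / n
    # Fraction of scores within one point of the human label.
    within_one = sum(abs(p - l) <= 1 for p, l in zip(predicted, labeled)) / n
    return {"mae": mae, "within_one": within_one}

# Hypothetical: LLM scores vs. human labels for five posts.
llm = [7, 2, 9, 4, 5]
human = [6, 2, 8, 5, 1]
print(evaluate_scores(llm, human))  # {'mae': 1.4, 'within_one': 0.8}
```

If agreement on the sample is acceptable, you have some evidence the scores mean something; if not, you iterate on the prompt, exactly as you'd iterate on a classic model.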