
Comment by codeptualize

2 years ago

I've been working on a project that uses GPT-3 and similar stuff since before the hype. I find the overhype really tiring.

Just like with most of these hype cycles there is an actual useful interesting technology, but the hype beasts take it way overboard and present it as if it's the holy grail or whatever. It's not.

That's tiring, and really annoying.

It's incredibly cool technology, and it's great at certain use cases, but those use cases are somewhat limited. In the case of GPT-3, it's good at generative writing, summarization, information search and extraction, and similar things.

It also has plenty of issues and limitations. Let's just be realistic about it, apply it where it works, and let everything else be. Now it's becoming a joke.

Also, a lot of products I've seen in the space are really really bad and I'm kinda worried AI will get a scam/shitty product connotation.

Finally, a take on ChatGPT and similar LLMs I agree with!

I've criticized it whenever it gets brought up as an alternative for academic research, coding, math, and other more sophisticated knowledge-based work. In my experience at least, it falls apart at handling these reliably, and I haven't gone back.

But man, is it ever revolutionary at actually dealing with language and text.

As an example, I have a bunch of boring drama going on right now with my family, endless fucking emails while I'm trying to work.

I just paste them into ChatGPT and get it to summarize them, and then I get it to write a response. The onerous safeguards mean I don't have to worry about it being a dick.

Family has texted me about how kind and diplomatic I'm being and I honestly don't even really know what they're squabbling about, it's so nice!

  • Haha that's amazing. This is exactly what I mean, for the right use cases it's absolutely amazing.

    Good luck with the drama! Make sure to read a summary for the next family meeting haha.

    • The great part is I can just get it to summarize all the summaries, love how it's flexible like that.

      Yeah I will be sure to read it before meeting them, would be awkward if they found out I was using it during one of the disputes, which was whether or not to keep resuscitating Grandma.

      ChatGPT's stupid ethical filters meant I actually had to type my response to that one all by myself.

> I'm kinda worried AI will get a scam/shitty product connotation.

Which has happened before. The original semantic/heuristic AI, most notably expert systems, over-promised and ultimately under-delivered. This led directly to the so-called "AI winter" which lasted more than two decades and didn't end until quite recently. It's a very real concern, especially among people who want to push the technology forward and not just profit from it.

> I'm kinda worried AI will get a scam/shitty product connotation

I think we're already there. A legion of AI-based startups seems to be coming out daily (https://www.futuretools.io/), offering little more than gimmicks.

  • You are probably right, kinda sad.

    My last resort is to just remove all AI references from my marketing and just deliver the product.

> Just like with most of these hype cycles there is an actual useful interesting technology, but the hype beasts take it way overboard and present it as if it's the holy grail or whatever. It's not.

See also: Gartner hype cycle

> Also, a lot of products I've seen in the space are really really bad and I'm kinda worried AI will get a scam/shitty product connotation.

I agree with this, I feel like I've seen a lot of really cool technology get swept up in a hype storm and get carried away into oblivion.

I wonder what ways there are for the people who put out these innovations to shield them/their products from it?

Luckily I have a lot of faith in the OpenAI people - I hope they're shielding themselves from the technological form of audience capture.

I think the hope is that, unlike crypto, AI clearly has actual good applications, so once the hype beasts have burned through all the grifting they can, they'll go back to selling penis pills or whatever they were into last year (or maybe pre-2020).

> GPT-3 it's good at generative writing

made up bullshit

> summarization

except you can't possibly know the output has any relation whatsoever to the text being summarized

> information search and extraction

except you can't possibly know the output has any relation whatsoever to the information being extracted

people still fall for this crap?

  • Agreed. I've been testing out responses to parsing complex genomics papers (say, a methodology section describing parameters of some algorithm) and it's mostly rephrasing rather than digesting and responding with useful information / interpretation. And it will use so many words and add so little to the conversation, yet appear like it's helping because ... words.

    • Yeah you can't use it for academic research, it's really, really terrible at it.

      It will even claim it can generate citations for you, which is pretty messed up because when I tried, it just fabricated them, replete with faked DOIs.

      Where it shines is at squishy language stuff, like generating the framework of an email, paraphrasing a paragraph for you, or summarizing a news article.

      It really is revolutionary at language tasks, but unfortunately the hype machine and these weird "AI sycophants" have caused people to dramatically overestimate its use cases.

    • > complex genomics papers

      I think it's fair to say this is not one of the use cases where it shines. It's not great at logic, it's also not that smart.

      That's exactly what the hype does. The claims get too big, and then it gets dismissed when it inevitably doesn't live up to them.

  • I think this is sort of the other side of the hype, totally dismissing it is also incorrect imo.

    Yes, it's overhyped, but it's not useless, it actually does work quite well if you apply it to the right use cases in a correct way.

    In terms of accuracy, ChatGPT's hallucination issue is quite bad; with GPT-3 it's a lot less, and you can reduce it even further with good prompt writing, fine-tuning, and settings.
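    As a rough illustration of what "prompt writing and settings" can mean here, below is a minimal sketch (hypothetical helper name; the request fields mirror the OpenAI completions-style parameters) of a summarization prompt pinned to the supplied text, with the temperature turned down to discourage free-wheeling output:

    ```python
    # Sketch: constrain the model to the supplied text and lower the
    # temperature so the output is less likely to drift into fabrication.
    def build_summary_request(source_text: str, max_tokens: int = 200) -> dict:
        prompt = (
            "Summarize the text between the triple quotes. "
            "Use only information that appears in the text; "
            "if something is not stated, do not add it.\n"
            f'"""{source_text}"""\n'
            "Summary:"
        )
        return {
            "model": "text-davinci-003",  # assumption: any completions-style model
            "prompt": prompt,
            "temperature": 0.0,           # low temperature = fewer flights of fancy
            "max_tokens": max_tokens,
        }

    request = build_summary_request("GPT-3 is good at summarization but can hallucinate.")
    ```

    The resulting dict would be handed to the completions endpoint; the point is just that instructions pinning the model to the source text, combined with a low temperature, tend to cut down on invented details.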

    Can we just recognize it for what it is?

    • Someone called it a zero-day on human cognition, on the entire society, so I am ready to recognize it for that.