Comment by bsaul
2 years ago
I think this time is the real one. ChatGPT has reached a level where we can finally think about building actually useful products on top of "AI".
Note that nobody is pretending that ChatGPT is "true" intelligence (whatever that means), but I believe the excitement comes from seeing something that could have real applications (and so, yes, everybody is going to pretend to have incorporated "AI" into their product for probably the next 2 years). After 50 years of unfulfilled hopes from the AI field, I don't think it's totally unfair to see a bit of (over)hype.
I really don't understand how engineers are having good experiences with it. A lot of the stuff I've seen it output with respect to software engineering is only correct if you're very generous with your interpretation (i.e., dangerous if you use it as anything more than a casual glance at the tech). As for anything else it outputs, it's either so generic that I could do better myself, outright wrong (e.g. it cannot handle something as simple as tic-tac-toe), or it functions as an unreliable source (in cases where I simply don't have the background).
I wish I could derive as much utility from it as everyone else who's praising it. I mean, it's great fun, but it doesn't wow me in the slightest when it comes to augmenting anything beyond my own amusement.
I'm a Civil Engineer with a modest background that includes some work in AI. I'm pretty impressed with it. It's about as good as or better than an average new intern, and it's nearly instant.
I think a big part of my success with it is that I'm used to providing good specifications for tasks. This is, apparently, non-trivial for people, to the point where it drives the existence of many middle-management or high-level engineering roles whose primary job is translating between business people, clients, and the technical staff.
I thought of a basic chess position with a mate in 1 and described it to ChatGPT, and it correctly found the mate. I don't expect much chess skill from it, but by god it has learned a LOT about chess for an AI that was never explicitly trained on chess itself, with positions as input and moves as output.
I asked it to write a brief summary of the area, climate, geology, and geography of a location where I'm doing a project, for an engineering report. These summaries are trivial but fairly tedious to write, and new interns are very marginal at this task without a template to go off of. I have to look up at least 2 or 3 different maps, annual rainfall averages over the last 30 years, general effects of the geography on the climate, the average and range of elevations, the names of all the jurisdictions and other things, population estimates, zoning and land-use stats, etc., etc. It instantly produced 3 or 4 paragraphs of well-worded and correct descriptions. I had already done this task a few months earlier, and its output was eerily similar to what I'd written. The downside is that it can't (or rather won't) give me a confidence value for each figure or phrase it produces. So, given that it's prone to hallucinations, I'd presumably still have to go pull all the same information anyway to double-check. But nevertheless, I was pretty impressed. It's also frankly probably better than I am at pulling all that information together and figuring out how to phrase it. (And certainly MUCH more time-efficient.)
I think it's evident that the intelligence of these systems is evolving very rapidly. The difference between GPT-2 and GPT-3 is substantial. With the current level of interest and investment, I think we're going to see continued rapid development here for at least the near future.
I can't speak to the rest of what you wrote because I couldn't be further from the field of civil engineering, but if you're impressed with it on chess, ask it to play a game of tic-tac-toe; for me it didn't seem to understand the very simple rules or even keep track of my position on the grid.
There are so few permutations in tic-tac-toe that its lack of memory and inability to follow extremely simple rules make it difficult for me to have confidence in anything it says. I mean, I barely had any confidence left before I ran that "experiment", but that was the final nail in the coffin for me.
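(To give a sense of how small "so few permutations" really is, here's a minimal Python sketch, not from anyone in this thread, that brute-forces the whole game tree; the function and variable names are mine, chosen purely for illustration. It counts every distinct board that can occur in legal play, which comes out to only a few thousand positions.)

    # Hypothetical illustration: brute-force the tic-tac-toe game tree
    # to show how tiny its state space is.

    def winner(board):
        # Return 'X' or 'O' if that player has three in a row, else None.
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                 (0, 4, 8), (2, 4, 6)]              # diagonals
        for a, b, c in lines:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def reachable_positions():
        # Depth-first walk of the game tree, collecting every distinct board
        # reachable in legal play (play stops as soon as someone wins).
        seen = set()

        def play(board, to_move):
            key = ''.join(board)
            if key in seen:
                return
            seen.add(key)
            if winner(board) or ' ' not in board:
                return  # game over: win or draw
            for i in range(9):
                if board[i] == ' ':
                    board[i] = to_move
                    play(board, 'O' if to_move == 'X' else 'X')
                    board[i] = ' '

        play([' '] * 9, 'X')
        return seen

    print(len(reachable_positions()))  # 5478 positions, including the empty board

Running it prints 5478, a state space small enough that a trivial brute-force search plays the game perfectly, which is what makes the model's failure to even track the board so jarring.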
The fact that I can use this tool as a source of inspiration, or for a first opinion on any kind of problem on earth, is totally incredible. Now, whenever I'm stuck on a problem, ChatGPT has become an option.
And this is happening in the artistic world as well with the other branch of neural networks: "mood boards" can now be generated from prompts, endlessly.
I don't understand how some engineers still fail to see that a threshold has been crossed.
I've literally asked it to generate stories from prompts, and it has, without fail, generated the most generic stories I have ever read. High-school me could have written better ones with little to no effort (and I don't say that lightly), and I'm not a good writer by any means.
Moreover, its first opinion on the things I'm good at has been a special kind of awful. It generates sentences that are true on their face but, as a complete idea, are outright wrong. I mean, you're effectively gaslighting yourself by learning these half-truths. And as someone with unfortunately lengthy experience of being gaslit as a kid, I can tell you that depending on how much you learn from it, you could end up spending 3x as much time learning what you originally set out to learn (and that's if you're lucky and the only three things you need to do are learn it very poorly, unlearn it, and relearn it the right way).
I agree. Even understanding its limitations as essentially a really good bullshit generator, I have yet to find a good use for it in my life. I've tried using it for brainstorming on creative projects and it consistently disappoints; it frequently spouts utter nonsense when asked to explain something; the code it produces is questionable at best; and it's even a very boring conversation partner.