Comment by Enginerrrd
2 years ago
I'm a Civil Engineer with a modest background including some work in AI. I'm pretty impressed with it. It's about as good as or better than an average new intern, and it's nearly instant.
I think a big part of my success with it is that I'm used to providing good specifications for tasks. This is, apparently, non-trivial for people, to the point that it drives the existence of many middle-management and high-level engineering roles whose primary job is translating between business people, clients, and the technical staff.
I thought of a basic chess position with a mate in one and described it to ChatGPT, and it correctly found the mate. I don't expect much chess skill from it, but by god it has learned a LOT about chess for an AI that was never explicitly trained on chess, with positions as input and moves as output.
I asked it to write a brief summary of the area, climate, geology, and geography of a location I'm doing a project in for an engineering report. These are trivial, but fairly tedious to write, and new interns are very marginal at this task without a template to go off of. I have to look up at least 2 or 3 different maps, annual rainfall averages over the last 30 years, general effects of the geography on the climate, the average and range of elevations, the names of all the jurisdictions, population estimates, zoning and land-use stats, etc. And it instantly produced 3 or 4 paragraphs with well-worded and correct descriptions. I had already done this task a few months earlier, and its output was eerily similar to what I'd written. The downside is, it can't (or rather won't) give me a confidence value for each figure or phrase it produces. So given that it's prone to hallucinations, I'd presumably still have to go pull all the same information anyway to double-check. But nevertheless, I was pretty impressed. It's also frankly probably better than I am at bringing in all that information and figuring out how to phrase it (and certainly MUCH more time efficient).
I think it's evident that the intelligence of these systems is indeed evolving very rapidly. The difference between GPT-2 and GPT-3 is substantial. With the current level of interest and investment, I think we're going to see continued rapid development here for at least the near future.
I can't speak to the rest of what you wrote because I couldn't be further from the field of civil engineering, but if you feel impressed with it on chess, ask it to play a game of tic-tac-toe; for me it didn't seem to understand the very simple rules or even keep track of my position on the grid.
There are so few permutations in tic-tac-toe that its lack of memory and its inability to follow extremely simple rules make it difficult for me to have confidence in anything it says. I mean, I barely had any confidence left before I ran that "experiment," but that was the final nail in the coffin for me.
This is like complaining that your computer isn't able to toast bread. It's a language model based on multi-character tokens; outputting grids of single characters is not something you would expect it to succeed at.
If you explained the rules carefully and asked it to respond in paragraphs rather than a grid, it might be able to do it. Can't test since it's down now.
You're acting like it's a grid of arbitrary size with an arbitrary alphabet. It's a 3x3 grid where each square is X, O, or empty: at most 3^9 = 19,683 configurations, and far fewer that are actually legal.
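For scale, the whole game fits in a few lines. Here's a throwaway sketch (my own Python, assuming X moves first and play stops at a win or a full board) that brute-forces every position reachable by legal play:

```python
# Enumerate every legal tic-tac-toe position by search from the empty
# board. A board is a 9-character string, index 0..8, ' ' for empty.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_positions():
    """All board states reachable by legal play, including the empty board."""
    seen = {' ' * 9}
    frontier = [' ' * 9]
    while frontier:
        board = frontier.pop()
        if winner(board) or ' ' not in board:
            continue  # game over: no further moves from this position
        mover = 'X' if board.count('X') == board.count('O') else 'O'
        for i, cell in enumerate(board):
            if cell == ' ':
                child = board[:i] + mover + board[i + 1:]
                if child not in seen:
                    seen.add(child)
                    frontier.append(child)
    return seen

print(len(legal_positions()))  # 5478 legal positions
print(3 ** 9)                  # 19683 raw upper bound
```

So the state space is a few thousand positions, small enough that a purpose-built program plays perfectly by exhaustive search. The interesting question is whether a language model can track even that much state across a conversation.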
Neglecting that (only because it's hard to judge whether I should expect it to handle state for an extremely finite space, even in a representation different from the one it's used to), I know I saw a post where it failed at rock, paper, scissors. Just found it:
https://www.reddit.com/r/OpenAI/comments/zjld09/chat_gpt_isn...
Let's talk about what ChatGPT (or fine-tuned GPT-3) actually is and what it is not. It is a zero-shot or few-shot model that is pretty good at a variety of NLP tasks. Playing tic-tac-toe or chess is not a traditional NLP task, so you shouldn't expect it to be good at that. But board games can be played entirely in a text format, so it is not unexpected either that it can kinda play a board game.
If GPT-3 were listed on Huggingface, its main category would be a completion model. Those models tend to be good at generative NLP tasks, like creating a Shakespeare sonnet about French fries. But they tend not to be as good at similarity tasks, like those used by semantic search engines, as models specifically trained for them.
That's a core problem with this. If people with expertise can't even tell us clear boundaries of its competence, how is anyone else going to come to rely on it? I mean, you could say you defined a fuzzy boundary and that I approached it from the wrong direction (re: text games that use different tokens than the ones it was trained on), but how will I know I'm too close to that boundary when I approach it from the direction of things it's supposed to be good at?
It can't play tic-tac-toe, fine. But I know it gets concepts wrong on things I'm good at. I've seen it generate a lot of sentences that are correct on their own, but when you combine them into a bigger picture, it paints something fundamentally different from what's going on.
Moreover, I've had terrible results with it as a tool for creative writing, to the extent that it's on par with a lazy secondary school student who only knows a rudimentary outline of what they're writing about. For example, I asked it to generate a debate between Chomsky and Trump, and it gave me a basic debate format around a vague outline of their beliefs in which they argue respectfully and blandly (neither of which Trump is known for).
It's entirely possible I haven't exercised it enough and that it requires more than the hours I put into it, or that it just doesn't work for anything I find interesting.