Comment by cs02rm0
2 years ago
I keep trying to use it for code and it keeps leading me up the garden path with suggestions that look really reasonable but don't work.
Off the top of my head: a Python app for drawing over the macOS screen, but it used an API that didn't support transparent windows. I could draw over a black screen, which was so close in code (it even set the background alpha) but miles from the desired application. And a Java Android app for viewing an external camera, which it seems used an API that doesn't support external cameras.
Of course, because it's not sentient, when a couple of days later I figure out from searching elsewhere why its effort would never work and tell it why, it just apologises and tells me it already knew that. As I go along telling it what errors I'm getting, it keeps bringing up alternative solutions which again look like exactly what I want but are completely broken.
I haven't had it produce a single thing that was any use to me yet, but so often it looks like it's done something almost magical. One day I'm sure it'll get there, in the meantime I'm learning to loathe it.
Separately, I asked it to create a job advert for a role in my wife's business and it did a decent job of that, but there it's far easier to walk a path from what it provides to an acceptable solution. Programming is hard.
It never gives me perfect code, but it gets me 90% there.
For example, I just read the 2017 Google attention paper a few days ago, and with ChatGPT's help I was able to build a complete implementation using only numpy.
It took a full day to generate and organize the code and unit tests. Then two days of debugging and cross referencing.
But, this was impossible before. I barely knew anything about transformers or neural network implementations.
I can’t even imagine what truly motivated people are doing with it.
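The core of that 2017 paper ("Attention Is All You Need") is scaled dot-product attention, which really can be written in a few lines of plain numpy. A minimal sketch — the shapes, names, and seed here are illustrative, not the commenter's actual code:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((6, 8))
V = rng.standard_normal((6, 16))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 16)
```

A full transformer adds multi-head projections, feed-forward layers, and positional encodings around this kernel, which is where the "two days of debugging" tends to go.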
Completely agree. It gives me the headstart I need that would otherwise take hours of careful searching + crawling through docs, source code, examples, issue trackers, etc.
Do you not feel like you're losing something here, though? You haven't learned anything new or improved your understanding as you would have if you did the research.
For code, I use it like that as well: no need to try to understand the API docs (if they're even usable), just to be pointed in the right direction.
But code at least has a pretty strict check on correctness afterwards. Thinking about it, it's pretty scary how ChatGPT results can be used without proper validation. And they will be: the step from "looks good to me" to "it is good" is a small one when you just want an answer.
That's awesome!
I've been using it to learn Python, and I produced Conway's Game of Life with pygame despite never having touched Python before!
Now I'm working on another Python project to try to split the audio of songs into individual lines to learn the lyrics. One approach has been downloading lyrics videos off YouTube and munging them with ffmpeg to detect frame changes, which should give me the timestamps to split the audio on. It gave me all sorts of advice about ffmpeg that was wrong enough to be slightly annoying, but in the end what it did give me was _ideas_ and a starting point! I wound up hashing the images and comparing the hashes; I've since learned that that produces false positives, and it was able to give me advice on using ensemble methods to improve the accuracy, which has panned out and helped me solve the problem.
I think for me, this is stuff I could have accomplished using Google if I had been sufficiently motivated, but it's lowered the level of frustration quite significantly being able to ask it follow up questions in context.
In the end, I don't think I mind the random bullshit it makes up from time to time. If I try it and it doesn't work, then fine. It's the stuff I try and it does work that matters!
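The image-hashing approach described above usually means a perceptual hash (e.g. an average hash) compared by Hamming distance: frames of the same lyric line hash nearly identically, and a jump in distance marks a new line. A minimal pure-Python sketch — the tiny 2x2 "frames" and the threshold are illustrative only:

```python
def average_hash(frame):
    # frame: 2D list of grayscale pixel values (real uses downscale to e.g. 8x8)
    pixels = [p for row in frame for p in row]
    avg = sum(pixels) / len(pixels)
    # One bit per pixel: is it brighter than the frame's mean?
    return tuple(1 if p > avg else 0 for p in pixels)

def hamming(h1, h2):
    # Number of differing bits between two hashes.
    return sum(a != b for a, b in zip(h1, h2))

frame_a = [[10, 12], [200, 210]]   # one lyric frame
frame_b = [[11, 12], [199, 210]]   # same frame plus a little noise
frame_c = [[200, 210], [10, 12]]   # a different frame

print(hamming(average_hash(frame_a), average_hash(frame_b)))  # 0 -> same line
print(hamming(average_hash(frame_a), average_hash(frame_c)))  # 4 -> new line
```

The false positives mentioned above come from frames whose bit patterns collide despite different content, hence the ensemble of multiple hash types.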
Thanks for sharing. Your audio work sounds really interesting! Is any of it sharable, or still in-flight?
ChatGPT totally saved one of my side projects. I was getting close to that abandonment phase due to a blocker. I had the solution in paper form, but I couldn't make myself type it into a computer and experiment. Playing with things in a prompt requires a lot less mental effort. "Oh that looks promising..."
I taught myself how to architect a software DSP engine in about 30 minutes last night. "Now rewrite that method using SIMD methods where possible" was a recurring request. It's incredible how much you can tweak things and stay on the rails if you are careful. Never before would I have attempted to screw with this sort of code of my own volition. Seeing how trivially I can request critical, essential information makes me reconsider everything. The moment I get frustrated or confused, I can pull cards like "please explain FIR filters to me like I am a child".
Same here. Often it's not even _wrong_ per se, it just doesn't do what I was _actually_ asking. It's like asking an enthusiastic intern to do a thing for you, except interns are smarter.
I have also tested it retroactively on some tricky debugging sessions that I had previously spent a lot of time on. It really goes down the wrong path. Without asking leading questions and, well, proper prompting, you may end up wasting a lot of time. But that's the thing - when you're investigating something, you don't know the root cause ahead of time, you _can't_ ask questions that'll nudge it in the right direction. It ends up being a case of blind leading the blind.
As an SRE, it's handling the tricky debugging (the kind with awful Google results) that would save me the most time. The trivial stuff takes less time than the slog of prompting the AI in the right direction.
I keep seeing people saying they are using it for technical work. It must be design-oriented and less about troubleshooting because most of my experience with the latter has been abysmal.
more positively if you run out of ideas it can lead you to new paths
> It just apologises and tells me it already knew that.
I love it when these bots apologize.
Yeah, yeah. Just like "your call is very important to us."
"I'm sorry, Dave. I'm afraid I can't do that..."
For me it's gotten a few right, but a few terribly wrong. The other day it completely hallucinated a module that doesn't at all exist (but should!), and wrote a ton of code that uses that module. It took me a little while to figure out that the module I was searching for (so I could install it into the project) wasn't real!
I think that would be a great service, not only hallucinate modules, but also implement those that should exist. :)
It probably can do that to some extent. I wonder what would happen if the GP just asked for that.
And when that happens, we can all have fun snarkily replying "PRs welcome" hahaha.
You can fairly easily use it to then collaboratively create the missing module though.
For code, I have found that copilot is significantly better and I don't mind paying for it.
I've been using them in combination, and that has been a real boost for me! Copilot is great for ideas. Sometimes I want an explanation of what something is doing, or to ask about methods for accomplishing a certain task. I'm just using ChatGPT in the web browser at the moment; having it in the IDE itself would be fantastic, though right now I don't feel confident giving an API key to a third-party extension I don't fully trust.
For code completion, Copilot is significantly better.
For high-level API exploration of reasonably popular frameworks, OpenAI can offer something different, at times very valuable despite the very frequent hallucinations.
On the other hand, sometimes I re-appreciate good old documentation; OpenAI here has a psychological effect of removing "API fear".
I love the conversations about the code with ChatGPT though.
Same here. I tried to make it write a simple raytracer, and the methods looked correct but were slightly wrong. The output was, of course, garbage.