Comment by illiarian
2 years ago
I've tried some of the things you mention (code snippets, summarizing text and writing essay-like texts). These AIs are more often than not wrong, incomplete or lying.
I struggle to understand what exactly people are coding up where ChatGPT actually saves them a lot of time. Is it just generic stuff that would have already been copy/pasted from stackoverflow?
I wonder how many of those people would just benefit from better auto-complete like copilot + learning how to read documentation properly.
I wanted to enhance a python script that organizes my photo library to include a fuzzy city name in the folder name. E.g. changing from ./2023/2023-03/2023-03-03_18-29-32.jpg to ./2023/2023-03/San_Francisco/2023-03-03_18-29-32.jpg, where the city name is pulled from the lat/long in the EXIF and then looked up online. I asked ChatGPT one "chunk" at a time, and all of the suggestions it came up with needed some probing and clarifications, but I got to a working solution a lot quicker than piecing together random snippets from Stack Overflow (I am not a python programmer).
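To make the "chunks" concrete, here is a minimal sketch of two of the steps involved, using only the standard library (the function names are my own illustration, not the actual script; the online city lookup is omitted): converting EXIF GPS degrees/minutes/seconds to a decimal coordinate, and inserting the city name as a folder into the existing path.

```python
from pathlib import Path

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS degrees/minutes/seconds to a signed decimal."""
    value = degrees + minutes / 60 + seconds / 3600
    # South and West hemispheres are negative
    return -value if ref in ("S", "W") else value

def city_folder(original, city):
    """Insert the looked-up city name as a folder between month and file."""
    p = Path(original)
    return p.parent / city.replace(" ", "_") / p.name

lat = dms_to_decimal(37, 46, 30.0, "N")  # ≈ 37.775, roughly San Francisco
path = city_folder("./2023/2023-03/2023-03-03_18-29-32.jpg", "San Francisco")
print(path)  # 2023/2023-03/San_Francisco/2023-03-03_18-29-32.jpg
```

The remaining chunks (reading the EXIF tags and doing the reverse geocode online) are exactly the parts where a library recommendation from ChatGPT beats guessing at Stack Overflow snippets.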
It translates curl calls to python requests very well. Also things like "wrap this piece of long running code and show a progress bar" and similar low level stuff.
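The progress-bar case is exactly the kind of boilerplate it tends to get right; the usual suggestion would be the tqdm library, but a dependency-free sketch of the same idea (with a placeholder for the long-running per-item work) looks like this:

```python
import sys

def with_progress(items, width=30):
    """Yield items while drawing a simple text progress bar to stderr."""
    total = len(items)
    for i, item in enumerate(items, 1):
        done = int(width * i / total)
        bar = "#" * done + "-" * (width - done)
        sys.stderr.write(f"\r[{bar}] {i}/{total}")
        sys.stderr.flush()
        yield item
    sys.stderr.write("\n")

# placeholder for whatever the long-running loop actually does
results = [n * n for n in with_progress(range(10))]
```

Nothing here is hard, but it is the sort of thing that costs a few minutes of looking up and fiddling every single time, which is where the time savings come from.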
You are right that it's absolutely possible to figure all this out using Google and documentation. But do you really want to spend X minutes googling for the correct python module and then figuring out how to use it, when you can just show it your code and ask it to update it in seconds? It's like having someone at your side who has already been there and figured out the answer for you.
IDEA's autocomplete was consistently better and more useful for me than Copilot. I think that out of the 5, maybe 10 times when it managed to autocomplete something, it was correct maybe once.
It may well be better on a less idiosyncratic code base than the one I was working on, but at one point I just turned it off completely.
It doesn’t matter? It’s a tool; you need to learn how to use it and understand its limitations.
I used ChatGPT today to save minutes of my life by having it rewrite code from one language to another. Could I have googled the syntax of both languages, remembered the how and why, and transcribed it to the other language myself? Sure. ChatGPT did this in seconds.
I've found it particularly good at translating from one language to another. It will even recognize limitations and dependencies. For example, I asked it to translate some Javascript code (with a UI) into Julia. It said something to the effect of "You can't manipulate the DOM directly in Julia, but we can use GTK.jl to create a GUI". The resulting code needed some work (I'd be surprised if it didn't) but for the most part the structure was there and it provided a pretty decent frame on which to build.
FWIW I made a CLI utility that does this; it saves a bit of typing / copy-pasting. More languages can be added easily: https://github.com/clevercli/clevercli#built-in-prompts
For some types of code.
We translated a short bash script into windows .bat files and PowerShell.
It was so wrong with the .bat files that there wasn't a single working line in the entire script. It was 99% correct with the PowerShell code.
As a tool it is completely unreliable. But it does look like magic when it works.
4 replies →
Doesn't lying require moral agency? To say that an LLM is lying is to suggest that it intends to deceive you. That's anthropomorphizing.
lying here just means spitting out falsehoods
That sounds like a very interesting comment in an entirely different conversation.
So are StackOverflow answers. When I look at something on StackOverflow, I am expected to use my expertise and the context of my work to adapt the collection of answers to what I'm working on. StackOverflow doesn't tell me anything I could not have figured out on my own, but searching through some library's source code to find under which situations an error can occur isn't always a good use of my time. ChatGPT fills a similar role. I type into it, much like how I would with Google, get some output, validate it, and adapt it to the specific problem I'm addressing.
> So are StackOverflow answers.
You don't embrace SO overlords though ;) Meanwhile with ChatGPT and Copilot people are losing their minds it seems. I've yet to find them useful for anything beyond mere curiosities and one-off queries like improving corporate-sounding texts.
>You don't embrace SO overlords though
I mean, don't we? The number of times I've heard SWE humorously defined as "people who copy and paste from Stackoverflow"...
(My favorite: https://stackoverflow.blog/2021/03/31/the-key-copy-paste/)