Comment by kamaal
5 days ago
> I'm not afraid to ask "stupid" questions

That is critical.
AI won't judge and shame you in front of the whole world for asking stupid questions, or for not RTFM'ing well enough, like Stackoverflow users do. Nor will it tell you your questions are irrelevant.
I think this is the most killer AI feature ever.
I’ve always worked that way. In school (or in seminars), I ask questions that may have the whole room in stitches, but I always learn the lesson. The worst teacher I ever had was a genius calculus professor who would harangue you in front of the class for asking a “stupid” question. That’s the only class in which I ever took an Incomplete.
That’s the one thing about SO that I always found infuriating. It seems their favorite shade is implying that you’re “lazy,” and shaming you for not already having the answer. If anyone has ever looked at my code, “lazy” is probably not a word that springs to mind.
In most cases, I could definitely get the answer myself, but it would take a while, and getting pointers might save me hours. I just need a hint, so that I can work out an answer.
With SO, I usually just bit my tongue, and accepted the slap, as well as the answer.
An LLM can actually look at a large block of code, and determine some boneheaded typo I made. That’s exactly what it did, yesterday. I just dumped my entire file into it, and said “I am bereft of clue. Do you have any idea why the tab items aren’t enabling properly?”. It then said “Yes, it’s because you didn’t propagate the tag from the wrapper into the custom view, here.” It not only pointed out the source error, but also explained how it resulted in the observed symptoms.
In a few seconds, it not only analyzed, but understood an entire 500-line view controller source file, and saw my mistake, which was just failing to do one extra step in an initializer.
There’s absolutely no way that I could have asked that question on SO. It would have been closed down, immediately. Instead, I had the answer in ten seconds.
I do think that LLMs are likely to “train” us to not “think things through,” but they said the same thing about using calculators. Calculators just freed us up to think about more important stuff. I am not so good at arithmetic, these days, but I no longer need to be. It’s like Machine Code. I learned it, but don’t miss it.
> I’ve always worked that way. In school (or in seminars), I ask questions that may have the whole room in stitches, but I always learn the lesson.
In my experience, if a question is understood well enough, it basically translates directly into a solution. In most cases, parts of a question are not well understood, or require going into detail, simplification, or a definition we don't know, etc.
This is where being able to ask questions and get clear answers helps. AI basically helps you understand the problem as you probe deeper and deeper into the question itself.
Most human users would give up on answering you after a while; several would put you through a humiliating ritual, leaving you with a lifelong fear of asking questions. That prevents learning, since a good way of developing imagination is asking questions. There is only so much you can derive from a vanilla definition.
AI will be revolutionary for just this reason alone.
Forcing you to read through your 500 line view controller does have the side effect of you learning a bunch of other valuable things and strengthening your mental model of the problem. Maybe all unrelated to fixing your actual problem ofc, but also maybe helpful in the long run.
Or maybe not helpful in the long run. I feel like AI is the most magical when used on things that you can completely abstract away and say: as long as it works, I don't care what's in it. Especially libraries where you don't want to read the documentation or develop a mental model of what they do. For your own view, I don't know; it's still helpful when AI points out why it's not working, but it's more of a balance versus working on it yourself to understand it too.
Well, the old Java model, where you have dozens of small files for even the simplest applications, may be better for humans, but it's difficult to feed that to an LLM prompt. With the way I work, I can literally copy and paste. My files aren't so big that they choke the server, but they are big enough to encompass the whole domain. I use SwiftLint to keep my files from getting too massive, but I also like to keep things that are logically connected together.
Judge for yourself.
Here's the file I am working on: [0].
The issue was in this initializer: [1]. In particular, this line was missing: [2]. I had switched to using a UIButton as a custom view, so the callback only got the button, instead of the container UIBarButtonItem. I needed to propagate the tag into the button.
[0] https://github.com/LittleGreenViper/SwipeTabController/blob/...
[1] https://github.com/LittleGreenViper/SwipeTabController/blob/...
[2] https://github.com/LittleGreenViper/SwipeTabController/blob/...
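For anyone who hasn't hit this particular UIKit gotcha: when a UIBarButtonItem wraps a custom UIButton, the action callback receives the button, not the bar button item, so any state keyed on `tag` has to be copied into the button. A minimal sketch of that kind of fix (hypothetical helper name, not the actual SwipeTabController code):

```swift
import UIKit

// Hypothetical helper illustrating the bug class described above.
// The wrapper UIBarButtonItem carries a tag, but the action callback
// only ever sees the custom-view button, so the tag must be propagated.
func makeTabItem(index: Int, target: Any?, action: Selector) -> UIBarButtonItem {
    let button = UIButton(type: .system)
    button.addTarget(target, action: action, for: .touchUpInside)

    let item = UIBarButtonItem(customView: button)
    item.tag = index

    // The easily-missed step: without this, `sender.tag` in the
    // button's action callback is always 0, because the callback
    // receives the button, not the containing bar button item.
    button.tag = index

    return item
}
```

The symptom in the comment above (tab items not enabling properly) follows directly: the callback reads the tag from the wrong object and silently gets the default value.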