Comment by xmprt
1 day ago
I think one of the big problems with Devin (and AI agents in general) is that they're only ever as good as they are. Sometimes their intelligence feels magical and they accomplish things within minutes that even mid level or senior software engineers would take a few hours to do. Other times, they make simple mistakes and no matter how much help you give, they run around in circles.
A big quality that I value in junior engineers is coachability. If an AI agent can't be coached (and it doesn't look like it right now), then there's no way I'll ever enjoy using one.
At my first job I spent so much time reading Python docs and practicing the ancient art of Stack Overflow spelunking. But I could intuitively explain a solution in seconds because of my CS background. I often used to encounter a certain kind of programmer who did not understand algorithms well but had many years of experience with a language like Ruby, and who was thus faster at completing tasks because they didn't need to do the reference work that I did. Now I think these kinds of programmers will slowly disappear and only the ones with the fast CS intuition will remain.
I've found the opposite true as well.
I disagree. If anything, CS degrees have proven time and time again that they don't translate directly into software development (which is why there's an entire degree field called Software Engineering emerging).
If anything, my gut says that the CS concepts are very easy for LLMs to recall and will be the first things replaced (if ever) by AI. Software engineer{ing,s} (project construction, integrations, scaling, organizational/external factors, etc) will stick around for a long time.
There's also the meme in the industry that self-taught, non-CS degree engineers are potentially the most capable group. Though this is anecdotal.
> If anything, CS degrees have proven time and time again that they don't translate directly into software development (which is why there's an entire degree field called Software Engineering emerging).
Emerging? I graduated in 2006 with a BEng in Software Engineering.
The difference between it and the BSc CompSci degree I started in was that optional modules became mandatory, including an industrial placement year (paid internship).
> Software engineer{ing,s} (project construction, integrations, scaling, organizational/external factors, etc) will stick around for a long time.
My gut disagrees, because LLMs are at about the same level in those things as they are in low-level coding: not yet replacing humans in project-level tasks any more than they do in coding tasks, but OK assistants for both coding and project domains. I have no reason to think either has less or more opportunity for self-training, so I expect progress on both to track together for the foreseeable future.
(That said, the foreseeable future in this case is 1-2 years).
> the CS concepts are very easy for LLMs to recall
They're easy to recall, but you have to know what to recall in the first place. Or even know enough of the territory to realise there's something to recall. Without enough background, you'll get a whole set of amazing tools that you have no idea what to do with.
For example, you may be able to write a long description of your problem with some ideas about how to steer the AI to give you possible solutions. And the AI may figure out what the problem is and that HyperLogLog is something that could be useful to you. And you may have the awesome programming skills to implement it. But that's a lot of maybes. It would be much faster and easier if you knew about HyperLogLog ahead of time and just asked for an implementation or a library recommendation.
Or even if you don't know about the actual solution, you'd have enough CS vocabulary to ask: "how do I get a fast, approximate distinct count from a multiset?" For a coder with no theory background, it would take a long, imprecise description to get at the same thing.
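To make that concrete, here is a minimal and deliberately naive HyperLogLog sketch in Go. The register count, the FNV hash, and the bias constant are my own arbitrary choices, and the small/large-range corrections from the original paper are omitted, so treat it as an illustration of the idea rather than a reference implementation:

    package main

    import (
        "fmt"
        "hash/fnv"
        "math"
        "math/bits"
    )

    // Toy HyperLogLog: m registers, each holding the maximum "rank"
    // (position of the first 1-bit) seen among hashes routed to it.
    const p = 10     // top 10 hash bits pick a register
    const m = 1 << p // 1024 registers, roughly 3% standard error

    type hll struct{ reg [m]uint8 }

    func (h *hll) Add(s string) {
        f := fnv.New64a()
        f.Write([]byte(s))
        x := f.Sum64()
        idx := x >> (64 - p) // first p bits choose the register
        // Rank of the remaining bits; |1 guarantees a 1-bit exists.
        rank := uint8(bits.LeadingZeros64(x<<p|1)) + 1
        if rank > h.reg[idx] {
            h.reg[idx] = rank
        }
    }

    func (h *hll) Estimate() float64 {
        alpha := 0.7213 / (1 + 1.079/float64(m)) // bias-correction constant
        sum := 0.0
        for _, r := range h.reg {
            sum += 1 / math.Pow(2, float64(r))
        }
        return alpha * m * m / sum // raw estimate, no range corrections
    }

    func main() {
        var h hll
        for i := 0; i < 100000; i++ {
            h.Add(fmt.Sprintf("user-%d", i%25000)) // 25,000 distinct values
        }
        fmt.Printf("approx distinct: %.0f\n", h.Estimate()) // ~25000
    }

In practice you'd grab an existing library instead, but knowing the term "HyperLogLog" (or even just "approximate distinct count") is what lets you find it.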
I'm not convinced an LLM is really "recalling" any CS concepts when they try to solve a problem. IMHO, we're lucky if it matches the pattern of the request against the pattern of a solution and the two are actually related. I'm no expert but I don't think there's any reason to think that an LLM is taking a CS concept and applying it to something novel in order to get a solution. If they were, I believe their success rate would be much higher.
In many places where someone might reach for something they remember from their CS coursework, there's often an open-source library or tool doing much the same thing. Understanding how these libraries and tools function is certainly valuable but, much of the time, people can get by with only a vague hunch; indeed, this is why they exist! IMHO, I would be happier with the LLM assistant if it picked reliable library code rather than writing up a sketchy solution of its own.
I'm also familiar with this idea that people who have managed to be successful in the field without a CS degree are more capable. In my opinion, this is hogwash. I think if we take a step back, we'll see that people graduating from established, top-tier CS programs are looking for higher pay than those who have come from a less expensive and (very often) business-focused program. To be fair, people from each of these backgrounds have their strengths; in many organizations a developer who has done two semesters of accounting is a real benefit, while in others the ability to turn a research paper into the prototype of a new product is going to be important.
Years of experience often wash out many of these differences. People who started from business-oriented education programs may end up taking a handful of CS courses as they navigate their career; likewise, many people with a CS background end up accruing business-centered skills.
In my opinion, people start out their education at a place they can afford, a place that is familiar to them, often a place where they feel comfortable. Someone's economic background (and that of their family) plays a big role in what kind of educational program they choose when they are on the cusp of adulthood. Smart and talented people can always learn what they need to learn, regardless of their starting point.
I think honestly the meme that non-CS degree engineers are most capable is selection bias.
If they had taken a CS degree they would likely be just as, if not more, capable.
Self-learning the topics you need to make good software takes an immense amount of effort; the data and material are out there, but they take a lot of work to piece together.
I'm only recently starting to pick up on "magic" patterns that are actually extremely simple to understand given the right base knowledge... I can gain tons of insights from talks given in the early 2010s, but if I had watched them without the correct practical experience and foundational knowledge, they would have been the same as the title of a HN post this week[1]: gibberish.
With enough time playing with the foundational patterns and learning some of the backing knowledge, it unlocks amazing patterns in my mind and makes the magic seem simple. A great example: CSP[2]. I've known about and used the actor model before, which I first discovered when I found Erlang, but with CSP I could finally ask the question "Why should actors be heavy?" You can put an actor into a light-weight task, spawn tons of them, and build a tree of connections (see the sketch below). Stuff like oneTBB flow graph[3] now makes sense and looks like a beautiful pattern, with some really interesting ideas that can be applied to more general computing than the high performance computing it was designed for. It seems niche, but golang is built on those foundations, and the true power of concurrency in golang comes from embracing that. It fundamentally changes the way I want to structure and lay out code, and I feel like a good CS course could get you there quicker...
Unfortunately a good CS course probably wouldn't accelerate the average CS grad's understanding of that, but it can get someone dedicated and hungry there much, much quicker. Someone fresh out of a JS bootcamp is maybe a decade away from that, if they ever even want to search for that knowledge.
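Here's roughly what I mean, as a minimal Go sketch (the names and the toy workload are made up): an "actor" is just a goroutine that owns an inbox channel, so spawning ten thousand of them costs almost nothing.

    package main

    import "fmt"

    // An "actor" is just a goroutine reading from an inbox channel.
    // Goroutines are lightweight, so spawning thousands is cheap.
    func worker(in <-chan int, out chan<- int) {
        for n := range in {
            out <- n * n // the actor's tiny unit of work
        }
    }

    func main() {
        in := make(chan int)
        out := make(chan int)

        // Spawn a big pool of lightweight actors sharing one inbox.
        for i := 0; i < 10000; i++ {
            go worker(in, out)
        }

        // Feed the pool, then close the inbox so the actors exit.
        go func() {
            for n := 1; n <= 100; n++ {
                in <- n
            }
            close(in)
        }()

        // Fan the results back in.
        sum := 0
        for i := 0; i < 100; i++ {
            sum += <-out
        }
        fmt.Println("sum of squares:", sum) // prints 338350
    }

The same shape generalizes to the explicit node graphs that oneTBB's flow graph formalizes.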
1. https://oneapi-spec.uxlfoundation.org/specifications/oneapi/...
I completely agree with you. More precisely, I feel they are useful when you have specific tasks with limited scope.
For instance, just yesterday I was battling with a complex SQL query and I got halfway there. I gave our bot the query and a half-assed description of what I wanted / what was missing, and it got it right on the first try.
Are you sure that your SQL query is correct?
Can they be sure even if they wrote it themselves?
he’s certainly sure, but lord knows if it is
And when working with people, it's fairly easy to intervene and improve when needed. I think the current working model with LLMs is definitely suboptimal when we cannot precisely and promptly confine both their solution space AND where they should apply a solution.
It’s also often possible to know what a human will be bad at before they start. This allows you to delegate tasks better or vary the level of pre-work you do before getting started. This is pretty unpredictable with LLMs still.