Comment by mppm
18 days ago
There is a lot of talk about how LLMs will disrupt software development and office work and whatnot, but there is one thing they are massively disrupting right now, and that is education. I've witnessed this with a group of CS master's students recently, and they have let their programming skills atrophy to barely imaginable levels. LLMs have the twin effect of raising the bar for what even a barely viable junior developer has to live up to, while simultaneously lowering their actual skills. There is a generation of completely unemployable "graduates" in the pipeline.
The article mentions that most students are only in it for the diploma anyway, but somehow most people are yet to realize that those diplomas will soon be toilet paper, precisely because they no longer require any actual effort to obtain.
I am currently a CS student in Germany, and our Python lecturer told us in the first lesson that "we didn't really need to learn Python" because AI was going to take over anyway and we would not be writing any code after we graduate. He then encouraged us to use AI on all the assignments he gives us. He even allows us to cheat on the final exam by using LLMs.
I was about to have a word with him after the lecture, but when he started talking about how crypto is going to replace fiat any second now, I knew he was a lost cause.
I asked my fellow students what they thought about this, and not one minded that they were essentially enrolled in a "how to proompt" class. When I put it to one student that it's all well and good to pass the module, but isn't the ideal outcome that you actually know the language by the end, he laughed and said "Yeah sure, do you think the same about maths?"
Please raise this with the university, be it the 'Fachschaft' or the ombudsman for academic integrity. This is not representative of CS education here as far as I know. Other teachers and faculty will want to know.
Besides that, these are ridiculous claims from the teacher. LLMs are powerful, but in the end they are still a tool with random output, which needs to be carefully evaluated. Python in particular is, in my personal view, much more subtle than people assume on first contact. The whole numpy universe especially is like a separate language, and quite complicated for a beginner who wants to write fast and efficient code.
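To make that last point concrete, here is a minimal sketch (the array size and function names are mine, purely for illustration): both functions compute the same dot product, but only the second is idiomatic numpy, and on large arrays it is typically orders of magnitude faster than the plain-Python loop.

    import numpy as np

    def dot_loop(a, b):
        # "Plain Python" style: explicit element-by-element loop.
        total = 0.0
        for x, y in zip(a, b):
            total += x * y
        return total

    def dot_numpy(a, b):
        # Idiomatic numpy: vectorized, no Python-level loop.
        return float(np.dot(a, b))

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)
    assert np.isclose(dot_loop(a, b), dot_numpy(a, b))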
I've had courses where LLMs were allowed for projects, but we had to provide the prompts.
> I am currently a CS student in Germany, and our Python lecturer told us in the first lesson that "we didn't really need to learn Python" because AI was going to take over anyway and we would not be writing any code after we graduate. He then encouraged us to use AI on all the assignments he gives us. He even allows us to cheat on the final exam by using LLMs.
> I was about to have a word with him after the lecture, but when he started talking about how crypto is going to replace fiat any second now, I knew he was a lost cause.
Knowing the education system in Germany rather well, I ask myself in which (kind of) educational establishment this happened, since I'd consider this to be rather unusual for at least universities (Universitäten) and Fachhochschulen (some other system of tertiary education that has no analogue in most countries).
It's a Fachhochschule. And yeah, that lecture was very unusual; it felt like the insane ramblings of a techno evangelist who has jumped on every hype train of the last 20 years. He said the most important technologies are IoT, blockchain and AI.
1 reply →
You do not have to put up with this. Your lecturer is significantly undermining your education (which you pay for!).
You should bring this up with the department chair of your program. The purpose of your CS degree is to build a strong theoretical foundation; replacing programming with prompting goes directly against this.
> Your lecturer is significantly undermining your education (which you pay for!).
In Germany, at state universities, you typically only pay a small fee for student self-administration. The huge "payment" is rather the opportunity cost.
> The purpose of your CS degree is to build a strong theoretical foundation
In https://news.ycombinator.com/item?id=43533033 Loeffelmann wrote that this happened at a Fachhochschule, not at a university. The purpose of universities is to give the student a strong theoretical foundation to prepare them for doing research. The purpose of Fachhochschulen is to prepare the student for working in jobs outside of academia.
2 replies →
Don't believe them. I also went to university in Germany and had to work a lot to compensate for bad lecturers. To this day I can say I have needed 100% of what I learned at university. Even the most esoteric stuff came back to bite me. As for LLMs, they are close to useless if you can't review their output. Maybe at some point in the future they will be better and able to reason about their code, but as with fusion, self-driving, etc., you never know when that will be. And there will always have to be people who develop this.
> but isn't the ideal outcome that you actually know the language by the end?
Given that it is billed as a Python course, that is reasonable.
But, to be fair, the intent of the course is almost certainly to provide background in the tools so that you can observe the CS concepts learned later. Which is kind of like astronomy majors learning how to use a telescope so that they can observe what they study. If Google image search provided the same imagery just as well as a telescope, the frustration of being compelled to teach rudimentary telescope operation would be understandable. It is not like the sciences are studied for the tools.
> It is not like the sciences are studied for the tools
The problem with this logic is that most university students don't go there to do science, they go there to, at best, become working experts in their field. Many employers now expect their javascript frontend developers to have a CS degree, which is simply absurd. Secondary vocational education is generally considered insufficient and tertiary vocational schools are "where you go if you can't get into university". This means universities get a huge number of applicants who want nothing to do with science or advanced theory, but just want to learn enough (and get the right paper!) to get a job in their preferred field.
This is now self-reinforcing. If you're a good programmer and want to work on business software, it would make sense for you to go to a tertiary vocational school (where I'm from that means 2 years, one semester of which is essentially an apprenticeship). But because "everyone goes to university", you'll be seen as a worse candidate for most jobs. At the same time, employers are pressuring universities to be "more practical" because "graduates come to the first day on the job useless". So universities lower the bar, taking more students away from the vocational schools, which then lower the bar in turn to stay afloat, devaluing themselves in the process.
4 replies →
You're being cheated out of an education by your feckless lecturer.
> "we didn't really need to learn python" because AI was going to take over anyways
Wow! I think this is an extreme comment to make. I get it.. but WOW! It really makes you wonder about the future of universities. If the answer is to let AI do our work.. even to cheat in final exams... what is the point of universities? Not only are we talking about software engineers dying out.. but so is his lecturer's job!
Anyway..
I have been a developer for over 20 years.
I have kids -- both are not even teenagers... but there are times I think to myself "is it worth them learning XYZ" because of AI?
By the time my eldest gets his first job.. we are talking (at least) around the year 2032. We have to accept that AI is going to do some pretty cool things. HOWEVER, I still "believe" that AI will work alongside software developers. We still need to communicate with it - and to do that, you need to understand how to communicate with it.
Point is, if any of my kids express interest in computer programming in the next year or so, I will HAPPILY encourage them to invest time in it. What I have to accept is that they will use AI.. a lot.. to build something in their chosen language.
I can see this being a typical question for new coders:-
"Can you create a flappy bird game in python"
Sure.. AI might spit something out in a matter of minutes and it might even work, but are they really learning? I think I would encourage my kids to go without AI for (around) 4 days a week.
At the end of the day it is very difficult to know our future. Sometimes I have to think about my future.. not just my kids. I mean, would my job as a software engineer be over? If so, when? What would I do?
Overall it doesn't bother me, because I do think my role will transition with AI, but for the younger generation it can be a grey area understanding where they fit in all this.
I try to be optimistic that the next 100 years will be a very exciting time for the human race (if we do not destroy ourselves beforehand).
To counter your lecturer, I am reminded of a John Carmack quote: "Low-level programming is good for the programmer's soul"
Not even low-level -- any programming. If you really like to code, you are going to learn it, whether in school, college, or university. For me, the times I learned best were outside of official education, shutting myself away in my bedroom. "Official education" is nothing more than doing what you are told for a piece of paper. What is its worth these days?
Whether AI exists or not - those who like coding will invest the time to code. This is what will separate average from good programmers and developers. What separates a good programmer from a great programmer will be their lack of AI-generated code... the willingness to DIY!
That's my view... but this is a large topic and I am only scratching the surface.
At the end of the day, the question is what you do when things aren't working. Being resilient in the face of failure is the most important skill. If AI in 2032 never gets stuck anywhere, ever, then that's a totally different world we'd be living in. So assuming we don't get that, resilience is the underlying thing to pass on to your kids, regardless of the actual details. Just the other day I was vibe coding and the code had two fields for date and time instead of one timestamp field and it kept getting confused, but I had to go into the code and actually read it to figure out what went wrong (a rough sketch of that kind of cleanup is below). Low-level programming is important for programmers because you have to dig deeper to find gold. The program isn't working like it's supposed to? Look at the source. The library being called by that program isn't behaving like it's supposed to? Look at the source. The binary doesn't match the source? Stick it in a decompiler. At the end of the day, that's where the true value lies.
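The cleanup in that spirit would look roughly like this sketch (the field names are hypothetical, not the actual code): collapse the separate date and time strings into one timezone-aware timestamp, so there is a single source of truth to reason about.

    from datetime import datetime, timezone

    record = {"date": "2024-03-01", "time": "14:30:00"}  # two fields that can drift apart

    # One combined, timezone-aware timestamp instead of two loosely coupled strings.
    ts = datetime.strptime(
        f"{record['date']} {record['time']}", "%Y-%m-%d %H:%M:%S"
    ).replace(tzinfo=timezone.utc)

    print(ts.isoformat())  # 2024-03-01T14:30:00+00:00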
1 reply →
Well, he has a point about maths :) But the difference is that basic maths skills are enough to live a decent life for someone who doesn't do maths for a career, whereas basic programming usually isn't enough to pass job interviews, and one needs to know the language for a career, at least for now. I'm actually learning a lot of basic maths concepts now that I have a kid I need to teach sometimes and some money I need to invest, which means understanding rates of return, compounding, etc.
This is simply wrong.
If you think about math as only solving differential equations and inverting matrices by hand, then maybe. That might be how maths is taught in secondary school, but it is not at all representative of university-level maths. I use many fields of math on a daily basis at my job and for my personal projects, all of which I've taken courses on:
* Formal logic: boolean algebra, set theory. These are the core of any algorithm.
* Graph theory: working with parse trees, ASTs, and other problems involving relationships.
* Linear algebra: any problem that requires working with vectors or matrices, e.g. graphics, many areas of machine learning, ...
* Category theory: type systems, algebraic data types, many other functional programming abstractions.
I'm sure there are many more that I've taken for granted. A small sketch of how a couple of these show up in everyday code follows below.
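As a tiny illustration (the dependency graph and names below are made up), even a ten-line reachability check leans on two of the fields above: set theory for membership and deduplication, graph theory for traversal.

    from collections import deque

    def reachable(graph, start):
        # Breadth-first search: 'seen' is a set, so membership tests are O(1)
        # and duplicates are discarded by construction.
        seen = {start}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in graph.get(node, []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return seen

    deps = {"app": ["parser", "ui"], "parser": ["lexer"], "ui": [], "lexer": []}
    print(reachable(deps, "app"))  # {'app', 'parser', 'ui', 'lexer'}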
I have seen folks who are relatively new to programming work like this.
Rather than simple laziness, it was often because they felt intimidated by their lack of knowledge and wanted to be more productive.
However, the result of a ChatGPT-based workflow is that reasoning is often the very last resort. Ask the LLM for a solution, paste it in, get an error, paste that in, get a new solution, get another error, ask for a fix again, etc. etc.
Before someone chimes in to say this is like Stack Overflow: no it isn't. Real people expect you to put some work and effort into first describing your problem, then solving it. You would rarely find someone willing to go through such an exercise with you, and they probably wouldn't hallucinate broken code to you while doing it.
15 minutes of this and it turns out to be something silly that ChatGPT would never catch - e.g. you have installed a very old version of the Python module for some internal company reason. But because the reasoning muscle isn't being built up, and the context isn't being built up, they can't figure it out.
They didn't see the bit on the docs page that says "this function was added in version 1.5" because they didn't write the function call, and didn't open the documentation, and perhaps wouldn't even consider opening the documentation because that's what ChatGPT is for. In fact, they might not have even consciously chosen that library because again.. that's what ChatGPT is for.
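The bitter irony is how small the skipped check is. Something like this sketch (the package name is hypothetical) is enough to compare the installed version against the "added in version 1.5" note in the docs:

    from importlib.metadata import PackageNotFoundError, version  # stdlib, Python 3.8+

    pkg = "somecompanylib"  # hypothetical internal package name
    try:
        installed = version(pkg)
        print(f"{pkg} {installed} is installed; the docs say this call needs >= 1.5")
    except PackageNotFoundError:
        print(f"{pkg} is not installed at all")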
> Ask the LLM for a solution, paste it in, get an error, paste that in, get a new solution, get another error, ask for a fix again, etc. etc.
That's exactly what I've seen as well. The students don't even read the code, let alone try to reason through how it works. They just develop hand-eye coordination for copy-pasting.
> Rather than simple laziness, it was often because they felt intimidated by their lack of knowledge and wanted to be more productive.
Part of it really is laziness, but what you say is also true. Unfortunately, this is the nature of learning. Reading or listening is by itself a weak stimulus for building neural pathways. You need to actively recall and apply, and struggle with problems until they yield. It is so much easier to look up a solution somewhere. And now you don't even have to look anything up anymore -- just ask.
Just a funny, or depressing, aside - and then a point about LLMs.
Real coding can, unfortunately, be as bad as that or worse. Here is one very famous HN comment from 2018, and I know what he is talking about because participating in this madness was my first job after university, dispelling a lot of my illusions:
https://news.ycombinator.com/item?id=18442941
I went into that job (porting Oracle to another Unix platform for an Oracle platform partner) full of enthusiasm, and after the first few weeks I gave up on finding any meaning or enjoyment in it, or on trying to understand or improve anything. If AI could do at least some of that job, it would actually be a big plus.
(it's the working-on-Oracle-code comment if you didn't already guess it)
I think there's a good chance code becomes more like biology. You can understand the details, but there are sooo many of them, and there are way too many connections directly and indirectly across layers. You have to find higher level methods because it's too much for a direct comprehension.
I saw the main code contributor at a startup I worked for operate kind of like that. It was not all his fault: he was forced to move too quickly, and what the code should do was so ill-defined that not even the big boss knew what they wanted, only talking in meta terms and constantly coming up with new, sometimes contradicting, ideas. The code was very hard to comprehend and debug, especially since much of it was distributed algorithms. So his approach was running it with demo data, observing higher-level outcomes, and tweaking this or that component until it kind of worked. It never worked reliably; it was demo-quality software at best. But he managed to implement all the new ideas from management, at least.
I found that style interesting and could not dismiss it outright, even though I really, really did not want to have to debug that thing in production. But I saw something different from what I was used to: a focus on a higher level, a way of working when you just can't have the depth of understanding of what you are doing that one would traditionally like. Given my Oracle experience, I saw how this would be a useful style in real life for many big, long-running projects, like that Oracle code, which you had no chance of comprehending or improving short of "rm -rf" and a restart, which you could not do.
I think education also needs to cover this "biology-level" complexity and these more statistical, higher-level approaches. Much of our software is getting too complex for the traditional low-level methods.
I see LLMs as just part of such a toolkit for the future. On the one hand, there is supplying code for "traditional" smaller projects, where you still have hope of being in control and of at least the seniors fully understanding the system. On the other hand, LLMs could help with too-complex systems - not with making them understandable, which is impossible for those messy systems, but with still being able to work with them productively, add new features and debug issues. Code such as in the Oracle case. A new tool for even higher levels of messiness and complexity in our systems, which we won't be able to engineer away due to real-life constraints.
I think AI will have a dual effect. It will make some folks smarter and others dumber.
For example, you could have ChatGPT write your code for you, then explain it to you step by step.
It can be an interactive conversation.
Or you could copy/paste it.
In one case it acts as a tutor.
In another case it just does your work for you.
I agree with this.
I've used AI as a crutch for a time, and felt my skills get worse. Now I've set it up to never have it give me entire solutions, just examples and tips on how to get it done.
I've struggled with shader programming for a while; I tried to learn it from different sources and failed a lot. It felt like something unreachable for me, I don't really know why. But with the help of an AI that's fine-tuned for mentoring, I really understood some of the concepts. It outlined what I should do and asked Socratic questions that made me think. I've gotten way better at it and actually have a pretty solid understanding of the concepts now (well, I think).
But sometimes at work I do give in and get it to write an entire script for me, out of laziness and maybe boredom. Their significant advances as of late with "extended thinking" and the like have made them much more likely to one-shot the writing of a slightly complex script... which in turn made it harder not to just say "hey, that sounds like boring work, let's have the AI do the biggest part of it and I'll patch up the rest".
I have a similar setup going on. I'm a heavy user of LLMs, but the only time I use the code they generate is for throwaway scripts. I like to describe the problem I'm working on, paste in my code, and ask about everything wrong with it. Am I missing something? Are there glaring flaws or inefficiencies? Are there better ways to approach this? I never take suggestions unless I fully understand and agree with them. There are lots of poor suggestions, but lots of really good ones too.
Infinite tailored critique and advice. I have found this immensely valuable, and I have learned lots doing it. LLMs are static analyzers on steroids.
What AI tuned for mentoring did you use?
It is one thing to get code explained to you (which can also be good), but another to engage in finding a solution, explore the problem space, fail a couple of times and learn from your mistakes, and of course go through the embodied process of writing the code itself. Learning is an active process; having stuff explained to you is not bad, but it does not lead to the same depth of understanding. Granted, not all subjects and cases benefit equally from deeper understanding, and it is impossible to go into depth with everything. So in each case it is a trade-off to decide how deep one wants to go, and it is great that we now also have the option of not going into the same depth. But IMO one should be mindful about it, and make conscious decisions about how to use LLMs in cases where understanding the subject more deeply is also important.
There are still ways that LLMs can be used in that case, e.g. having them review your code, or suggest alternatives such as more idiomatic ways to do something when you delve into something new, treating their output critically of course - but actually writing one's own code is important for some kinds of understanding.
> In one case it acts as a tutor
This can be very useful when you are learning programming.
You don't always have a tutor available and you shouldn't only rely on tutors.
It might be useful when you start learning a new programming language/framework, but you should learn how to articulate a problem and search for solutions, e.g. going through Stack Overflow posts and identifying whether a post applies to and solves your problem.
After a while (took way too long for me) you realize that the best way to solve problems is by looking up the documentation/manpage of a project/programming language/whatever and really try to understand the problem at its core.
I wonder how much even this approach helps. I would liken it to studying past exam papers with the solutions on hand. My experience is that you actually have to solve the problems yourself to properly absorb the concepts, rather than just copying them into your short-term memory for a short while.
AI will make experts more effective and remove most of the people who would have grown into experts.
Basically most people will be idiots, except for the mental-exercise types who like using their mental muscles.
So education will stop being a way to move up in life.
I agree - the truly curious will be rewarded while those who couldn’t care less will mindlessly copy and paste. Maybe that will give the rest of us job security?
It's just Google (web search) v2: if you are able to input the right terms and interpret the results critically, you'll be accelerated. If not, you're just another mark.
Also, there's no context or docs to dig into; it just spits something out that looks right but might rely on deprecated code or be completely wrong.
Ask it to explain something? At least it's confident I guess.
> There is a generation of completely unemployable "graduates" in the pipeline.
I feel like that was always the case, at least since like 10 years ago and by my definition.
I wasn't unemployable as a graduate, I found a job after all. But I was near enough useless and started from the ground up.
I've always felt my real education in software engineering started at work.
20 odd years later I lead a large engineering team and see the same with a lot of graduates we hire. There's a few exceptions but most are as clueless as I was at that age.
Yeah, I graduated around 2000 and had to learn how to work on a professional software engineering team.
That doesn't mean my education was worthless—quite the opposite. It's just that what you learn in a software engineering degree isn't "how to write code and do software development in a professional team in their specific programming language and libraries and frameworks and using their specific tooling and their office politics."
2 replies →
That matches up with my general expectations of graduates. They should be smart, but are not expected to really know much.
A diploma from the type of school the author describes is already pretty worthless, imo.
I don't get why schools can't just get strict in response to these issues. No electronics in class, period. Accessibility problems can be fixed by having each impaired student get a volunteer scribe for the class.
You're in school to learn, and electronics hinder in-person education more than they help, especially as ChatGPT style AI is available on them.
The "no devices in school" rule has been tried, scientifically tested, and it doesn't really improve outcomes: https://www.thelancet.com/journals/lanepe/article/PIIS2666-7...
The real damage is to brains and attention spans: traditional school just can't compete with the massive dopamine overstimulation of System A thinking that students get for an average of 6-8h a day outside school, while it requires focused System B reasoning on tiresome and (comparatively) dull tasks and enforces dopamine withdrawal.
You're appealing to authority with a Lancet article, but the article just concludes that kids don't spend less time on their phones because of the school bans.
Irrespective of brain feedback mechanisms after school, it is still a better teaching/learning environment for students to have a device ban during school time.
What kids or parents enable after school is beyond school policies. Nevertheless teachers should be minimally protected in their ability to teach and kids in their ability to learn.
1 reply →
The first author's commentary at https://www.bmj.com/content/388/bmj-2024-082569.full is easier to read than the paper you linked to.
In it she suggests that rather than thinking of a smart phone ban like a smoking ban,
> A more constructive analogy than smoking might be driving cars. In response to increasing injuries and deaths from car crashes, rather than banning cars, society built an ecosystem of product safety regulations for companies (seatbelts, airbags) and consumers (vehicle safety tests, penalties), public infrastructure (traffic lights), and education (licences) to support safer use. Comparative efforts in product safety and education are needed to supplement debates about smartphone and social media bans and to balance the positive and indispensable role of digital technologies against their potential harms.
It's an intriguing analogy because we know well how dangerous cars are to health and the environment, we know there are people who don't want to drive but are forced to because there are no alternatives, and we know how much many drivers oppose support for bike lanes, mass transit, and other alternatives.
And we know the history of how the UK over her entire life has transformed to be more and more car dependent.
If we embrace that analogy, then we need to support alternatives to being digital, with the right to an offline life.
I don't know what System A and System B are, a DDG search for "System A {thinking,reasoning}" finds nothing useful, and the paper says nothing about it nor about comparing dopamine levels.
4 replies →
> Students' sleep, classroom behaviour, exercise or how long they spend on their phones overall also seems to be no different for schools with phone bans and those without, the academics found.
> However, they did find that spending longer on smartphones and social media in general was linked with worse results for all of those measures.
https://www.bbc.com/news/articles/cy8plvqv60lo
About the same study. Again, when kids are not on their phones, they do better at school. Period. A ban is just one way to try to get there. If it's not effective because kids skirt the rules, we try something else.
I think payola diplomas will probably continue to be valuable, since they represent non-falsifiable economic power/sacrifice. Even if schools just literally sold diplomas for 100k, they would still be useful for businesses to filter out people who are too poor to matter (i.e. people whose interests diverge so far from shareholders/management that it would be more trouble than it's worth to try to socialize them into a particular professional role).
This is a bit less cut-and-dried, but IMO cryptocurrency has normalized this kind of view where simply wasting resources is itself a way to generate, or at least represent, value.
Computers were supposed to be bicycles for the mind, but increasingly we want them to think for us.
Well, I see an e-bike analogy around the corner. People don't want to invest the energy anymore, now that they can buy expensive batteries to help with the pedaling. That is pretty much human nature.
They were, but that vision was killed as soon as the phrase you quote was spoken.
LLMs are, in fact, one of the few products in the past decades that - at least for now - align with this vision. That's because they empower the end users directly. Anyone can just go to chatgpt.com or claude.ai to access a tool that will understand their problem, no matter how clumsily formulated, and solve it, or teach them how to solve it, or otherwise address it in a useful fashion. That's pure and quite general force multiplier.
But don't you worry, plenty of corporations and countless startups are hard at work to, like with all computing before, strip down the bicycle and offer you Uber and theme park rides for your mind.
> LLMs are, in fact, one of the few products in the past decades that - at least for now - align with this vision. That's because they empower the end users directly.
Oh BULLSHIT. Computer users have been empowered since the very first programming languages were invented. They simply chose not to engage with them.
8 replies →
Full self driving Teslas for the mind
Lol
I like the implication that they might drive you into the median or the side of a semi truck. Very apt analogy - we built it because we could, without asking whether we should
1 reply →
We have robots do physical chores for us: washing machine, robo-vac etc, so why can't we have robots that do mental chores for us? For most of us, our jobs aren't a pleasure, but a chore necessary to earn money to pay rent. How many factory workers do you think enjoy bolting the same car parts to a car over and over again till retirement?
So if I can outsource the mundane, annoying and repetitive parts of SW development (like typing out the code) to a machine, so that I can focus on the parts I enjoy (debugging, requirements gathering, customer interaction, architecture, etc.), what's wrong with that?
If the end product is good and fulfills the customer's needs, who cares if a large part of it was written by a machine and not by a human?
I also wish we could go back to the days when we were coding in assembly instead of, say, JavaScript, but that's not going to happen professionally for 99% of jobs; you either use JS to ship quickly or get run over by the companies that use JS while you write assembly. ML-assisted coding will be the next step.
> We have robots do physical chores for us: washing machine, robo-vac etc, so why can't we have robots that do mental chores for us?
Sure, we can! That's in some sense what computers are. It's nice that they can quickly multiply two integers far faster than you can. Handing off that mental chore to the computer allows you to do your job better in every way.
The difference (and yes, I know that I'm perhaps falling into the trap of "but this time it's different!") is that AI models are very often used in a completely different capacity. You inspect the plates, load up the dishwasher, run it, and inspect the results. You don't just wave your hand over the kitchen and say "this dirty, do fix", and then blindly trust you'll have clean cutlery in a few hours.
Moreover, the menial tasks and assembly-line work that you describe are all repetitive. Most interesting coding isn't (since code has zero duplication cost, duplicate work is pointless – outside of the obvious things like fun and learning, but you want to keep those out of this discussion anyway).
> So if I can outsource the mundane, annoying and repetitive parts of SW development (like typing the coding) to a machine, so that I can focus on the parts I enjoy (debugging, requirements gathering, customer interaction, architecture etc), what's wrong with that?
Nothing is wrong with that. Except you'll still need to inspect the AI's output. And in order to do that, you'll need to have a good understanding of the problem and how it solved it. Maybe you do. That's excellent! This discussion is lamenting that, seemingly, more and more people don't.
There's middle ground between bolting same parts all day and completely avoiding anything difficult. Both body and mind atrophy when they aren't used and that necessarily includes some repetition.
> So if I can outsource the mundane, annoying and repetitive parts of SW development (like coding) to a machine, so that I can focus on the parts I enjoy (debugging, requirements gathering, customer interaction, etc), what's wrong with that?
That's ok when you already understand programming and can guide the codegen and step in to correct when it generates bullshit. But you don't get to that level without learning programming yourself. Education is built from the ground up towards higher and higher levels of abstraction. You don't get to skip learning arithmetic on your way to learning quantum physics, just because numpy will do all your arithmetic once you get there. In other words, it's ok for people who don't like cooking to order takeout, but you don't become a professional cook this way.
1 reply →
LLMs are narrative machines, not analysis machines.
The article isn’t about code, and on HN we default to that all the time.
> but increasingly we want them to think for us
Which is understandable. All societies are constrained by lack of experts / intelligence. Think about how relatively inaccessible healthcare is, even in rich countries.
Unfortunately, they got batteries for greater mobility and are now more like e-bikes for the mind.
E-bikes are actually better for you than not cycling at all (you still need to push the pedals on a pedelec, and although that's less strenuous than a non-electric bike, it's more exercise than most other forms of transit people would otherwise use): https://www.peopleforbikes.org/news/the-health-benefits-of-e...
2 replies →
Worse, some professors encourage this!
I had a data structures professor (over a year ago now) that actively encouraged a class of sophomores - most of whom were fresh out of "intro to Java" - to have Copilot (GPT-4 at the time I believe) help churn out assignment code on the university's dime.
Being somewhat ahead and an avowed LLM hater, I mostly forgot about this and plowed through the assignments unassisted... until the first midterm (on paper, in person) hit. The mean was something like a 40.
I eventually spoke to some classmates that weren't in my immediate group, and predictably heard several variations on "I let Copilot become a crutch."
Ugh. Fortunately there was ample opportunity to turn grades around, but I'm sure some people are still feeling that bad advice in their GPAs.
> There is a generation of completely unemployable "graduates" in the pipeline.
A friend who's a high-school teacher says all the students want to be software engineers, so there's also a glut of them coming...
Did anyone tell them about the current market for SW developers? :o
I've completed a Master's course in CS, and some students came from other technical fields, like applied mathematics and physics. Of course, after only 2 years they had not learned any particular programming language well, but they learned other CS-related skills and performed experiments in very narrow fields.
This kind of ruins the bogus startup founder narrative of disruption being an unconditionally positive thing.