I teach math at a large university (30,000 students) and have also gone “back to the earth”: pen-and-paper, proctored exams.
Students don’t seem to mind this reversion. The administration, however, doesn’t like this trend. They want all evaluation to be remote-friendly, so that the same course with the same evaluations can be given to students learning in person or enrolled online. Online enrollment is a huge cash cow, and fattening it up is a very high priority. In-person, pen-and-paper assessment threatens their revenue growth model. Anyways, if we have seven sections of Calculus I and one of them is offered online/remote, then none of the seven are allowed any in-person assessment. For “fairness”. Seriously.
LLMs aren't destroying the University or the essay.
LLMs are destroying the cheap University or essay.
Cheap can mean a lot of things, like money or time or distance. But, if Universities want to maintain a standard, then they are going to have to work for it again.
No more 300+ person freshman lectures (where everyone cheated anyways). No more take-home zoom exams. No more professors checked out. No more grad students doing the real teaching.
I guess I'm advocating for the Oxbridge/St. John's approach, with class sizes under 10, where the proctor actually knows you and whether you've done the work. And I know, that is not a cheap way to churn out degrees.
>I guess I'm advocating for the Oxbridge/St. John's approach, with class sizes under 10, where the proctor actually knows you and whether you've done the work. And I know, that is not a cheap way to churn out degrees.
I could understand US tuition if that were the case. These days, with overworked adjuncts, it's McDonald's at Michelin-star prices.
Believe it or not, 300-person freshman lectures can be done well. They just need a talented instructor who's willing to put in the prep, and good TAs leading sections. And if the university fosters the right culture, the students mostly won't cheat.
But yeah, if the professor is clearly checked out and only interested in his research, and the students are being told that the only purpose of their education is to get a piece of paper to show to potential employers, you'll get a cynical death-spiral.
(I've been on both sides of this, though back when copy-pasting from Wikipedia was the way to cheat.)
Over here in Finland, higher education is state funded, and the funding is allocated to universities mostly based on how many degrees they churn out yearly. Whether the grads actually find employment or know anything is irrelevant.
So, it's pretty hard for universities over here to maintain standards in this GenAI world, when the paying customer only cares about quantity, and not quality. I'm feeling bad for the students, not so much for foolish politicians.
After a short stint as a faculty member at a McU institution, I agree with much of this.
Provide machine problems and homework as exercises for students to learn from, but assign a very low weight to these in the overall grade. Butt-in-seat assessments should make up the majority of the course assessment for many courses.
This is depressing. I'm late Gen X, and I didn't cheat in college (engineering, RPI), nor did my peers. Of course, there was very little writing of essays, so that's probably why, not to mention all of our exams were in-person, paper-and-pencil (and this was 1986-1990, so no phones). Literally impossible to cheat. We did have study groups where people explained the homework to each other, which I guess could be called "cheating", but since we all shared, we tended to oust anyone who didn't bring anything to the table. Is cheating through college a common millennial / Gen Z thing?
Cheap "universities" are fine for accreditation. Exams can be administered via in-person proctoring services, which test the bare minimum. The real test would be when students are hired, in the probationary period. While entry-level hires may be unreliable, and even in the best case not help the company much, this is already a problem (perhaps it can be solved by the government or some other outside organization paying the new hire instead of the company, although I haven't thought about it much).
Students can learn for free via online resources, forums, and LLM tutors (the less-trustworthy forums and LLMs should primarily be used to assist understanding of the more-trustworthy online resources). EDIT: students can get hands-on experience via an internship, possibly unpaid.
Real universities should continue to exist for their cutting-edge research and tutoring from very talented people, because that can't be commodified. At least until/if AI reaches expert competence (in not just knowledge but application), but then we don't need jobs either.
There are excellent 1000-student lecture courses and shitty 15-student lecture courses. There are excellent take-home exams and shitty in-class exams. There are excellent grad student teaching assistants and shitty tenured credentialed professors. You can't boil quality down to a checklist.
The masses get the cheap AI education. The elite get the expensive, small class, analog education. There won't be a middle class of education, as in the current system - too expensive for too little gain.
10 is a small number. There's a middle ground. When I studied, we had lectures for all students, and a similar amount of time in "work groups," as they were called. That resembled secondary education: one teacher, around 30 students, but those classes were mainly focused on applying the newly acquired knowledge, doing exercises, asking questions, checking homework, etc. Later, I taught such classes for programming 101, and it was perfectly doable. Work group teachers were also responsible for reviewing their students' tests.
But that commercially oriented boards are ruining education, that's a given. That they would stoop to this level is a bit surprising.
I see that pressure as well. I find that a lot of the problems we have with AI are in fact AI exposing problems in other aspects of our society. In this case, one problem is that the people who do the teaching and know what needs to be learned are the faculty, but the decisions about how to teach are made by administrators. And another problem is that colleges are treating "make money" as a goal. These problems existed before AI, but AI is exacerbating them (and there are many, many more such cases).
I think things are going to have to get a lot worse before they get better. If we're lucky, things will get so bad that we finally fix some shaky foundations that our society has been trying to ignore for decades (or even centuries). If we're not lucky, things will still get that bad but we won't fix them.
Instructors and professors are required to be subject matter experts but many are not required to have a teaching certification or education-related degree.
So they know what students should be taught but I don't know that they necessarily know how any better than the administrators.
I've always found it weird that you need teaching certification to teach basic concepts to kindergartners but not to teach calculus to adults.
I totally agree. I think the neo-liberal university model is the real culprit. Where I live, universities get money for each student who graduates: up to 100k euros for a new doctorate. This means that the University and its admin want as many students to graduate as possible. The (BA & MA) students also want to graduate in the target time: if they do, they get a huge part of their student loans forgiven.
What has AI done? I teach a BA thesis seminar. Last year, when AI wasn't used as much, around 30% of the students failed to turn in their BA theses. A 30% drop-out rate was normal. This year, only 5% dropped out, while the amount of ChatGPT-generated text has skyrocketed. I think there is a correlation: ChatGPT helps students write their theses, so they're not as likely to drop out.
The University and the admins are probably very happy that so many students are graduating. And some colleagues are seeing an upside to this: if more graduate, the University gets more money, which means fewer cuts to teaching budgets, which means the teachers can actually do their jobs and improve their courses for those students who are actually there to learn. But personally, as a teacher, I'm at a loss as to what to do. Some theses had hallucinated sources, some had AI-slop blogs as sources, and the texts are robotic and boring. Should I fail them, out of principle about what the ideal University should be? Nobody else seems to care. Or should I pass them, let them graduate, and reserve my energy for teaching those who are motivated and willing to engage?
In Australia, universities that offer remote study have places in large cities where people can take proctored exams. The course is done remotely, but the exam, which is often 50%+ of the final grade, is taken at a place that offers proctored exams as a service.
The Open University in the UK started in 1969. Their staff have a reputation for good interaction with students, and I have seen very high quality teaching materials produced there. I believe they have always operated on the basis of remote teaching but on-site evaluation. The Open University sounds like an all-round success story and I'm surprised it isn't mentioned more in discussions of remote education.
Variations in this system are in active use in the US as well.
Do you feel it is effective?
It seems to me that there is a massive asymmetry in the war here: proctoring services have tiny incentives to catch cheaters. Cheaters have massive incentives to cheat.
I expect the system will only catch a small fraction of the cheating that occurs.
Where I'm studying, it's proctored online. They have a custom browser that takes over your computer while you're doing the exam. Creepy AF, but it saves travelling 1,300 km to sit an exam.
Can you tell us: is "remote study" a relatively recent phenomenon in AU -- COVID era -- or much older? I am curious to learn more. And what is the history behind it? Was it created/supported because AU is so vast and many people in a state might not live near a campus?
Also: I think your suggestion is excellent. We may see this happen in the US if AI cheating gets out of control (which it well might).
Proctoring services done well could be valuable, but it’s smaller rural and remote communities that would benefit most. Maybe these services could be offered by local schools, libraries, etc.
Those I ask are unanimously horrified that this is the choice they are given. They are devastated that the degree they are working hard for is becoming worthless, yet they all assert they don't want exams back. Many of them are neurodivergent and do miserably in exam conditions but excel in open tasks that allow them to explore, so my sample is biased, but still.
They don't have a solution. As the main victims they are just frustrated by the situation, and at the "solutions" thrown at it by folks who aren't personally affected.
It is always interesting to me when people say they are "bad test takers". You mean you are bad at the part where we find out how much you know? Maybe you just don't know the material well enough.
Caveat: I am not ND, so maybe this is a real concern for some, but in my experience the people who said this did not know the material. And test accommodations are abused by rich kids more than they are utilized by those who need them.
I don't think I understand, as a terrible test taker myself.
The solution I use when teaching is to let evaluation depend primarily on some larger demonstration of knowledge. Most often these are CS classes (e.g., Machine Learning), so I don't give much weight to homework and tests and instead make the course project-driven. I don't care if they use GPT or not. The learning happens by them doing things.
This is definitely harder in other courses. In my undergrad (physics), our professors frequently gave take-home exams. Open book, open notes, open anything but your friends and classmates. This did require trust, but it was usually pretty obvious when people worked together. They cared more about evaluating and pushing those of us who cared than about whether we cheated. The exams required multiple days' worth of work, and you can bet every student was coming to office hours (we had much more access during that time, too). The trust and understanding that effort mattered actually resulted in very little cheating. We felt respected, there was a mutual understanding, and tbh, it created healthy competition among us.
Students cheat because they know they need the grade and that at the end of the day they won't actually be evaluated on what they learned, but rather on what arbitrary score they got. Fundamentally, this requires a restructuring, but that's been a long time coming. The cheating happens because we treated Goodhart's Law as a feature instead of a bug. AI is forcing us to contend with metric hacking; it didn't create it.
IMO exams should be on the easier side and not require much computation (mainly knowledge, and no unnecessary memorization). They should be a baseline, not a challenge, for students who understand the material.
Students are more accurately measured via long, take-home projects, which are complicated enough that they can’t be entirely done by AI.
Unless the class is something that requires quick thinking on the job, in which case there should be “exams” that are live simulations. Ultimately, a student’s GPA should reflect their competence in the career (or possible careers) they’re in college for.
We have an Accessible Testing Center that will administer and proctor exams under very flexible conditions (more time, breaks, quiet/privacy, …) to help students with various forms of neurodivergence. They’re very good and offer a valuable service without placing any significant additional burden on the instructor. Seems to work well, but I don’t have first-hand knowledge of how these accommodations are viewed by the neurodivergent student community. They certainly don’t address the problem of allowing “explorer” students to demonstrate their abilities.
> Many of them are neurodivergent and do miserably in exam conditions
I mean, for every neurodivergent person who does miserably in exam conditions you have one that does miserably in homework essays because of absence of clear time boundaries.
In my undergraduate experience, the location of which shall remain nameless, we had ample access to technology, but the professors were fairly hostile to it and insisted on pencil and paper for all technical classes. There were some English or History classes here and there that allowed a laptop for writing essays during an "exam" that was a 3-hour experience with the professor walking around the whole time. Anyway, when I was younger I thought the pencil-and-paper thing was silly. Why would we eschew brand-new technology that could make us faster! And now that I'm an adult, I'm so thankful they did that. I have such a firm grasp of the underlying theory and the math precisely because I had to write it down, on my own, from memory. I see what these kids do today and they have been so woefully failed.
Teachers and professors: you can say "no". Your students will thank you in the future.
I have a Software Engineering degree from Harvard Extension and I had to take quite a few exams in physically proctored environments. I could very easily manage in Madrid and London. It is not too hard for either the institution or the student.
I am now doing an Online MSc in CompSci at Georgia Tech. The online evaluation and proctoring is fine. I’ve taken one rather math-heavy course (Simulation) and it worked. I see the program however is struggling with the online evaluation of certain subjects (like Graduate Algorithms).
I see your point that a professor might prefer to have physical evaluation processes. I personally wouldn’t begrudge the institution as long as they gave me options for proctoring (at my own expense even) or the course selection was large enough to pick alternatives.
Professional proctored testing centers exist in many locations around the world now. It's not that complicated to have a couple people at the front, a method for physically screening test-takers, providing lockers for personal possessions, providing computers for test administration, and protocols for checking multiple points of identity for each test taker.
This hybrid model is vastly preferable to "true" remote test taking in which they try to do remote proctoring to the student's home using a camera and other tools.
Is it OK for students to submit images of hand-written solutions remotely?
Seriously, it reminds me of my high school days, when a teacher told me I shouldn't type up my essays because then they couldn't be sure I actually wrote them.
Maybe we will find our way back to live oral exams before long…
Centralization and IT-ification have made flouting the common model difficult. There's one common course site on the institution's learning management system for all sections, where assignments are distributed and collected via upload dropbox and grades are tabulated and communicated.
So far, it’s still possible to opt out of this coordinated model, and I have been. But I suspect the ability to opt out will soon come under attack (the pretext will be ‘uniformity == fairness’). I never used to be an academic freedom maximalist who viewed the notion in the widest sense, but I’m beginning to see my error.
I attended Purdue. Since I graduated, it launched its "Purdue Global" online education. Rankings don't suggest it's happened yet, but I'm worried it will cheapen the brand and devalue my degree.
I remember sitting with the faculty in charge of offering online courses when I visited as an alum back in 2014. They seemed to look at it as a cash cow in their presentation. They were eager to be at the forefront of online CS degrees at the time.
Remote learning also opens up a lot of opportunities to people that would not otherwise be able to take advantage of them. So it's not _just_ the cash cow that benefits from it.
Some US universities do this remotely via proctoring software. They require pencil and paper to be used with a laptop that has a camera. Some do mirror scans, room scans, hand scans, etc. The Georgia Tech OMS CS program used to do this for the math proofs course and algorithms (leet code). It was effective and scalable. However, the proctoring seems overly Orwellian, but I can understand the need due to cheating as well as maintaining high standards for accreditation.
Maybe we should consider the possibility that this isn't a good idea? Just a bit? No? Just ignore how obviously comparable this is to the most famous dystopian fiction in literary history?
Just wow. If you're willing to do that, I don't know what to tell you.
Stanford requires pen & paper exams for their remote students; the students first need to nominate an exam monitor (a person) who in turn receives and prints the assignments, meets the student at an agreed upon place, the monitor gives them the printed exams and leaves, then collects the exam after allotted time, scans it and sends it back to Stanford.
When it was generally accepted by our society that the goal of all work is victory, not success. Capitalism frames everything as a competition, even when collaboration is obviously superior. Copyright makes this an explicit rule.
Handwritten essays are inherently ableist. I would be at a massive disadvantage. I grew up during the '60s, but handwriting was always slow and error-prone for me. As soon as I could use a word processor, I blossomed.
It's probably not as bad for mathematical derivations. I still do those by hand since they are more like drawing than expression.
So is testing; people who don't have the skills don't do well. Hell, the entire concept of education is ableist towards learning impaired kids. Let's do away with it entirely.
Would you hire someone as a writer who is completely illiterate? Of course that's an extreme edge case, but at some point equality stops and the ability to do the work is actually important.
I was a slow handwriter, too. I always did badly on in-class essay exams because I didn't have time to write all that I knew needed to be said. What saved my grade in those classes was good term papers.
Having had much occasion to consider this issue, I would suggest moving away from the essay format. Most of the typical essay is fluff that serves to provide narrative cohesion. If knowledge of facts and manipulation of principles are what is being evaluated, presentation by bullet points should be sufficient.
How would you propose to filter out able cheaters instead? There's also the in-person, one-on-one verbal exam, but the economics and logistics of that are insanely unfavorable (see also: job interviews).
I teach computer science / programming, and I don't know what a good AI policy is.
On the one hand, I use AI extensively for my own learning, and it's helping me a lot.
On the other hand, it gets work done quickly and poorly.
Students mistake mandatory assignments for something they have to overcome as effortlessly as possible. Once they're past this hurdle, they can mind their own business again. To them, AI is not a tutor, but a homework solver.
I can't ask them to not use computers.
I can't ask them to write in a language I made the compiler for that doesn't exist anywhere, since I teach at a (pre-university) level where that kind of skill transfer doesn't reliably occur.
So far we do project work and oral exams: project work because it relies on cooperation and the assignment and evaluation are open-ended (there's no singular task description that can be plugged into an LLM); oral exams because it becomes obvious how skilled they are, how deep their knowledge is.
But every year a small handful of dum-dums make it all the way to the exam without having connected two dots, and I have to fail them and tell them that the three semesters they have wasted so far, without any teacher calling their bullshit, were a waste of life and won't lead them to a meaningful existence as a professional programmer.
Teaching Linux basics doesn't suffer the same because the exam-preparing exercise is typing things into a terminal, and LLMs still don't generally have API access to terminals.
Maybe providing the IDE online and observing copy-paste is a way forward. I just don't like the tendency that students can't run software on their own computers.
I'm not that old, and yet my university CS courses evaluated people with group projects, and in-person paper exams. We weren't allowed to bring computers or calculators into the exam room (or at least, not any calculators with programming or memory). It was fine.
I don't see why this is so hard, other than the usual intergenerational whining / a heaping pile of student entitlement.
If anything, the classes that required extensive paper-writing for evaluation are the ones that seem to be in trouble to me. I guess we're back to oral exams and blue books for those, but again...worked fine for prior generations.
Yup. ~25 years ago, competitions / NOI / leet_coding as they call it now were held in a proctored room: computers with no internet access, just plain old Borland C, a few problems, and 3h of typing. All the uni exams were pen & paper. C++ OOP on paper was fun, but IIRC the scoring was pretty lax (i.e., minor typos were usually ignored).
> I don't see why this is so hard, other than the usual intergenerational whining / a heaping pile of student entitlement.
You know that grading paper exams is a lot more hassle _for the teachers_?
Your overall point might or might not still stand. I'm just responding to your 'I don't see why this is so hard'. Show some imagination for why other people hold their positions.
(I'm sure there's lots of other factors that come into play that I am not thinking of here.)
I'm not too old either and in my university, CS was my major, we did group projects and in person paper exams as well.
We wrote C++ on paper for some questions and were graded on it. Of course, the tutors were lenient on the syntax; they cared about the algorithm and the data structures, not so much the code. They did test syntax knowledge as well, but more in code-reasoning segments, i.e., questions like: what's the value of a after these two statements, or after this loop is run?
We also had exams in the lab with computers disconnected from the internet. I don't remember the details of the grading but essentially the teaching team was in the room and pretty much scored us then and there.
> Students mistake mandatory assignments for something they have to overcome as effortlessly as possible.
It has been interesting to see this idea propagate throughout online spaces like Hacker News, too. Even before LLMs, the topic of cheating always drew a strangely large number of pro-cheating comments from people arguing that college is useless, a degree is just a piece of paper, knowledge learned in classes is worthless, and therefore cheating is a rational decision.
Meanwhile, whenever I’ve done hiring or internship screens for college students, it’s trivial to see which students are actually learning the material and which ones treat every stage of their academic and professional careers as a game they need to talk their way through while avoiding the hard questions.
I teach computer science / programming, and I know what a good AI policy is: No AI.
(Dramatic. AI is fine for upper-division courses, maybe. Absolutely no use for it in introductory courses.)
Our school converted a computer lab to a programming lab. Computers in the lab have editors/compilers/interpreters and whitelisted documentation, plus an internal server for grading and submission. No internet access otherwise. We've used it for one course so far with good results, and we're extending it to more courses in the fall.
An upside: our exams are now auto-graded (professors are happy) and students get to compile/run/test code on exams (students are happy).
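For the curious, the "no internet otherwise" part can be approximated at the network level. Here is a minimal sketch using nftables; the addresses for the grading server and docs mirror are hypothetical, and a real lab would also lock down the machines themselves:

```bash
#!/bin/sh
# Hypothetical lab-machine egress policy: drop everything except loopback,
# campus DNS, the internal grading server, and a whitelisted docs mirror.
nft flush ruleset
nft add table inet lab
nft add chain inet lab output '{ type filter hook output priority 0; policy drop; }'
nft add rule inet lab output oif lo accept                            # loopback
nft add rule inet lab output ip daddr 10.0.0.53 udp dport 53 accept   # campus DNS (hypothetical)
nft add rule inet lab output ip daddr 10.0.0.10 tcp dport 443 accept  # grading/submission server (hypothetical)
nft add rule inet lab output ip daddr 10.0.0.20 tcp dport 443 accept  # docs mirror (hypothetical)
```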
>Students mistake mandatory assignments for something they have to overcome as effortlessly as possible.
This is the real demon to vanquish. We're approaching course design differently now (a work in progress) to tie coding exams in the lab to the homework, so that solving the homework (worth a pittance of the grade) is direct preparation for the exam (the lion's share of the grade).
> Our school converted a computer lab to a programming lab. Computers in the lab have editors/compilers/interpreters and whitelisted documentation, plus an internal server for grading and submission. No internet access otherwise. We've used it for one course so far with good results, and we're extending it to more courses in the fall.
Excellent approach. It requires a big buy-in from the school.
Thanks for suggesting it.
I'm doing something for one kind of assignment inspired by the game "bashcrawl", where you have to learn Linux commands through an adventure-style game. I'm bundling it in a container and letting students submit their progress via curl commands, so that you pass after having run a certain set of commands. I'm trying to make the levels unskippable by using tarballs. Essentially, if you can break the game instead of beating it honestly, you get a passing grade, too.
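To make that concrete, here's roughly what one level gate could look like; the file names, endpoint, and token scheme are invented for illustration, not the actual assignment:

```bash
#!/bin/sh
# check.sh - hypothetical gate for level 1 of a bashcrawl-style game.
# The student must have produced the expected artifact by exploring;
# only then is progress reported and the next level unpacked.
if grep -q "open sesame" cave/door.txt 2>/dev/null; then
    # Report progress to a (hypothetical) grading endpoint.
    curl -s -X POST https://grading.example.edu/progress \
         -d "student=$USER&level=1&token=$(cat .level1.token)"
    # The next level ships as a tarball that only this script extracts,
    # so students can't simply cd ahead and skip levels.
    tar xzf .level2.tar.gz && echo "Level 2 unlocked: cd level2"
else
    echo "The door is still shut. Keep exploring."
fi
```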
>Our school converted a computer lab to a programming lab. Computers in the lab have editors/compilers/interpreters and whitelisted documentation, plus an internal server for grading and submission. No internet access otherwise. We've used it for one course so far with good results, and we're extending it to more courses in the fall.
As a higher-education (university) IT admin who is responsible for the CS program's computer labs and is also enrolled in that CS program, I would love to hear more about this setup, please & thank you. As recently as last semester, CS professors have been doing pen-and-paper exams and group projects. This setup sounds great!
Isn't auto-grading cheating by the instructors? Isn't part of their job providing expert feedback by actually reading the code the students have written and offering suggestions for improvement, even on exams? A good educational program treats exams as learning opportunities, not just evaluations.
So if the professors can cheat and they're happy about having to do less teaching work, thereby giving the students a lower-quality educational experience, why shouldn't the students just get an LLM to write code that passes the auto-grader's checks? Then everyone's happy - the administration is getting the tuition, the professors don't have to grade or give feedback individually, and the students can finish their assignments in half an hour instead of having to stay up all night. Win win win!
> Oral exams because it becomes obvious how skilled they are, how deep their knowledge is.
Assuming you have access to a computer lab, have you considered requiring in-class programming exercises, regularly? Those could be a good way of checking actual skills.
> Maybe providing the IDE online and observing copy-paste is a way forward. I just don't like the tendency that students can't run software on their own computers.
And you'll frustrate the handful of students who know what they're doing and want to use a programmer's editor. I know that I wouldn't have wanted to type a large pile of code into a web anything.
You can provide vscode, vim and emacs all in some web interface, and those are plenty good enough for those use cases. Choosing the plugin list for each would also be a good bikeshedding exercise for the department.
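For the VS Code part, the open-source code-server project makes this fairly easy to stand up. A minimal sketch, assuming Docker on a lab server; the password and volume path are placeholders:

```bash
# Serve VS Code in the browser via code-server.
# vim/emacs could be offered alongside through a web terminal such as ttyd.
docker run -d --name exam-ide \
  -p 8080:8080 \
  -v "$PWD/student-work:/home/coder/project" \
  -e PASSWORD=change-me \
  codercom/code-server:latest
# Students open http://<lab-server>:8080; pair this with an egress firewall
# so the container can't reach the open internet during exams.
```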
> But every year a small handful of dum-dums make it all the way to the exam without having connected two dots, and I have to fail them and tell them that the three semesters they have wasted so far, without any teacher calling their bullshit, were a waste of life
Yeah, I've had teachers like that, who tell you that you're a "waste of life" and "what are you doing here?" and "you're dumb", so motivational.
I guess this "tough love" attitude helps for some people? But I think mostly people believe it works for _other_ people; rarely does anyone think it works when applied to themselves.
Like, imagine the school administration walking up to this teacher and saying "hey dum dum, you're failing too many students and the time you've spent teaching them is a waste of life."
Many teachers seem to think that students go to school/university because they're genuinely interested and motivated. But more often than not, they're there because of societal pressure, because they know they need a degree to have any kind of decent living standard, and because their parents told them to. Yeah, you can call them names, call them lazy or whatever, but that's kinda like pointing at poor people and saying they should invest more.
> I use AI extensively for my own learning, and it's helping me a lot. On the other hand, it gets work done quickly and poorly.
> small handful of dum-dums make it all the way to the exam without having connected two dots, and I have to fail them ... won't lead them to a meaningful existence
I don't see a problem, the system is working.
The same group of people who are going to lose their jobs to an LLM aren't getting smarter because of how they are using LLMs.
Ideally the system would encourage those dum-dums to realize they need to change their ways before they're screwed. Unless the system working is that people get screwed and cause problems for the rest of society.
> The same group of people who are going to lose their jobs to an LLM aren't getting smarter because of how they are using LLMs.
Students who use LLMs and professional programmers who use LLMs: I wouldn't say it's necessarily the same group of people.
Sure, their incentives are the same, and they're equally unlikely to maintain jobs in the future.
But students can be told that their approach to become AI secretaries isn't going to pan out. They're not actively sacrificing a career because they're out of options. They can still learn valuable skills, because what they were taught has not been made redundant yet, unlike mediocre programmers who can only just compete with LLM gunk.
I ran one semester embracing AI, and... I don't know, I don't have enough to compare with, but clearly it leaves a lot of holes in people's understanding. They generate stuff that they don't understand. Maybe it's fine. But they're certainly worse programmers than I was after having spent the same time without LLMs.
When I was studying games programming, we used an in-house framework developed by the lecturers for OGRE.
At the time it was optional, but I get the feeling that if they still use that framework, it just became mandatory, because it has no internet-facing documentation.
That said, I imagine they might have chucked it in for Unity before AI hit, in which case they are largely out of luck.
>But every year a small handful of dum-dums make it all the way to the exam without having connected two dots, and I have to fail them and tell them that the three semesters they have wasted so far, without any teacher calling their bullshit, were a waste of life and won't lead them to a meaningful existence as a professional programmer.
This happened to me with my 3D maths class, and I was able to power through a second run. But I am not sure I learned anything super meaningful, other than that I should have been cramming better.
If there is another course where students design their own programming language, maybe you could use the best of the previous year's. That way LLMs are unlikely to be able to (easily) produce correct syntax. Just a thought from someone who teaches in a totally different neck of the mathematical/computational woods.
Modern LLMs can one-shot code in a totally new language, if you provide the language manual. And you have to provide the language manual, because otherwise how can the students learn the language.
I had numerous in-person paper exams in CS (2009-2013) where we had to not only pseudo-code an algorithm from a description, but also do the reverse: say/describe what a chunk of pseudo-code would do.
There you go. Actually that would be a great service, wouldn't it? Having them explain to an LLM what they are doing, out loud, while doing it, online. On a site that you trust to host it.
> Teaching Linux basics doesn't suffer the same because the exam-preparing exercise is typing things into a terminal, and LLMs still don't generally have API access to terminals.
Huh, fighting my way through a Linux CLI is exactly the kind of thing I use ChatGPT for professionally.
I did study it in compsci, but those commands are inherently not memorable.
Yes, LLMs have had API access to terminals for quite a while now. I've been using Windsurf and Claude Code to type terminal commands for me for a long while (and `gh copilot suggest` before that) and couldn't be happier. I still manually review most of them before approving, but I've seen that the chances of the AI getting an advanced incantation right on the first try are much higher than mine, and I haven't yet once had it make a disastrous one, while that's happened to me quite a few times with commands I typed on my own.
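For anyone who hasn't seen that workflow, it looks roughly like this (the suggested command is paraphrased from memory, so treat it as illustrative):

```bash
# One-time install of GitHub's Copilot CLI extension.
gh extension install github/gh-copilot

# Describe the command you want in plain English; Copilot proposes one,
# and you review it before running anything.
gh copilot suggest -t shell "find files over 100MB modified in the last week"
# Suggested: find . -type f -size +100M -mtime -7
```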
> won't lead them to a meaningful existence as a professional programmer
That's where you're wrong. Being a professional programmer is 10% programming, 40% office politics, and 50% project management. If your student managed to get halfway through college without any actual programming skills, they're the perfect candidate, because they clearly own the other 90% of the skills needed to be a professional programmer.
In my experience, it's 70% programming, 20% office politics, and 10% project management. People who realize late that they're no good at programming, or don't enjoy it, will pivot towards other kinds of work, like project management. But people who think they'll have luck managing people without any grasp of the skill set of the people they manage either need really good people skills, or they're obnoxiously incompetent with both humans and computers.
This is only temporary. It will be able to code like anyone in time. The only way around this will be coding in-person, but only in elementary courses. Everyone in business will be using AI to code, so that will be the way in most university courses as well.
IMO no amount of AI should be used during an undergrad education, but I can see how people would react more strongly to its use in these intro-to-programming courses. I don't think there's as much of an issue with using it to churn out some C for an operating systems course or whatever. The main issue with it in programming education is when learning the rudiments of programming IS the point of the course. Same with using it to crank out essays for freshman English courses. These courses are designed to introduce fundamental raw skills that everything else builds on. Someone's ability to write good code isn't as big a deal for classes in OS, algs, compilers, ML, etc., as the main concepts of those courses are.
I’m enrolled in an undergraduate CS program as an experienced (10-year) dev. I find AI incredibly useful as a tutor.
I usually ask it to grade my homework for me before I turn it in. I usually find I didn’t really understand some topic and the AI highlights this and helps set my understanding straight. Without it I would have just continued on with an incorrect understanding of the topic for 2-3 weeks while I wait for the assignment to be graded. As an adult with a job and a family this is incredibly helpful as I do homework at 10pm and all the office hours slots are in the middle of my workday.
I do admit though it is tough figuring out the right amount to struggle on my own before I hit the AI help button. Thankfully I have enough experience and maturity to understand that the struggle is the most important part and I try my best to embrace it. Myself at 18 would definitely not have been using AI responsibly.
When I was in college, if AI had been available, I would have abused it way too much and been much worse off for it.
This is my biggest concern about GenAI in our field. As an experienced dev, I've been around the block enough times to have a good feel for how things should be done, and I can catch when an LLM goes off on a tangent that is a complete rabbit hole. But if this had been available 20 years ago, I would never have learned and become an experienced dev, because I absolutely would have over-relied on an LLM. I worry that 10 years from now, getting a mid-career dev will be like trying to get a COBOL dev now, except COBOL is a lot easier to learn.
I’m wondering what the undergrad CS program is like as an experienced dev, and why you decided to do it. I have been a software developer for 5 years with an EE degree, and as I do more software engineering and less EE, I feel like I am missing some CS concepts that my colleagues have. Is this your situation too, or did you have another reason? And why not a master’s?
A mix of feeling I’m “missing” some CS concepts and just general intellectual curiosity.
I am planning on doing a master’s, but I need some undergrad CS credits to be a qualified candidate. I don’t think I’m going to do the whole undergrad.
Overall my experience has been positive. I’ve really enjoyed Discrete Math and coming to understand how I’ve been using set theory without really understanding it for years. I’m really looking forward to my classes on assembly/computer architecture, operating systems, and networks. They did make me take CS 101-102 as prereqs which was a total waste of time and money, but I think those are the only two mandatory classes with no value to me.
Not GP, but in my experience most MSc programs will require that you have substantial undergrad CS coursework in order to be accepted. There are a few programs designed for those without that background.
I have a friend who is self-medicating untreated adhd with street amphetamines and he talks about it similarly. I can't say with any certainty that either of you is doing anything wrong or even dangerous. But I do think you both are overconfident in your assessment of the risks.
A bit off-topic, but I think AI has the potential to supercharge learning for the students of the future.
Similar to Montessori, LLMs can help students who wander off in various directions.
I remember often being “stuck” on some concept (usually in biology and chemistry), where the teacher would hand-wave something as truth, thus dismissing my request for further depth.
Of course, LLMs in the current educational landscape (homework-heavy) only benefit the students who are truly curious…
My hope is that, with new teaching methods/styles, we can unlock (or just maintain!) the curiosity inherent in every pupil.
(If anyone knows of a tool like this, where an LLM stays on a high-level trajectory of e.g. teaching trigonometry, but allows off-shoots/adventures into other topical nodes, I’d love to know about it!)
>>> Of course, LLMs in the current educational landscape (homework-heavy) only benefit the students who are truly curious
I think you hit on a major issue: Homework-heavy. What I think would benefit the truly curious is spare time. These things are at odds with one another. Present-day busy work could easily be replaced by occupying kids' attention with continual lessons that require a large quantity of low-quality engagement with the LLM. Or an addictive dopamine reward system that also rewards shallow engagement -- like social media.
I'm 62, and what allowed me to follow my curiosity as a kid was that the school lessons were finite, and easy enough that I could finish them early, leaving me time to do things like play music, read, and learn electronics.
And there's something else I think might be missing, which is effort. For me, music and electronics were not easy. There was no exam, but I could measure my own progress -- either the circuit worked or it didn't. Without some kind of "external reference" I'm not sure that in-depth research through LLMs will result in any true understanding. I'm a physicist, and I've known a lot of people who believe that they understand physics because they read a bunch of popular books about it. "I finally understand quantum mechanics."
> I'm 62, and what allowed me to follow my curiosity as a kid was that the school lessons were finite, and easy enough that I could finish them early, leaving me time to do things like play music, read, and learn electronics.
I see both sides of this. When I was a teenager, I went to a pretty bad middle school where there were fights everyday, and I wasn’t learning anything from the easy homework. On the upside, I had tons of free time to teach myself how to make websites and get into all kinds of trouble botting my favorite online games.
My learning always hit a wall though because I wasn’t able to learn programming on my own. I eventually asked my parents to send me to a school that had a lot more structure (and a lot more homework), and then I properly learned math and logic and programming from first principles. The upside: I could code. The downside: there was no free time to apply this knowledge to anything fun
>I'm 62, and what allowed me to follow my curiosity as a kid was that the school lessons were finite, and easy enough that I could finish them early, leaving me time to do things like play music, read, and learn electronics.
Yeah, I feel like teachers are going to try to use LLMs as an excuse to push more of the burden of schooling onto their pupils' home life somehow. Like increasing homework burdens to compensate.
Spare time, haha. Most people nowadays have a hard time having some dead time. The habitual checking of socials and feeds has killed mind-wandering time. People feel uncomfortable, or consider life boring, without the device-induced dopamine fix. Corporations have got us by the balls.
The last thing I need when researching a hard problem is an interlocutor who might lie to me, make up convincing citations to nowhere, and tell me more or less what I want to hear.
Still better than the typical classroom experience. And you can always ask again, there's no need to avoid offending the person who has a lot of power over you.
The longer I go without seeing cases of AI supercharging learning, the more suspicious I get that it just won't. And no, self-reports that it makes internet denizens feel super educated don't count.
The problem is that many students come to university unequipped with the discipline it takes to actually study. Teaching students how to effectively learn is a side-effect of university education.
> I remember often being “stuck” on some concept (usually in biology and chemistry), where the teacher would hand-wave something as truth, thus dismissing my request for further depth.
This resonates with me a lot. I used to dismiss AI as useless hogwash, but have recently done a near total 180 as I realised it's quite useful for exploratory learning.
Not sure about others but a lot of my learning comes from comparison of a concept with other related concepts. Reading definitions off a page usually doesn't do it for me. I really need to dig to the heart of my understanding and challenge my assumptions, which is easiest done talking to someone. (You can't usually google "why does X do Y and not Z when ABC" and then spin off from that onto the next train of reasoning).
Hence ChatGPT is surprisingly useful. Even if it's wrong some of the time. With a combination of my baseline knowledge, logic, cross referencing, and experimentation, it becomes useful enough to advance my understanding. I'm not asking ChatGPT to solve my problem, more like I'm getting it to bounce off my thoughts until I discover a direction where I can solve my problem.
I am building such an AI tutoring experience, focusing on a Socratic style with product support for forking conversations onto tangents. Happy to add you to the waitlist, will probably publish an MVP in a few weeks.
I haven’t personally tried it, but the high-level demos of “khanmigo” created by khan academy seem really promising. I’ll always have a special place in my heart (and brain) for the work of Sal Khan and the folks at khan academy.
Yeah, this is a good point: just adjust coursework from multiple-choice tests and fill-in-the-blank homework to larger-scale projects.
Putting together a project using AI help will be a very close mimicry of what real work is like, and if the teacher is good, students will learn way more than by being able to spout information from memory.
I've always thought that the education system was broken and next to worthless. I've never felt that teachers ever tried to _teach_ me anything, certainly not how to think. In fact, I saw most attempts at thought squashed because they didn't fit neatly into the syllabus (and so couldn't be graded).
The fact that AI can do your homework should tell you how much your homework is worth. Teaching and learning are collaborative exercises.
> The fact that AI can do your homework should tell you how much your homework is worth.
Homework is there to help you practise these things and help you progress, to find the areas where you need help and more practise. It is collaborative: it's you, your fellow students, and your teachers/professors.
I'm sorry that you had bad teachers, or had needs that weren't being met by the education system. That is something that should be addressed. I just don't think it's reasonable to completely dismiss a system that works for the majority. Being mad at the education system isn't really a good reason to say "AI/computers can do all these things, so why bother practising them?"
Schools should teach kids to think, but if the kids can't read or reasonably do basic math, then expecting them to have independent critical thinking seems a way off. I don't know about you, but one of the clear lessons of "problem math" in school was learning to reason about numbers and results, e.g., is it reasonable that a bridge spans 43,000 km? It isn't (that's longer than the Earth's circumference of roughly 40,000 km), so you probably did something wrong in your calculations.
These conversations are always eye-opening for the number of people who don’t understand homework. You’re exactly right that it’s practice. The test is the test (obviously) and the homework is practice with a feedback loop (the grade).
Giving people credit for homework helps because it gives students a chance to earn points outside of high pressure test times and it also encourages people to do the homework. A lot of people need the latter.
My friends who teach university classes have experimented with grading structures where homework is optional and only exam scores count. Inevitably, a lot of the class fails the exams because they didn’t do any practice on their own. They come begging for opportunities to make it up. So then they circle back to making the homework required and graded as a way to get the students to practice.
ChatGPT short circuits this once again. Students ChatGPT their homework then fail the first exam. This time there is little to do, other than let those students learn the consequences of their actions.
> The fact that AI can do your homework should tell you how much your homework is worth.
A lot of people who say this kind of thing have, frankly, a very shallow view of what homework is. A lot of homework can be easily done by AI, or by a calculator, or by Wikipedia, or by looking up the textbook. That doesn't invalidate it as homework at all. We're trying to scaffold skills in your brain. It also didn't invalidate it as assessment in the past, because (e.g.) small kids don't have calculators, and (e.g.) kids who learn to look up the textbook are learning multiple skills in addition to the knowledge they're looking up. But things have changed now.
Completely agree - I always thought the framing of "exercises" is the right one, the point is that your brain grows by doing. It's been possible for a long time to e.g. google a similar algebra problem and find a very relevant math stackexchange post, doesn't mean the exercises were useless.
"The fact that forklift truck can lift over 500kg should tell you how worthwhile it is for me to go to a gym and lift 100kg." - complete non-sequitur.
Then maybe the homework assignment has been poorly chosen. I like how the article's author has decided to focus on the process and not the product and I think that's probably a good move.
I remember one of my kids' math teachers talking about wanting to switch to an inverted classroom. The kids would be asked to read some part of their textbook as homework and then work through exercise sheets in class. To me, that seemed like a better way to teach math.
> But things have changed now.
Yep. Students are using AIs to do their homework and teachers are using AIs to grade.
Yep: making time to sit down and do homework, learning to plan out the doing, forming good habits of doing it, knowing how to look things up, in a book index, on Wikipedia, by searching, or by asking AI.
The expectation is still that some kind of text output needs to be found and then read, digested.
> The fact that AI can do your homework should tell you how much
you still have to learn. The goal of learning is not to do a job. It's to enrich you, broaden your mind, and it takes work on your part.
By similar reasoning, you could argue that you can take a car to go anywhere, or have everything delivered to your doorstep, so why should my child learn to walk?
The fact that AI can replace the work that you are measured on should tell you something about the measurement itself.
The goal of learning should be to enrich the learner. Instead, the goal of learning is to pass measure. Success has been quietly replaced with victory. Now LLMs are here to call that bluff.
As a student, you can make "getting the diploma" the only goal, and so it rests entirely on the educators and the institution to ensure that the only way you can do that is by learning the material and becoming competent in its applications.
However, you can instead recognize the difficulty and time that this would require on the part of the educator, and therefore the expense to the student, and you can recognize that your goal is not just obtaining a piece of paper but actually learning a skill. With this mindset, it makes sense to take the initiative and treat the homework as an opportunity to learn and practice. It's one of those things that's worth as much as you put into it. Of course, one can use one's judgement to decide which homework is worth spending time on to learn the material, and which can be safely sailed through with minimum effort.
Having a skilled teacher that you can really collaborate with and who can spend the time to evaluate your skills in a personal way is of course going to lead to better learning outcomes than the traditional education system. It will also be far more expensive. Although, AI is offering something somewhat akin to this experience at a much lower price, to those who are able to moderate their usage so that they are learning from the AI instead of just offloading tasks to it.
Homework isn't about doing the homework; it's about teaching you to learn, and providing evidence that you have learned and can learn. Yeah, you can have an AI do it just as much as you can have someone else do it, but that doesn't teach you anything, and if you earn the paper at the end of it, it's effectively worthless.
Unis should adjust their testing practices so that their paper (and their name) doesn't become worthless. If AI becomes a skill, it should be tested, graded, and certified accordingly. That is, separate the computer science degree from the AI Assisted computer science degree.
Current AI can ace math and programming psets at elite institutions, and yet prior to GPT not only did I learn loads from the homework, I often thoroughly enjoyed it too. I don’t see how you can make that logical leap.
It's a problem of incentives. For many courses, the psets make up a large chunk of your grade. Grades determine your suitability for graduate school, internships, jobs, etc. So if your final goal is one of those, then you are highly incentivized to get high grades, not necessarily to learn the material.
> The fact that AI can do your homework should tell you how much your homework is worth.
I mean... if you removed the substring "home" from that sentence, is it still true in your opinion?
That is, do you believe that because AI can perform some task, that task must not have any value? If there's a difference, help me understand it better please.
No that's not exactly what I meant. It's not that "If AI can do X then X is worthless", but rather in _this_ case it's inappropriate.
Homework should be part of the collaborative process of learning (as others above have already elaborated). If teachers are having a problem with AI-generated homework being submitted, it shows the system is broken, because they can't have been collaborating with students on their learning in the first place.
> Teaching and learning are collaborative exercises.
That's precisely where we went wrong. Capitalism has redefined our entire education system as a competition; just like it does with everything else. The goal is not success, it's victory.
If the trend continues, it seems like most college degrees will be completely worthless.
If students using AI to cheat on homework are graduating with a degree, then it has lost all value as a certificate that the holder has completed some minimum level of education and learning. Institutions that award such degrees will be no different than degree mills of the past.
I’m just grateful my college degree has the year 2011 on it, for what it’s worth.
All of the best professors I had either did not grade homework or weighted it very small and often on a did-you-do-it-at-all basis and did not grade attendance at all. They provided lectures and assignments as a means to learn the material and then graded you based on your performance in proctored exams taken either in class or at the university testing center.
For most subjects at the university level graded homework (and graded attendance) has always struck me as somewhat condescending and coddling. Either it serves to pad out grades for students who aren't truly learning the material or it serves to force adult students to follow specific learning strategies that the professor thinks are best rather than giving them the flexibility they deserve as grown adults.
Give students the flexibility to learn however they think is best and then find ways to measure what they've actually learned in environments where cheating is impossible. Cracking down on cheating at homework assignments is just patching over a teaching strategy that has outgrown its usefulness.
> rather than giving them the flexibility they deserve as grown adults
I have had so many very frustrating conversations with full grown adults in charge of teaching CS. I have no faith at all that students would be able to choose an appropriate method of study.
My issue with the instruction is the very narrow belief in the importance of certain measurable skills. VERY narrow. I won’t go into details, for my own sanity.
> All of the best professors I had either did not grade homework or weighted it very small and often on a did-you-do-it-at-all basis and did not grade attendance at all. They provided lectures and assignments as a means to learn the material and then graded you based on your performance in proctored exams taken either in class or at the university testing center.
I have the opposite experience - the best professors focused on homework and projects and exams were minimal to non-existent. People learn different ways, though, so you might function better having the threat/challenge of an exam, whereas I hated having to put everything together for an hour of stress and anxiety. Exams are artificial and unlike the real world - the point is to solve problems, not to solve problems in weirdly constrained situations.
Maybe schools and universities need to stop considering homework to be evidence of subject matter mastery. Grading homework never made sense to me. What are you measuring, really, and how confident are you of that measurement?
You can't put the toothpaste back into the tube. Universities need to accept that AI exists, and adjust their operations accordingly.
Homework can still serve two purposes:
- Provide an incentive for students to do the thing they should be doing anyway.
- Give an opportunity to provide feedback on the assignment.
It is totally useless as an evaluation mechanic, because of course the students that want to can just cheat. It's usually pretty small anyway, right? IIRC when I did tutoring we only gave like 10-20% for the aggregate homework grade.
> If the trend continues, it seems like most college degrees will be completely worthless.
I suspect the opposite: Known-good college degrees will become more valuable. The best colleges will institute practices that confirm the material was learned, such as emphasizing in-person testing over at-home assignments.
Cheating has actually been rampant at the university level for a long time, well before LLMs. One of the key differentiators of the better institutions is that they are harder to cheat to completion.
At my local state university (where I have friends on staff) it's apparently well known among the students that if they pick the right professors and classes they can mostly skate to graduation with enough cheating opportunity to make it an easy ride. The professors who are sticklers about cheating are often avoided or even become the targets of ratings-bombing campaigns.
I tried re-enrolling in a STEM major last year, after a higher-education "pause" of 16-ish years. 85% of the class used GPTs to solve homework, and it was quite obvious most of them hadn't even read the assignment.
The immediate effect was the professors' distrust towards most everyone, and lots of classes felt like some kind of babysitting scheme, which I did not appreciate.
> I’m just grateful my college degree has the year 2011 on it, for what it’s worth.
College students still cram and purge. Nobody forced to sit through OChem remembers their Diels-Alder reaction except the organic chemists.
College degrees probably don't have as much value as we've historically ascribed to them. There's a lot of nostalgia and tradition pent up in them.
The students who do the best typically fill their schedule with extra-curricular projects and learning that isn't dictated by professors and grading curves.
I've been hiring people for the better part of 15 years and I never considered degrees to be valuable beyond the fact that they show you're able to stick with one project for a sustained period of time. My impression is that unless your degree confers something required for a job where human risk is involved, most degrees are worth very little, and most serious people know that.
To be clear, I think that most college degrees were generally low value (even my own), but still had some value. The current trend will be towards zero value unless something changes.
This is not related to "AI", but I have an amusing story about online cheating.
* I have a nephew who was switched into online college classes at the beginning of the pandemic.
* As soon as they switched to online, the class average on the exams shot up, but my nephew initially refused to cheat.
* Eventually he relented (because everyone else was doing it) and he pasted a multitude of sticky notes on the wall at the periphery of his computer monitor.
* His father walks into his room, looks at all the sticky notes and declares, "You can't do this!!! It'll ruin the wallpaper!"
If you’re hiring humans just to use AI, why even hire humans? Either AI will replace them or employers will realize that they prefer employees who can think. In either case, being a human who specializes in regurgitating AI output seems like a dead end.
TBF this problem doesn’t seem that new to me. I was forced to do my lab work in Vim and C via SSH because the faculty felt that Java IDEs with autocomplete were doing a disservice to learning.
It was (to some degree), and could still be. The status quo was more effective, relatively speaking, before the AI boom. The status quo appears to be trending towards ineffective, post-AI boom.
So in order to remain useful, the status quo of higher education will probably have to change in order to adapt to the ubiquity of AI, and LLMs currently.
Just because you can cheat at something doesn't mean doing it legitimately isn't useful.
Thinking of that: we have built these expensive machines, with massive investments, to output what we expect college students to output... Wouldn't that tell us that maybe that output has some value, intent, or use? Or we would not have spent those resources...
Just because a machine can do something doesn't mean humans shouldn't be able to do it too. Say, reading a text aloud.
I mean, this seems a solved problem: pen-and-paper written onsite exams + blackboard-and-chalk oral onsite exams. If this is too costly (is it? many countries manage), make students take them less often.
I teach at a small university. These are some of the measures we take:
- Hand written midterms and exams.
- The students should explain how they designed and how they coded their solutions to programming exercises (we have 15-20 students per class; with more students it becomes more difficult).
- Presentations of complex topics (after which the rest of the students should comment on something, ask a question, anything related to the topic).
- Presentation of one page of handwritten notes, diagrams, mind maps, etc., about the content discussed.
- Last-minute changes to the more elaborate programming labs, to be resolved in class (for example, "the client" changed its mind about some requirement or asked for a new feature).
The real problem is that it is a (lot) more work for the teachers and not everyone is willing to "think outside of the box".
Back when I was doing my BSc in Software Engineering, we had a teacher who did her Data Structures and Algorithms exams with pen and paper. On one of them, she gave us 4 coding problems (each solvable in a short program of ~30 LOC).
We had to write the answers with pen and paper, writing the whole program in C. The teacher would score it by transcribing the verbatim text into her computer, and if it had one single error (a missed semicolon) or didn't compile for some reason, the whole thing was considered wrong (each question was 25% of the exam score).
I remember I got 1 wrong (missed semicolon :( ) and got a 75% (on a 1-100 point scale). It's crazy how we were able to do that sort of thing in the old days.
We definitely exercised our attention to detail and concentration muscles with that teacher.
Yes, pen and paper. The approach is to pseudocode the solution, minor syntax errors aren’t punished (and indeed are generally expected anyway). The point is to simply show that you understand and can work through the concepts involved, it’s not being literally compiled.
Writing a small algorithm with pen & paper on programming exams in universities of all sizes was still common when I was in uni in the 2010s and there’s no reason to drop that practice now.
One of the most offensive words in the anthropomorphization of LLMs is: hallucinate.
It's not only an anthropomorphism, it's also a euphemism.
A correct interpretation of the word would imply that the LLM has some fantastical vision that it mistakes for reality. What utter bullsh1t.
Let's just use the correct word for this type of output: wrong.
When the LLM generates a sequence of words that may or may not be grammatically correct but implies a state or conclusion that is not factually correct, let's state what actually happened: the LLM-generated text was WRONG.
It didn't take a trip down Alice's rabbit hole; it just put words together into a stream that implied a piece of information that was incorrect. It was just WRONG.
The euphemistic aspect of using this word is a greater offense than the anthropomorphism, because it's painting some cutesy picture of what happened instead of accurately acknowledging that the s/w generated an incorrect result. It's covering up for the inherent shortcomings of the tech.
When a person hallucinates a dragon coming for them, they are wrong, but we still use a different word to more precisely indicate the class of error.
Not all LLM errors are hallucinations. If an LLM tells me that 3 + 5 is 7, it's just wrong. If it tells me that the source for 3 + 5 being 7 is a seminal paper entitled "On the relative accuracy of summing numbers to a region ±1 from the fourth prime", we would call that a hallucination. In modern parlance, "hallucination" has become a term of art for a particular class of error that LLMs are prone to. (Others have argued that "confabulation" would be more accurate, but it hasn't really caught on.)
It's perfectly normal to repurpose terms and anthropomorphizations to represent aspects of the world or systems that we create. You're welcome to try to introduce other terms that don't include any anthropomorphization, but saying it's "just wrong" conveys less information and isn't as useful.
I think your defense of reusing terms for new phenomena is fair.
But in this specific case, I would say the reuse of this particular word, to apply to this particular error, is still incorrect.
A person hallucinating is based on a many leveled experience of consciousness.
The LLM has nothing of the sort.
It doesn't have a hierarchy of knowledge which it is sorting to determine what is correct and what is not. It doesn't have a "world view" based on a lifetime of that knowledge sorting.
In fact, it doesn't have any representation of knowledge at all. Much less a concept of whether that knowledge is correct or not.
What it has is a model of what words came in what order, in the training set on which it was "trained" (another, and somewhat more accurate, anthropomorphism).
So without anything resembling conscious thought, it's not possible for an LLM to do anything even slightly resembling human hallucination.
As such, when the text generated by an LLM is not factually correct, it's not an hallucination, it's just wrong.
I teach an "advanced" shell scripting course with an exam.
I mark "hallucinations" as "LLM Slop" in my grading sheets: when someone gives me a 100-character sed filter that just doesn't work and that there is no way we discussed in class/in examples/in materials, or a made-up API endpoint, or nonsensical file paths that reference non-existent commands.
Slop is an overused term these days, but it sums it up for me. Slop, from a trough, thrown out by an uncaring overseer, to be greedily eaten up by the piggies, who don't care if it's full of shit.
When I did my MBA, my process for writing an essay went like this:
- decide what I wanted to say about the subject, from the set of opinions I already possessed
- search for enough papers that could support that position. Don't read the papers, just scan the abstracts.
- write the essay, scanning the reference papers for the specific bits that best supported the points I wanted to make.
There was zero learning involved in this process. The production of the essay was more about developing journal search skills than absorbing any knowledge about the subject. There are always enough papers to support any given point of view, the trick was finding them.
I don't see how making this process even more efficient by delegating the entire thing to an LLM is affecting any actual education here.
Confession. I became disillusioned with a teacher of a subject in school, who I was certain had taken a disliking to me.
I tested it by getting hold of a paper which had received an A from another school on the same subject, copying it verbatim and submitting it for my assignment. I received a low grade.
Despite confirming what I suspected, it somehow still wasn't a good feeling.
To be honest, that's a problem on your part. It is completely possible to write a paper on anything, using the scientific method as your framework.
But the problem is that in many cases, the degrees (like MBA, which I too hold) are merely formalities to move up the corporate ladder, or pivot to something else. You don't get rewarded extra for actually doing science. And, yes, I've done the exact same thing you did, multiple times, in multiple different classes. Because I knew that if what I did just looked and sounded proper enough, I'd get my grade.
To be fair, one of the first things I noticed when entering the "professional" workforce, was that the methodology was the same: Find proof / data that supports your assumptions. And if you can't find any, find something close enough and just interpret / present it in a way that supports your assumptions.
No need for any fancy hypothesis testing, or having to conclude that your assumptions were wrong. Like it is not your opinion or assumption anyway, and you don't get rewarded for telling your boss or clients that they're wrong.
Is there even such a thing as the "science of business"? One can form a hypothesis, and then conduct an experiment, but the experimental landscape is so messy that eliminating all other considerations is impossibly hard.
For example, there's a popular theory that the single major factor in startup success is timing - that the market is "ready" for ideas at specific times, and getting that timing right is the key factor in success. But it's impossible to predict when the market timing is right, you only find out in retrospect. How would you ever test this theory? There are so many other factors, half of which are outside the control of the experimenter, that you would have to conduct the experiment hundreds of time (effectively starting and failing at hundreds of startups) to exclude the confounding factors.
To be honest, I found "the material" irrelevant, mostly. There's vast swathes of papers written about obscure and tiny parts of the overall subject. Any given paper is probably correct, but covering such a tiny part of the subject that spending the time reading all of them is inefficient (if not impossible).
Also, given that the subject in question is "business", and the practice of business was being changed (as it is again now) by the application of new technology, so a lot of what I was reading was only borderline applicable any more.
MBAs are weird. To qualify to do one you need to have years of practical experience managing in actual business. But then all of that knowledge and experience is disregarded, and you're expected to defer to papers written by people who have only ever worked in academia and have no practical experience of what they're studying. I know this is the scientific process, and I respect that. But how applicable is the scientific process to management? Is there even a "science" of management?
So, like all my colleagues, I jumped through the hoops set in front of me as efficiently as possible in order to get the qualification.
I'm not saying it was worthless. I did learn a lot. The class discussions, hearing about other people's experiences, talking about specific problems and situations, this was all good solid learning. But the essays were not.
> - search for enough papers that could support that position. Don't read the papers, just scan the abstracts.
Who wrote those papers? How did they learn to write them? At some point, somebody along the chain had to, you know, produce an actual independent thought.
Interesting question. It seems to me that the entire business academia could be following the method I've outlined and no-one would notice. Or care.
It's not like the hard sciences - no-one is able to refute anything, because you can't conduct experiments. You can always find some evidence for any given hypothesis, as the endless stream of self-help (and often contradictory) business books show.
None of the academics I was reading had actually run a business or had any practical experience of business. They were all lifelong academics who were writing about it from an academic perspective, referencing other academics.
Business is not short of actual independent thought. Verification is the thing it's missing. How does anyone know that the brilliant idea they just had is actually brilliant? The only way is to go and build a business around it and see if it works. Academics don't do that. How is this science then?
Someone needs to experience the real world and translate it into LLM training data.
ChatGPT can’t know if the cafe around the corner has banana bread, or how it feels to lose a friend to cancer. It can’t tell you anything unless a human being has experienced it and written it down.
Capitalism barely concerns itself with humans and whether human experiences exist or not is largely irrelevant for the field. As far as capitalism knows, humans are nothing but a noisy set of knobs that regulate how much profit one can make out of a situation. While tongue-in-cheek, this SMBC comic [1] about the Ultimatum game is an example of the type of paradoxes one gets when looking at life exclusively from an economics perspective.
The question is not "what's the value of a human under capitalism?" but rather "how do we avoid reducing humans to their economic output?". Or in different terms: it is not the blender's job to care about the pain of whatever it's blending, and if you find yourself asking "what's the value of pain in a blender-driven world?" then you are solving the wrong problem.
I’m similarly worried about businesses all making “rational” decisions to replace their employees with “AI”, wherever they think they can get away with it. (Note that’s not the same thing as wherever “AI” can do the job well!)
But I think one place where this hits a wall is liability and accountability. Lots of low stakes things will be enshittified by “AI” replacements for actual human work. But for things like airline pilots, cancer diagnoses, heart surgery - the cost of mistakes is so large, that humans in the loop are absolutely necessary. If nothing else, at least as an accountability shield. A company that makes a tumor-detector black box wants to be an assistive tool to improve doctor’s “efficiency”, not the actual front line medical care. If the tool makes a mistake, they want no liability. They want all the blame on the doctor for trusting their tool and not double checking its opinion. I hear that’s why a lot of “AI” tools in medicine are actually reducing productivity: double checking an “AI’s” opinion is more work than just thinking and evaluating with your own brain.
The funny thing is my first thought was "maybe reduced nominal productivity through increased thoroughness is exactly what we need when evaluating potential tumors". Keeping doctors off autopilot, and not so narrowly focused that radiologists fail to see hidden gorillas in x-rays. And yes, that was a real study.
The "value of a human" - same in this age as it has always been - is our ability to be truly original and to think outside the box. (That's also what makes us actually quite smart, and what makes current cutting-edge "AI" actually quite dumb).
AI is incapable of producing anything that's not basically a statistical average of its inputs. You'll never get an AI Da Vinci, Einstein, Kant, Pythagoras, Tolstoy, Kubrick, Mozart, Gaudi, Buddha, nor (most ironically?) Turing. Just to name a few historical humans whose respective contributions to the world are greater than the sum of the world's respective contributions to them.
Have you tried image generation? It can easily apply high level concepts from one area to another area and produce something that hasn't been done before.
Unless you loosen the meaning of statistical average so much that it ends up including human creativity. At the end of the day it's basically the same process of applying an idea from one field to another.
Most humans are not Da Vinci, Einstein, Kant, etc. Does that make them not valuable as humans?
You should determine your own value if you don't want to be controlled by anyone else.
If you don't want to determine your own value, you're probably no worse off letting an AI do that than anything else. Religion is probably more comfortable, but I'm sure AI and religion will mix before too long.
I use AI to help my high-school-age son with his AP Lang class. Crucially, I cleared all of this with his teacher beforehand. The deal was that he would do all his own work, but he'd be able to use AI to help him edit it.
What we do is he first completes an essay by himself, then we put it into a Claude chat window, along with the grading rubric and supporting documents. We instruct Claude not to change his structure or tone, but to edit for repetitive sentences, word count, correct grammar, and spelling, and to make sure his thesis is sound and pulled through the piece. He then takes that output and compares it against his original essay paragraph by paragraph, looking at what changes were made and why, and crucially, whether he thinks it's better than what he originally had.
This process is repeated until he arrives at an essay that he's happy with. He spends more time doing things this way than he did when he just rattled off essays and tried to edit on his own. As a result, he's become a much better writer, and it's helped him in his other classes as well. He took the AP test a few weeks ago and I think he's going to pass.
To offer a flip side of the coin, I can't imagine I would have the patience outside of school, to have learned Rust this past year without AI.
Having a personal tutor who I can access at all hours of the day, and who can answer off hand questions I have after musing about something in the shower, is an incredible asset.
At the same time, I can totally believe if I was teleported back to school, it would become a total crutch for me to lean on, if anything just so I don't fall behind the rest of my peers, who are acing all the assignments with AI. It's almost a game theoretic environment where, especially with bell curve scaling, everyone is forced into using AI.
Different times have different teaching tasks; that is a sign of human progress.
Just as, after the invention of computers, methods for doing manual calculations faster could be dropped from the curriculum. Education shifted towards teaching students how to use computational tools effectively. This allowed students to solve more complex problems and work on higher-level concepts that manual calculations couldn't easily address.
In the era of AI, what teachers need to think about is not how to punitively prohibit students from using AI, but how to adjust the teaching content so that AI helps students master the subject faster and better.
On one hand I tend to agree because these students will also be able to use AI when they actually hit the workplace, but on the other hand it has never happened that the tools we use are better than us at so many tasks.
How long before a centaur team of human + AI is less effective than the AI alone?
As an engineering undergrad, I don't think any online work should count toward the student's grade, unless you're allowed to use the Internet however you want to complete it. There simply isn't any other way of structuring the course that doesn't punish the honest students.
I think we (as in, the whole species) need to reflect on what the purpose of education is and what it should be, because in theory there's no reason why anybody should pay for a college tuition and then undermine their own mastery of the subject. Obviously 90% of the student body sees it as a ticket to being taken seriously by prospective employers, and the other 10% definitely does not deserve to be taken seriously by prospective employers, because they can't even admit an uncomfortable truth about themselves.
Anyways, this isn't actually useful advice, because no one person can enact change on a societal scale, but I do enjoy standing on this soapbox and yelling at people.
BTW academic success has never been a fair measure of anything, standards and curriculum vary widely between institutions. I spent four years STRUGGLING to get a 3.2 GPA in high school then when I got to undergrad we had to take this "math placement exam" that was just basic algebra and I only had difficulty with one or two problems but I knew several kids with >= 4.0 GPA who had to take remedial algebra because they failed.
But somehow there's always massive pushback against standardized testing even when they let you take it over and over and over again until you get the grade you wanted (SAT).
i was actually trying to accuse the 10% of lying to themselves on a subconscious level, because the portion of undergraduates who actually came there to learn and not just because it's a roadblock in the way of gainful employment is a rounding error.
More to the point, the universities need to realize they're more like job certification centers and stop pretending their students aren't just there to take tests and get certified. Ideally they'd stop co-operating with employers that want to use them as a filter for their hiring process instead but even I'm not dumb enough to think that could ever happen, they'd be cutting off a massive source of revenue and putting themselves at a competitive disadvantage.
Like I said I don't actually have a viable solution to any of this but as long we all lie to ourselves about education being some noble institution that it clearly isn't (i mean for undergrad and masters, it might actually still be that at the phd level) then nobody will ever solve anything.
I predict that asking students to hand-write assignments is not going to go well. Unfortunately, universities built on the consumer model (author teaches at Arizona State) are incentivized to listen to student feedback over the professor’s good intentions.
So don't accredit universities that want to turn into degree mills.
Beat this game of prisoner's dilemma with a club at the accreditation level. Students can complain all they want, but if they want a diploma which certifies that they are able to perform the skills they learned, they will have to actually perform those skills.
> So don't accredit universities that want to turn into degree mills.
This is way outside the scope of something that a faculty member who is, as the article says, trying to teach has any hope of implementing within a reasonable time frame. Of course the ideal is that faculty, as major stakeholders in the educational institution, should be active in all levels of university governance, but I think it is important to realize how much of a prerequisite there is for an individual professor even to get their voice heard by an accrediting body, let alone to change its accrediting procedures.
That's setting aside the fact that, even if faculty really mobilized to make such changes, in the absolute best case the changes would be slow to implement, and the effects would be slow to manifest, as universities are on multi-year accreditation cycles and there would need to be at least a few reputable universities that were disaccredited before others started taking the guidance seriously. Even if I were willing to throw everything into the politics of university governance, which would make my teaching suffer immensely, I'm not willing to say that we'll just have to wait a decade to see the effects.
The consumer model isn't all bad. But it can lead to wildly different outcomes based on self-selection and incentives.
Take gyms, for example. You have your cheap commodity convenience gyms like Planet Fitness, where a lot of people sign up (especially at the beginning of the year) but few actually stick to it to get any real gains. But you also have pricy fitness clubs with mandatory beginner classes, where the member base tends to be smaller but very committed to it.
I feel like students that are OK with just phoning it in with AI fall into the Planet Fitness mindset. If you're serious about gains (physically or intellectually), you'll listen to the instructors and persist through uncomfortable challenges.
I think a better approach might be to get students to use AI as a writing coach. Get them to commit to a short handwritten essay during class, then use AI to give them feedback on the essay. Their interaction with the AI and how they respond to the feedback becomes the assessment material. That's not compatible with the author's "Butlerian Jihad" ideology, though.
This is insane to me. Why not title the class "how to use AI?" Why not make this the title of every class?
I see no future in education other than making homework completely ungraded, and putting 100% of the grade into airgapped exams. Sure, the pen and paper CS exam isn't reflective of a real world situation, but the universities need some way to objectively measure understanding once the pupil has been disconnected from his oracle.
The bigger problem is that kids can just hand write an essay that an AI gave them.
I teach at a university, and I just scale my homework assignments until they reach or slightly exceed the amount of work I expect a student to be able to do with AI. Before, I would give them a problem set. Next semester, homeworks will be more like entire projects.
Punishing honest students by ensuring that they will fail unless they cheat is an absurd solution. In school I went to great lengths to do my work well and on my own. It was disheartening to see other students openly cheat and do well, but at least I knew that I was performing well on my own merits.
Under your system, I would have been actively punished for not cheating. What's the point of developing a cure that's worse than the disease?
> I just scale my homework assignments until they reach or slightly exceed the amount of work I expect a student to be able to do with AI.
1. Absurd. The measurement should be learning not “work”. My students move rocks with a forklift… so I give them more rocks to move?
2. From the university I’m looking for intellectual leadership. Professors thinking critically about what learning means and how to discuss it with students. The potential is there, but let’s not walk like zombies unthinking into a future where the disappearance of ChatGPT 8.5 renders thousands of people unable to meet the basic requirements of their jobs. Or its appearance renders them unemployed.
I understand your intentions but I'm skeptical even this solves the problem.
Realistically I think we're just moving away from knowledge-work and efforts to resuscitate it are just varying levels of triage for a bleeding limb.
In the actual workplace with people making hundreds of thousands a year (the top echelon of what your class is trying to prepare students for) I'm not seeing output increase with AI tools so clearly effort is just decreasing for the same amount of output.
Perhaps your class is just supposed to be easier now and that's okay.
Then you're discriminating against students not using AI. I know for sure I would be depressed to be asked for a huge pile of work I'd do myself while others just cheat and have free time to work on interesting projects, see friends, whatever.
This claim is absurd and the comment is unserious.
Would the teacher then grade the massive workload with AI also? There isn't really a limit to how much output an AI can generate and the more someone demands, the less likely it is that the final result will be looked at in any depth by a human.
Let students use AI as they will when learning, but verify without allowing them to use it -- in class -- otherwise you have no way of knowing what they know. Job interviewers face the same problem.
I think AI is the perfect final ingredient to ruin the higher education system, which is already in ruins (at least over here in Finland).
Even before AI, our governments have long wanted more grads to make statistics look good and to suppress wages, but don't want to pay for it. So what you get are more students, lower quality of education, and lower standards to make students graduate faster. Thanks to AI, now students don't have to really meet even those low standards to pass the courses. What is left is just a huge waste of young people's time and taxpayers' money.
There are very few degrees I'm going to recommend to my children. Most just don't provide good value for one's time anymore.
AI for classical education can be an issue, but AI for inverted classes is perfect.
Going to school to listen to a teacher for hours and take notes, sitting in a group of peers to whom you are not allowed to speak, and then going home to do some homework on your own, this whole concept is stupid and deserves to die.
Learning lessons is the activity you should do within the comfort of your home, with the help of everything you can get, including books, AIs, YouTube videos, or anything that floats your boat. Working and practicing, on the other hand, are social activities that benefit a lot from interacting with teachers and other students, and deserve to be done collectively at school.
For inverted classes, AI is no problem at all; on the contrary, it is very helpful.
The AI tools should be helping more than hurting. But take my example: I am in a three-year litigation with my soon-to-be ex-wife. She recently fired her attorneys and for two weeks used ChatGPT to write very well-worded, very strong, and very logically appealing motions, practically destroying my attorney on multiple occasions; he had to work overtime, costing me an extra $80,000 in litigation costs. And when we finally got in front of the judge, the ex could not string two logical sentences together. The paper can defend itself on its face, but it also turned out that not a single citation she cited had anything to do with the case at hand, which ChatGPT is known for in legal circles. She admitted using the tool and only got a verbal reprimand. The judge said the majority of that "work" was legal and that she could not stop her from exercising her First Amendment right: be it written by AI, she still had to form the questions, edit the responses, etc. And I wasn't able to recover a single dime, since on its face her motions did make sense, although the judge denied the majority of her ridiculous pleadings.
It's really frightening! It's like handing over the smartest brain possible to someone who is dumb, but also giving them a very simple GUI that they can actually operate, asking good-enough questions/prompts to get smart answers. Once the public at large figures this out, I can only imagine courts being flooded with all kinds of absurd pleadings. Being a judge in the near future will most likely be the least-wanted job.
A good start for this debate would be to reconsider the term "AI", perhaps choosing a term that's more intuitive, like "automation" or "robot assistant". It's obvious that learning to automate a task is no way to learn how to do it yourself. Nor is asking a robot to do it for you.
Students need to understand that learning to write requires the mastery of multiple distinct cognitive and organizational skills, only the last of which is to generate text that doesn't sound stupid.
Each of writing's component tasks must be understood and explicitly addressed by the student, to wit: (1) choosing a topic to argue, and the component points to make a narrative, (2) outlining the research questions needed to answer each point, and finally, (3) choosing ONLY the relevant points that are necessary AND sufficient to the argument AND based on referenced facts, and that ONLY THEN can be threaded into a coherent logical narrative exposition that makes the intended argument and that leads to the desired conclusion.
Only then has the student actually mastered the craft of writing an essay. If they are not held responsible for implementing each and every one of these steps in the final product, they have NOT learned how to write. Their robot did. That essay is a FAIL because the robot has earned the grade; not they. They just came along for the ride, like ballast in a sailing ship.
Quite relevant here (maybe not where Universities are directly concerned) is the history of Luddites breaking automated textile equipment to protest their high skilled jobs disappearing in favour of much less skilled jobs of machine operators.
Not sure I agree with either/or. In person assessments are still pretty robust. I think an ideal university will teach both with a clear division between them (e.g. whether a particular assessment or module allows AI). What I'm currently struggling with is how to design an assessment in which the student is allowed to use AI - how do I actually assess it? Where should the bar actually be? Can it be relative to peers? Does this reward students willing to pay for more advanced AI?
The author is teaching a skill an LLM can do well enough to pass his exams.
Is learning English composition in the literary sense now worth what it costs to learn it at a university? That's a very real question now.
Not sure this is the provocative question you think it is. Were you educated in a university? Do you like being able to write English well? Would you rather that neither be true about you?
Yes, that's what we did and are still doing. Most grade schools don't allow calculators in basic arithmetic classes. Colleges don't integrate WolframAlpha into Calculus 101 exams. Etc.
I want my math graduates to be skilled at using CAS systems. Yes, even in Calculus 1.
The lack of computer access for teaching math, a subject objectively supercharged by computation, is a massive disservice to millions of individuals who could have used those CAS systems.
I don't want my engineers solving equations by hand. I especially don't want anyone who claims to be a "statistician" to not be skilled in Python (or historically, R)
I was allowed to use calculators during my A-level Math/Physics/Chem exams, but knowing what to punch in was half the battle. Hell, they even give you most of the formulae on the very first page of the exam sheet, but again, application of that knowledge is the hard part.
Point being, the fundamentals matter. I can't do mental arithmetic very well these days because it's been years since I've practiced, but I know how it works in the first place and can do it if need be. How is a kid learning geometry or calculus supposed to get by and learn to spot the patterns that make sense and the ones that don't without first knowing the fundamentals underlying the more complex concepts?
When I took multivariable calculus in tyool 2007, we were forbidden from using our calculators. "You can use a slide rule or an abacus" and I did indeed bring the former to one exam, but of course the problems were written in such a way that you didn't actually need it.
the difference is using my calculator in real life works ALL the time and is cheap. I can depend on it. And i still need to think about the broader problem even if i have a calculator. The calculator only removes the mindless rote memorization of the steps needed to do arithmetic, etc.
My calculator doesn’t depend on a fancy AI model in the cloud. It’s not randomly rate limited during peak times due to capacity constraints. It’s not expensive to use, whereas the good LLM models are.
Did i mention calculators are actually deterministic? In other words: always reliable. It's difficult to compare the two. One gives a false sense of accomplishment because it's, say, 80% reliable, and the other is always 100% reliable.
Outsourcing a specific task to a deterministic tool you own is clearly not the same thing as outsourcing all of your cognition to a probabilistic tool owned by people with ongoing political and revenue motives that don’t align with your own.
He's implying, rightfully so, that we've repeatedly adapted to various technologies that fundamentally threatened the then status quo of education. We'll do it again.
The best class I took in college was a 3-hour long 5-person discussion group on Metaphysics. It’s a shame that college costs continue to rise, because I still don’t think anything beats small class sizes and active participation.
Ironically I have used ChatGPT in similar ways to have discussions, but it still isn’t quite the same thing as having real people to bounce ideas off of.
It's not that hard to save remote education accreditation. You just need a test pod.
Take one of those soundproofed office pods, something like what https://framery.com/en/ sells. Stick a computer in it, and a couple of cameras as well. The OS only lets you open what you want to open on it. Have the AI watch the student in real time, and flag any potential cheating behaviors, like how modern AI video baby monitors watch for unsafe behaviors in the crib.
If a $2-3000 pod sounds too expensive for you over the course of your child's education, I'm sure remote schoolers can find ways to rent pods at much cheaper scale, like a gym subscription model. If the classes you take are primarily exam-based anyway you might be able to get away with visiting it once a week or less.
I'm surprised nobody ever brings up this idea. It's obvious you have to fight fire with fire here, unless you want to 10x the workload of any teacher who honestly cares about cheating.
There's a few comments here about how AI will revolutionize learning because it's personalized or lets users explore or whatever. That's fundamentally missing the point. College students who are using AI aren't using it to learn better, they're using it to learn _less_. The point of writing an essay isn't the essay itself, it's the process of writing the essay: research, organization, writing, etc. The point of doing math problems isn't to get the answer, it's to _do the work_ to find the answer. If you let AI do that, you're not learning better, you're learning worse.
Now, granted, AI can help with things students are passionate about. If you want to do gamedev you might be able to get an AI to walk you through making a game in Unity or Godot. But societally we've decided that school should be about instilling a wide variety of base knowledge that students may not care about: history, writing, calculus. The idea is that you don't know what you're going to need in your life, and it's best to have a broad foundation so that if you run into something that needs it you'll at least know where to start. 99% of the time developing CRUD apps you're not going to need to know that retrieving an item from an array is O(n), but when some sales manager goes in and adds 2 million items to the storefront and now loading a page takes 12 seconds and you can't remove all that junk because it's for an important sales meeting 30 minutes from now, it's helpful to know that you might be able to replace it with a hashmap that's O(1) instead. AI's fine for learning things you want to learn, but you _need_ to learn more than just what you _want_ to learn. If you passed your Data Structures and Algorithms class by copy/pasting all the homework questions into ChatGPT, are you going to remember what big-O notation even means in 5 years?
I'm kind of happy that I did my maths courses just before LLMs became available. The math homework was the only thing in my CS studies where I sometimes sat 6+ hours on the weekly exercises, and I always allocated a day for them. I sometimes felt really tempted to look stuff up, and rarely found an answer on the Metroid Mathplanet forums. But it's really hard to Google math exercises, and if the teachers are motivated enough to write new, slightly altered questions each year, they are practically impossible to Google. With LLMs I'm sure I would have looked up a lot more. In the end, getting 90% of the points and really struggling for it was rewarding and taught me a lot, although I'll probably never need these skills.
Schools need to re-think what the purpose of essays was in the first place and re-invent homework to suit the existence of LLMs.
If it's to understand the material, then skip the essay writing part and have them do a traditional test. If it's to be able to write, they probably don't need that skill anymore so skip the essay writing. If it's to get used to researching on their own, find a way to have them do that which doesn't work with LLMs. Maybe very high accuracy is required (a weak point for LLMs), or the output is not an LLM-friendly form, or it's actually difficult to do so the students have to be better than LLMs.
> "If it's to be able to write, they probably don't need that skill anymore..."
Any person who can't write coherently and in a well organized way isn't going to be able to prompt a LLM effectively either. Writing skills become _more_ important in the age of LLMs, not less.
Writing well requires both having organized thinking and writing skills to carry the reader's thinking and feelings along where you want them to go. You can put your organized thinking into an LLM, perhaps as bullet points or dense explanations and it can do the writing skills part. You just need to be able to read well enough to evaluate the output, which is a lot easier.
Basically it comes to this: a sufficiently large proportion of a student's grade must come from work impossible to generate with AI, e.g. in-person testing.
Unfortunately, 18-year-olds generally can't be trusted to go a whole semester without succumbing to the siren call of easy GenAI A's. So even if you tell them that the final will be in-person, some significant chunk of them will still ChatGPT their way through and bomb the final.
Therefore, professors will probably have to have more frequent in-person tests so that students get immediate feedback that they're gonna fail if they don't actually learn it.
Literally this. The education system is lazy and tests people only every 30 days, with a test or midterm. This is the system's fault. Quiz every day. Catch where people are struggling, early. The quiz can be on their phones and let you know when they switch apps. Just have them close their laptops, take out their phones, scan QR codes from the screen in front, or pasted on a wall, and then 5 min quiz on their phones. That's what I did.
>Unfortunately, 18-year-olds generally can't be trusted to go a whole semester without succumbing to the siren call of easy GenAI A's. So even if you tell them that the final will be in-person, some significant chunk of them will still ChatGPT their way through and bomb the final.
I really think we need these policies to be developed by the opposite of misanthropists.
I wonder if culture has gone wrong somewhere, such that children or students simply cannot be failed anymore. Or sometimes even given less-than-perfect grades...
Maybe we should go back to times when failing students was seen more as the fault of the student than of the system. At least when the majority of students pass and there is no proven fault by the faculty.
I've found LLMs to often be a time-suck rather than supercharge my own learning. A huge part of thinking is reconsidering your initial assumptions when you start to struggle in research, mathematical problem solving, programming, whatever it may be. AI makes it really easy to go down a rabbit hole and spend hours filling in details to a question or topic that wasn't quite right to begin with.
Basically analog thinking is still critical, and schools need to teach it. I have no issues with classrooms bringing back the blue exam books and evaluating learning quality that way.
AI definitely makes it easier for students to finish their assignments, but that's part of the problem. It's getting harder to tell whether they actually understand anything. What's more worrying is how fast they're losing the habit of thinking for themselves.
And it’s not just in school. I see the same thing at work. People rely on AI tools so much, they stop checking if they even understand what they’re doing.
It’s subtle, but over time, that effort to think just starts to fade.
> An infinitely patient digital tutor that can tackle any question… You might feel like you are learning when querying a chatbot, but those intellectual gains are often illusory.
I get the shade here (kind of?) and have seen both sides in my life, but isn’t having a tutor exactly what you need to learn?
IMO, using it as an information butler is leagues different from a digital tutor. That’s the key— don’t bring forklifts to the gym lol
if I were teaching english today, i would ask students to write essays taking the positions that an AI is not allowed to. steelman something appalling. stand up in class and debate like your life or grade depends on it and fail anyone who doesn't, and if that excludes people, maybe they don't belong in a university.
in everything young people actually like, they train, spar, practice, compete, jam, scrimmage, solve, build, etc. the pedagogy needs to adapt and reframing it in these terms will help. calling it homework is the source of a flawed mental model that problematizes the work instead of incentivising it, and now that people have a tool to solve the problem, they're applying their intelligence to the problem.
arguably there's no there there for the assignments either, especially for a required english credit. the institution itself is a transaction that gets them a ticket to an administrative job. what's the homework assignment going to get them that they value? well-roundedness, polish, acculturation, insight, sensitivity, taste? these are not valuable or differentiating to kids in elite institutions who know they are competing globally for jobs that are 95% concrete political maneuvering, and most of them (especially in stem) probably think the class signifiers that english classes yield are essentially corrupt anyway.
maybe it's schadenfreude and an old class chip on my part, but what are they going to do, engage in the discourse and become public intellectuals? argue about rimbaud and voltaire over coffee, cigarettes and jazz? Some of them have higher follower counts than there were readers of the novels or articles being taught in their classes. More people read their tweets every day than have ever read a book by Chiang. AI isn't the problem, it's a forcing function and a solution. Instructors should reflect on what their institutions have really become.
In Roman times, teaching focused on wrestling to prepare young people for life. Now, in the AI age, what to teach, and why, have once again become major questions, especially when AI can pass the bar exam and a Ph.D. is no longer a significant achievement. Critical thinking and life experiences could be the target, but would they do it?
Use AI to determine potential essay topics that are as close to 'AI-proof' as possible.
Here is an example prompt:
"Describe examples of possible high school essay topics where students cannot use AI engines such as perplexity or ChatGPT to help complete the assignment. In other words - AI-proof topics, assignments or projects"
For the vast majority who enroll in higher education: because they want a job. They need a job.
The degree is the key that unlocks the door to a job. Not the knowledge itself, but the actual physical diploma.
And it REALLY, REALLY doesn't help that there are so many jobs out there that could be done just fine with a HS diploma. But because reasons, you now need a college degree for that job.
The problem isn't new. For decades people have bought fake degrees, hired people to do their work, even hired people to impersonate them.
> Perhaps we should reconsider the purpose of teaching. If one does not want to learn, why are we teaching them?
Certainly there's something to be said for reconsidering much of the purpose (and mechanisms) of post-secondary education, but we often 'force' children and young adults to do things they don't want to do for their own good. I think it's better we teach our children the importance of learning - the lack of which is what results in, as another commenter puts it, students viewing homework as "something they have to overcome"
Those protestors mostly went to government schools, and were likely radicalized because of their time in them. Being in school doesn't make the hate in your heart go away. It forces you to rub shoulders with the exact kind of people you believe are subhuman - and even gives more ammunition for them to use in their mind when arguments against racism are made to them.
There's a reason why conservatives are so obsessed with school choice, LGBT book bans, etc.
The author teaches a college-level writing class. Are you suggesting that, if you voluntarily take a writing class, it's unreasonable if the professor expects you to do some writing outside of class?
Caveat, I'm just armchair-commenting and I haven't thought much about this.
After kids learn to read and do arithmetic, shouldn't we go back to apprenticeships? The system of standardized teaching and grading seems to be about to collapse, and what's the point of memorizing things when you can carry all that knowledge in your pocket? And, anyway, it doesn't stick until you have to use it for something. Plus, a teacher seems to be insufficient to control all the students in a classroom (but that's nothing new; it amazes me that I was able to learn anything at all in elementary school, with all the mayhem there always was in the classroom).
Okay, I can already see a lot of downsides to this, starting with the fact that I would be an illiterate farmer if some in my family had had a say in my education. But maybe the aggregate outcome would be better than what is coming?
> I want my students to write unassisted because I don’t want to live in a society where people can’t compose a coherent sentence without a bot in the mix.
Kicking against the pricks.
It is understandable that professional educators are struggling with the AI paradigm shift, because it really is disrupting their profession.
But this new reality is also an opportunity to rethink and improve the practice of education.
Take the author's comment above: you can't disagree with the sentiment, but a more nuanced take is that AI tools can also help people to be better communicators, speakers, and writers. (I don't think we've seen the killer apps for this yet, but I'm sure we will soon.)
If you want students to be good at spelling and grammar then do a quick spelling test at the start of each lesson and practice essay writing during school time with no access to computers. (Also, bring back Dictation?)
Long term: yes I believe we're going to see an effect on people's cognition abilities as AI becomes increasingly integrated into our lives. This is something we as a society should grapple with and develop new enlightened policies and teaching methods.
You can't put the genie back in the bottle, so adapt, use AI tools wisely, think deeply about ways to improve education in this new era.
I'm curious to see how the paper-and-pen pivot goes. There's something radical about going analog again in a world that's hurtling toward frictionless everything.
The idea with calculators was that as a tool there are higher level questions that calculators would help you answer. A simple example is that calculators don't solve word problems, but you can use them to do the intermediate computations.
What are the higher level questions that LLMs will help with, but for which humans are absolutely necessary? The concern I have is that this line doesn't exist -- and at the very best it is very fuzzy.
Ironically, this higher level task for humans might be ensuring that the AIs aren't trying to get us (whatever that means, genocide, slavery, etc...).
I think as a culture we've fetishized formal schooling way past its value. I mean, how much of what you "learned" in school do you actually use or remember? I'm not against education, education is very important, but I'm not sure that schooling is really the optimal route to being educated. They're related, but they're not the same.
The reality is, if someone wants to learn something then there's very little need to cheat, and if they don't want to learn the thing but they're required to, the cheating sort of doesn't matter in the end because they won't retain or use it.
Or to put it more simply: you can lead a horse to water, but...
The fetishizing enabled the massive explosion in what's basically a university industrial complex financed off the backs of student loans. To keep growing, the industry needed more suckers... I mean students, to extract student loans from. This meant watering down the material even in technical degrees like engineering, passing kids who should have failed, and lowering admission standards (masked by grade inflation). Many programs are really, really bad now, covering what should be high-school freshman-level material. Criticizing the university system gets you called anti-intellectual and a redneck.
There's a lot of debate around the idea of student loan forgiveness, but nobody is trying to address how the student loan problem got so bad in the first place.
All primary schooling is designed to teach people about everything they can learn. If we don’t, many of them will end up in the coal mines because it’s the only thing they know.
>I think as a culture we've fetishized formal schooling way past its value. I mean, how much of what you "learned" in school do you actually use or remember? I'm not against education, education is very important, but I'm not sure that schooling is really the optimal route to being educated. They're related, but they're not the same.
Yeah, it's absolutely bonkers. I spent 9 months out of school traveling, and the provided homework actually set me ahead of my peers when I returned.
No one's stopped and considered "What is a school for?"
For some people it seems to be mandatory state-sponsored childcare. For others it's about food? Some people tell me it sucks but it's the best way to get kids to socialise?
I feel like if it were an engineering project there would be a formal requirements study, but because it's a social program what we get instead is just a big bucket of feelings and no defined scope.
During my time I have come to view schooling as an adversary. I am considering whether it might be prudent to instruct my now toddler that school is designed to break him, and that his role is actually to achieve in spite of it, and that some of his education will come in opposition to the institution.
In one of the last courses I took during my CS degree, we had one-on-one 10-minute Zoom calls with TAs who would ask a series of random, detailed questions about any line of code in any file of our term project. It was easy to complete if you wrote the code by hand, and I imagine it would have been difficult for students who extensively cheated.
In terms of creative writing, I think we need to accept that any proper assessment will require a short essay to be written in person. Especially at the high school level, there's no reason why a 12th-grade student should be passing English class if they can't write something half-decent in 90 minutes. And it doesn't need to be pen and paper - I'm sure there are ways to lock a Chromebook into some kind of notepad software that lacks writing assistance.
Education should not be thought of as solely a pathway to employment; it's about making sure people are competent enough to interface with most of society and to participate in our broader culture. It's literally an exercise in enlightenment - we want students to have original insights about history, culture, science, and art. It is crucial to produce people who are pleasant to be around and who are interesting to talk to - otherwise, what's the point?
It's honestly encouraging to see an educator thinking about solutions instead of wagging a finger at LLMs and technology and this new generation. Homework in its current form cannot exist AND be beneficial for the students -- educators need to evolve with the technology to work alongside it. The Google Docs idea was smart, but the return to pen and paper in the classroom is great. Good typists will hate it at first, but transcribing ideas more slowly and semi-permanently has its benefits.
Between widespread social media/short form video addiction and GPT for all homework starting in middle school, I think ASI is nearly guaranteed by virtue of the human birth/death process, with no further model improvement required.
I am kinda shocked that the thing being shared on HN, unironically, is an essay on the attraction of the idea of the Butlerian Jihad. Interesting times.
As late as 1984, Danny Dunn shared a place of honor on my bookshelves, along with Encyclopedia Brown.
The long list of titles is interesting and almost leads us to a self-referential thought. These series were often known as "boiler-room novels" because they were basic and formulaic, and it was possible to command a team of entry-level writers to churn them out.
LLMs are here to stay and will change learning for the better (education will be full-scale disrupted 3-5 years from now). An LLM is a self-guided tutor like never before, and 100% amazing, except for when it hallucinates.
I use it [Copilot / GPT / Khanmingo] all the time to figure out new tools and prototype workflows, check code for errors, and learn new stuff including those classes at universities which cost way too much.
If universities feel threatened by AI, cry me a river.
No professor or TA was *EVER* able to explain calculus and differential equations to me, but Khanmingo and ChatGPT can. So the educational establishment can deal with this.
Back in the day, when I first tried college, I simply could not comprehend higher-level math. We had one professor and a couple of TAs, but it was impenetrable for me. They just said to me, "Go to the library and try some different books", or "Try to find some students and discuss the topics". Tried that, but to no avail.
I was a poor math student in HS, but I loved electronics, so that's why I decided to pursue electrical engineering. Seeing that I simply could not handle the math, I dropped out after the first year and started working as an electrician's apprentice.
Some years later, YouTube had really taken off, and I decided to check out some of the math tutors there. Found Khan Academy, and over the course of a week, everything just fell into place. I started from the absolute beginning, and binged/worked myself up to HS pre-calc math. His style of short-form teaching just worked, and he's a phenomenal educator on top.
Spent the summer studying math, and enrolled in college again in the fall. Got A's and B's in all my engineering math classes. If I ever got stuck, or couldn't grok something, I trawled YouTube for math vids/tutors until I found someone who could explain it in a way I could understand.
These days I use LLMs in the way you do, and I sort of view it as an extension of the way I learned things before: an infinite number of tutors.
Of course, one problem is that one doesn't know what one doesn't know. Is the model lying to you? Well, luckily there are many different models, and you can compare to see what they say.
Exactly. I can remember 2-3 teachers in my life that were good, but most were absolutely terrible.
I even remember taking a Philosophy of AI class in 1999, something that should have been interesting and intellectually stimulating to any thinking student, and the professor managed to clear the lecture hall from 300 to 50, before I stopped going too, with his constant self-aggrandizing bullshit.
I had a history teacher in high school who didn't try to hide that he was a teacher so he could travel in the summer, and who then made a large part of the class about his former and upcoming travels.
Most weren't this bad but they just sucked at explaining concepts and ideas.
The whole education system should obviously be rebuilt from the ground up, but it will be decades before we bother with this. Someone above mentioned the Romans teaching wrestling to students. We are those Romans, and we are just going to keep teaching wrestling. I learned to wrestle, my father learned to wrestle, so my kids are going to learn to wrestle, because that is what defines an educated person!
> I think there is a good case to be made for trying to restrict AI use among young people the way we try to restrict smoking, alcohol, gambling, and sex.
I would go further than that, along two axes: it's not just AI and it's not just young people.
An increasing proportion of our economy is following a drug dealer playbook: give people a free intro, get them hooked, then attach your siphon and begin extracting their money. The subscription-model-ization of everything is an obvious example. Another is the "blitzscaling" model of offering unsustainably low prices to drive out competition and/or get people used to using something that they would never use if they had to pay the true cost. More generally, a lot of companies are more focused on hiding costs (environmental, psychological, privacy, etc.) from their customers than on actually improving their products.
Alcohol, gambling, and sex are things that we more or less trust adults to do sensibly and in moderation. Many people can handle that, and there are modest guardrails in place even so (e.g., rules that prevent selling alcohol to drunk people, rules that limit gambling to certain places). I would put many social media and other tech offerings more in the category of dangerous chemicals or prescription drugs or opiates (like the laudanum the article mentions). This would restrict their use, yes, but the more important part is to restrict their production and set high standards for the companies that engage in such businesses.
Basically, you shouldn't be able to show someone --- child or adult --- an infinite scrolling video feed, or give them a GPT-style chatbot, or offer free same-day shipping, without getting some kind of permit. Those things are addictive and should be regulated like drugs.
And the penalties for failing to do everything absolutely squeaky clean should be ruinous. The article mentions one of Facebook's AIs showing CSAM to kids. One misstep on something like that should be the end of the company, with multi-year jail terms for the executives and the venture capitalists who funded the operation. Every wealthy person investing in these kinds of things should live in constant fear that something will go wrong and they will wind up penniless in prison.
As others have already mentioned, I believe that it's mainly the curious and engaged students who will benefit greatly from AI. And for those who cheat or use AI to deceive and end up failing a written exam, well, maybe that's not such a bad thing after all...
Let me just say that I always like these types of conversation on here. Tech dorks and education are an interesting conversation. I'll throw in my 2 cents as a HS CS teacher.
First off, I respect the author of the article for trying pen and paper, but that's just not an option at a lot of places. The learning management systems are often tied in, through auto-grading, with Google Classroom or something similar. Often you'll need to create digital versions of everything to put in management systems like Atlas. There's also school policy to consider, and that's a whole 'nother can of worms. All that aside, though:
The main thing that most people don't have in the forefront of their mind in this conversation is the fact that most students (or adults) don't want to learn. Most people don't want to change. Most students will do anything and everything in their power to avoid those two things. I've often thought about why: maybe to truly learn you need to ignore your ego and accept that there's something you don't know; maybe it's a biological thing and humans are averse to spending calories on mental processes that they don't see as a future benefit – who knows.
This problem runs core to all of modern education (and probably has since the idea of mandatory mass education was called from the pits of hell a few hundred years ago). LLMs have really just brought us as a society to a place where it can no longer be ignored, because students no longer have the need to do what they see as busy work. Sadly, they don't inherently understand how writing essays on oppressed children hiding in attics more than half a century ago helps them in their modern TikTok-filled lives.
The other issue is that, for example, in the schools I've worked at since the advent of LLMs, many teachers and most of the admin take this bright and cheery approach to LLMs. They say things like, "The students need to be shown how to do it right," or "Help the students learn from ChatGPT." The fact that the vast majority of students in high school just don't care escapes them. They feel like it's on the teachers to wield, and to help the students wield, this mighty new weapon in education. But in reality, it's just the same war we've always had between predator and prey (or guard and prisoner), except I fear that in this one only one side will win. The students will learn how to use chat better, the teachers will have nothing to defend against it, and so they will all throw up their hands and start using chat to grade things. Before you know it, the entire education system is just chat grading work submitted by chat, under the guise of "oh, but the student turned it in, so it's theirs."
The only thing LLMs have done, and more than likely will ever do, in education is make it blatantly obvious that students are not empty vessels yearning for a drink from the fountain of knowledge that can only be provided to them by the high and mighty educational institution. Those students do exist, and they will always find a way to learn. I assume many of us here fall into that group, but those of us who do are not the majority.
My students already complain about the garbage chat-created assignments their teachers are giving them. Entire chunks of my current school are using chat to create tests, exams, curriculum, emails, and all other forms of "teacher work". Several teachers, who are smart enough, are already using chat to grade things. The CEO of the school is pushing for every grade (1-12) to have 2 AI classes a week where students are taught how to "properly" use LLMs. It's like watching a train wreck in slow motion.
The only way to maintain mandatory mass education is to accept that no one cares, find a way to remove LLMs from the mix, or switch to Waldorf, homeschooling, or some other system better than mandatory mass education. The wealthy will be able to; the rest will suffer.
There is a tremendous gap between how the Gen X and millennial teachers see and use AI, and how younger people are using it.
Kids use AI like an operating system, seamlessly integrated into their workflows, their thinking, their lives. It's not a tool they pick up and put down; it's the environment they navigate, as natural as air. To them, AI isn't cheating; it's just how you get things done in a world that's always been wired, always been instant. They do not make major life decisions without consulting their systems. They use them like therapists. It is already far more than a Google replacement or a writing tool.
This author's fixation on "desirable difficulty" feels like a sermon from a bygone era, steeped in romanticized notions of struggle as the only path to growth. It's yet another "you can't use a calculator because you won't always have one" - the same tired dogma that once insisted pen-and-paper arithmetic was the pinnacle of intellectual rigor (even after calculators arrived and have, in fact, been with us every day since).
The Butlerian Jihad metaphor is clever but deeply misguided: casting AI as some profane mimicry of the human mind ignores how it's already reshaping cognition, not replacing it.
The author laments students bypassing the grind of traditional learning, but what if that grind isn’t the sacred rite they think it is? What if “desirable difficulty” is just a fetishized relic of an agrarian education system designed to churn out obedient workers, not creative thinkers?
The reality is, AI’s not going away, and clutching pearls about its “grotesque” nature won’t change that. Full stop.
Students aren’t “cheating” when they use it… they’re adapting to a world where information is abundant and synthesis is king. The author’s horror at AI-generated essays misses the point: the problem isn’t the tech, it’s the assignments (and maybe your entire approach).
If a chatbot can ace your rhetorical analysis, maybe the task itself is outdated, testing rote skills instead of real creativity or critical thinking.
Why are we still grading students on formulaic outputs when AI can do that faster?
The classroom should be a lab for experimentation, not a shrine to 19th-century pedagogy, which it most definitely is. I was recently lectured by a teacher about how he tries to make every one of his students a mathematician, and he became enraged when I gently asked him how he's dealing with the disruption to mathematicians as a profession that AI systems are currently causing. There is an adversarial response underneath a lot of teachers' thin veneers of "dealing with the problem of AI" that is just wrong and such a cope.
That obvious projection leads directly to this "adversarial" grading dynamic. The author's chasing a ghost, trying to police AI use with Google Docs surveillance or handwritten assignments. That's not teaching. What it is is standing in the way of civilizational progress because it doesn't fit your ideas. I know there are a lot of passionate teachers out there, and some even get it, but most definitely do not.
Kids will find workarounds, just like they always have, because they’re not the problem; the system is. If students feel compelled to “cheat” with AI, it’s because the stakes (GPAs, scholarships, future prospects) are so punishingly high that efficiency becomes survival.
Instead of vilifying them, why not redesign assessments to reward originality, process, and collaboration over polished products? AI could be a partner in that, not an enemy.
The author's call for a return to pen and paper feels like surrender dressed up as principle, and it's ridiculously out of touch.
It’s not about fostering “humanity” in the classroom; it’s about clinging to a nostalgic ideal of education that never served everyone equally anyway.
Meanwhile, students are already living in the future, where AI is as foundational as electricity.
The real challenge isn’t banning the “likeness bots” but teaching kids how to wield them critically, ethically, and creatively.
Change isn’t coming. It is already here. Resisting it won’t make us more human; it’ll just leave us behind.
ChatGPT is only 2.5 years old. How are kids using AI like it's always been around? I really hope they aren't making major life decisions by consulting chatbots from big tech companies instead of their relatives, teachers, and friends. I'm old enough to recall when social media was viewed as this incredibly positive tech for humanity. How things have changed. One wonders how we'll view the impact of AIs in a few years.
I teach Enterprise Architecture at the graduate level. I would absolutely not mind people using AI as an OS or an information source or a therapist. I would not mind them looking things up in an encyclopedia, so why mind them using AI?
What I do mind is:
- the incredibly generic slop AI generates: "Let's improve communication, make a better strategy, improve culture."
- the unwavering belief in AI. I tell my students why using AI will not give them a good grade. They get a case solved by all the major LLMs, graded, with thorough feedback and a bad grade. I tell them that literally writing anything at all as the answer would not get a much worse grade. And still they go and use AI and get bad grades.
- the incredible intellectual laziness it seems to foster. I criticize TOGAF in my course (let's not get into that) and explicitly state it to be outside of the course material. Repeatedly, in writing and verbally. And what do the students do? They ask an LLM, which inevitably starts referring to TOGAF. And the answer is copied into the case analysis without even an attempt to actually utilize TOGAF or to justify the choice made.
My students actually get worse grades and are worse off in terms of being able to solve actual real-life problems because they use AI. Getting a degree should increase their intellectual capabilities, but people actively choose not to let it, thus wasting their time. And that's what I'm not OK with.
How do you test "real creativity" and "critical thinking" in a way that is both scalable and reliably tells apart those who get it and those who don't?
It's interesting to note that your comment and my comment ended up right at the end, having been downvoted, with no downvoters commenting on why they disagree with your points or mine.
I assume it's because many of the commenters of this post are skewed towards academia, and perhaps view the disruption by AI to the traditional methods of grading student work as a challenge to their profession.
As we have seen many times throughout history, when disruptive technical or demographic changes occur, or a new set of market forces emerges, incumbents often struggle to adapt to the new situation.
Established traditional education is a massive ship to turn around.
Your comments contain much food for thought and deserve to be debated. I agree with you that educators should not be branding students as cheaters. Using AI in an educational context is a rational and natural thing to do, especially for younger students.
> ... AI as some profane mimicry of the human mind ignores how it’s already reshaping cognition, not replacing it.
- Yes, this is such an important point and it's why we need enlightened policy making leading to meaningful education reform.
I do disagree with you about pen-and-paper activities, though - I think incorporating more of them would provide balance and build some important key skills.
No doubt AI is challenging to many areas of society, especially education. I'm not saying it's a wonderful thing that we don't need to worry about, but we do need to think deeply about its impacts and how we can harness its positive strengths and radically improve teaching and learning outcomes. It's not about locking students in exam rooms with high tech surveillance.
With AI, it's disappointing that the prevalent opinions of many educators seem stuck and struggling to adapt.
Decades of research into learning shows that "desirable difficulty" is not, as you put it, "just a fetishized relic of an agrarian education system designed to churn out obedient workers, not creative thinkers." Rather, difficulty means you are encountering things you do not already understand. If you are not facing difficulties then your time is being wasted. The issue is that AI allows people to avoid facing difficulties and thus allows them to waste their time.
You think we will make progress by learning to use AI in certain ways, and that assignments can be crafted to inculcate this. But a moment's acquaintance with people who use AI will show you that there is a huge divide between some uses of AI and others, and that some people use AI in ways which are not creative and so on. Ideally this would prompt you to reflect on what characteristics of people incline them towards using AI in certain ways, and what we can do to promote the characteristics that incline people to use AI in productive and interesting ways, etc. The end result of such an inquiry will be something like what the author of this piece has arrived at, unfortunately. Any assignment you think is immune to lazy AI use is probably not. The only real solution is the adversarial approach the author adopts.
I teach math at a large university (30,000 students) and have also gone “back to the earth”, to pen-and-paper, proctored exams.
Students don’t seem to mind this reversion. The administration, however, doesn’t like this trend. They want all evaluation to be remote-friendly, so that the same course with the same evaluations can be given to students learning in person or enrolled online. Online enrollment is a huge cash cow, and fattening it up is a very high priority. In-person, pen-and-paper assessment threatens their revenue growth model. Anyways, if we have seven sections of Calculus I, and one of these sections is offered online/remote, then none of the seven are allowed any in person assessment. For “fairness”. Seriously.
I think you've identified the main issue here:
LLMs aren't destroying the University or the essay.
LLMs are destroying the cheap University or essay.
Cheap can mean a lot of things, like money or time or distance. But, if Universities want to maintain a standard, then they are going to have to work for it again.
No more 300+ person freshman lectures (where everyone cheated anyways). No more take-home zoom exams. No more professors checked out. No more grad students doing the real teaching.
I guess, I'm advocating for the Oxbridge/St. John's approach with under 10 class sizes where the proctor actually knows you and if you've done the work. And I know, that is not a cheap way to churn out degrees.
>I guess, I'm advocating for the Oxbridge/St. John's approach with under 10 class sizes where the proctor actually knows you and if you've done the work. And I know, that is not a cheap way to churn out degrees.
I could understand US tuition if that were the case. These days, with overworked adjuncts, it's McDonald's at Michelin-star prices.
Believe it or not, 300-person freshman lectures can be done well. They just need a talented instructor who's willing to put in the prep, and good TAs leading sections. And if the university fosters the right culture, the students mostly won't cheat.
But yeah, if the professor is clearly checked out and only interested in his research, and the students are being told that the only purpose of their education is to get a piece of paper to show to potential employers, you'll get a cynical death-spiral.
(I've been on both sides of this, though back when copy-pasting from Wikipedia was the way to cheat.)
Over here in Finland, higher education is state funded, and the funding is allocated to universities mostly based on how many degrees they churn out yearly. Whether the grads actually find employment or know anything is irrelevant.
So, it's pretty hard for universities over here to maintain standards in this GenAI world, when the paying customer only cares about quantity, and not quality. I'm feeling bad for the students, not so much for foolish politicians.
After a short stint as a faculty member at a McU institution, I agree with much of this.
Provide machine problems and homework as exercises for students to learn from, but assign a very low weight to these as part of the overall grade. Butt-in-seat assessments should make up the majority of the course assessment for many courses.
>> (where everyone cheated anyways)
This is depressing. I'm late GenX, I didn't cheat in college (engineering, RPI), nor did my peers. Of course, there was very little writing of essays so that's probably why, not to mention all of our exams were in person paper-and-pencil (and this was 1986-1990, so no phones). Literally impossible to cheat. We did have study groups where people explained the homework to each other, which I guess could be called "cheating", but since we all shared, we tended to oust anyone who didn't bring anything to the table. Is cheating through college a common millennial / gen Z thing?
Cheap "universities" are fine for accreditation. Exams can be administered via in-person proctoring services, which test the bare minimum. The real test would be when students are hired, in the probationary period. While entry-level hires may be unreliable, and even in the best case not help the company much, this is already a problem (perhaps it can be solved by the government or some other outside organization paying the new hire instead of the company, although I haven't thought about it much).
Students can learn for free via online resources, forums, and LLM tutors (the less-trustworthy forums and LLMs should primarily be used to assist in understanding the more-trustworthy online resources). EDIT: students can get hands-on experience via an internship, possibly unpaid.
Real universities should continue to exist for their cutting-edge research and tutoring from very talented people, because that can't be commodified. At least until/if AI reaches expert competence (in not just knowledge but application), but then we don't need jobs either.
There are excellent 1000-student lecture courses and shitty 15-student lecture courses. There are excellent take-home exams and shitty in-class exams. There are excellent grad student teaching assistants and shitty tenured credentialed professors. You can't boil quality down to a checklist.
I think this is where it's going to end up.
The masses get the cheap AI education. The elite get the expensive, small class, analog education. There won't be a middle class of education, as in the current system - too expensive for too little gain.
10 is a small number. There's a middle ground. When I studied, we had lectures for all students, and a similar amount of time in "work groups," as they were called. Those resembled secondary education: one teacher, around 30 students, but the classes were mainly focused on applying the newly acquired knowledge: doing exercises, asking questions, checking homework, etc. Later, I taught such classes for programming 101, and it was perfectly doable. Work group teachers were also responsible for reviewing their students' tests.
But that commercially oriented boards are ruining education, that's a given. That they would stoop to this level is a bit surprising.
Oxbridge supervisions/tutorials are typically two students, and at a push three (rarely) - certainly not anywhere close to ten!
All degrees are basically the same, though, and 95% of the value is signaling; nobody really cares about the education part.
I see that pressure as well. I find that a lot of the problems we have with AI are in fact AI exposing problems in other aspects of our society. In this case, one problem is that the people who do the teaching and know what needs to be learned are the faculty, but the decisions about how to teach are made by administrators. And another problem is that colleges are treating "make money" as a goal. These problems existed before AI, but AI is exacerbating them (and there are many, many more such cases).
I think things are going to have to get a lot worse before they get better. If we're lucky, things will get so bad that we finally fix some shaky foundations that our society has been trying to ignore for decades (or even centuries). If we're not lucky, things will still get that bad but we won't fix them.
Instructors and professors are required to be subject matter experts but many are not required to have a teaching certification or education-related degree.
So they know what students should be taught but I don't know that they necessarily know how any better than the administrators.
I've always found it weird that you need teaching certification to teach basic concepts to kindergartners but not to teach calculus to adults.
I totally agree. I think the neo-liberal university model is the real culprit. Where I live, Universities get money for each student who graduates. This is up to 100k euros for a new doctorate. This means that the University and its admin want as many students to graduate as possible. The (BA&MA) students also want to graduate in target time: if they do, they get a huge part of their student loans forgiven.
What has AI done? I teach a BA thesis seminar. Last year, when AI wasn't used as much, around 30% of the students failed to turn in their BA theses. A 30% drop-out rate was normal. This year: only 5% dropped out, while the amount of ChatGPT-generated text has skyrocketed. I think there is a correlation: ChatGPT helps students write their theses, so they're not as likely to drop out.
The University and the admins are probably very happy that so many students are graduating. And some colleagues are seeing an upside to this: if more graduate, the University gets more money, which means fewer cuts to teaching budgets, which means that the teachers can actually do their jobs and improve their courses for those students who are actually there to learn. But personally, as a teacher, I'm at a loss as to what to do. Some theses had hallucinated sources, some had AI-slop blogs as sources, and the texts are robotic and boring. But should I fail them, out of principle about what the ideal University should be? Nobody else seems to care. Or should I pass them, let them graduate, and reserve my energy for teaching those who are motivated and willing to engage?
In Australia Universities that have remote study have places where people can do proctored exams in large cities. The course is done remotely but the exam, which is often 50%+ of the final grade, is done in a place that has proctored exams as a service.
Can't this be done in the US as well?
The Open University in the UK started in 1969. Their staff have a reputation for good interaction with students, and I have seen very high quality teaching materials produced there. I believe they have always operated on the basis of remote teaching but on-site evaluation. The Open University sounds like an all-round success story and I'm surprised it isn't mentioned more in discussions of remote education.
Variations in this system are in active use in the US as well.
Do you feel it is effective?
It seems to me that there is a massive asymmetry in the war here: proctoring services have tiny incentives to catch cheaters. Cheaters have massive incentives to cheat.
I expect the system will only catch a small fraction of the cheating that occurs.
Where I'm studying, it's proctored online. They have a custom browser and take over your computer while you're doing the exam. Creepy AF, but it saves travelling 1,300 km to sit an exam.
Can you tell us: Is "remote study" a relatively recent phenom in AU -- COVID era, or much older? I am curious to learn more. And, what is the history behind it? Was it created/supported because AU is so vast and many people in a state might not live near a campus?
Also: I think your suggestion is excellent. We may see this happen in the US if AI cheating gets out of control (which it well might).
Not even just large cities. Decent sized towns have them too, usually with local high school teachers or the like acting as proctors.
Proctoring services done well could be valuable, but it’s smaller rural and remote communities that would benefit most. Maybe these services could be offered by local schools, libraries, etc.
nope. too much impact on profit.
> Students don’t seem to mind this reversion.
Those I ask are unanimously horrified that this is the choice they are given. They are devastated that the degree for which they are working hard is becoming worthless, yet they all assert they don't want exams back. Many of them are neurodivergent who do miserably in exam conditions and, in contrast, excel in open tasks that allow them to explore, so my sample is biased, but still.
They don't have a solution. As the main victims, they are just frustrated by the situation, and by the "solutions" thrown at it by folks who aren't personally affected.
It is always interesting to me when people say they are "bad test takers". You mean you are bad at the part where we find out how much you know? Maybe you just don't know the material well enough.
Caveat: I am not ND, so maybe this is a real concern for some, but in my experience the people who said this did not know the material. And the accommodations for tests are abused by rich kids more than they are utilized by those who need them.
I don't think I understand, as a terrible test taker myself.
The solution I use when teaching is to let evaluation primarily depend on some larger demonstration of knowledge. Most often these are CS classes (e.g. Machine Learning), so I don't give much weight to homework and tests, and instead am project-driven. I don't care if they use GPT or not. The learning happens by them doing things.
This is definitely harder in other courses. In my undergrad (physics), our professors frequently gave take-home exams. Open book, open notes, open anything but your friends and classmates. This did require trust, but it was usually pretty obvious when people worked together. They cared more about trying to evaluate and push those of us who cared than about whether we cheated. The exams required multiple days' worth of work, and you can bet every student was coming to office hours (we had much more access during that time, too). The trust and the understanding that effort mattered actually resulted in very little cheating. We felt respected, there was a mutual understanding, and tbh, it created healthy competition among us.
Students cheat because they know they need the grade and that, at the end of the day, they won't actually be evaluated on what they learned, but rather on what arbitrary score they got. Fundamentally, this requires a restructuring, but that's been a long time coming. The cheating literally happens because we just treated Goodhart's Law as a feature instead of a bug. AI is forcing us to contend with metric hacking; it didn't create it.
> Many of them are neurodivergent who do miserably in exam conditions
Isn't this part of life? Learning to excel anyway?
IMO exams should be on the easier side and not require much computing (mainly knowledge, and not unnecessary memorization). They should be a baseline, not a challenge for students who understand the material.
Students are more accurately measured via long, take-home projects, which are complicated enough that they can’t be entirely done by AI.
Unless the class is something that requires quick thinking on the job, in which case there should be “exams” that are live simulations. Ultimately, a student’s GPA should reflect their competence in the career (or possible careers) they’re in college for.
We have an Accessible Testing Center that will administer and proctor exams under very flexible conditions (more time, breaks, quiet/privacy, …) to help students with various forms of neurodivergence. They’re very good and offer a valuable service without placing any significant additional burden on the instructor. Seems to work well, but I don’t have first hand knowledge about how these forms of accommodations are viewed by the neurodivergent student community. They certainly don’t address the problem of allowing « explorer » students to demonstrate their abilities.
> Many of them are neurodivergent who do miserably in exam conditions
I mean, for every neurodivergent person who does miserably in exam conditions you have one that does miserably in homework essays because of absence of clear time boundaries.
>Many of them are neurodivergent
if "many" are "divergent" then... are they really divergent? or are they the new typical?
I think having one huge exam at the end is the problem. An exam and assessment every week would be best.
Less stress at the end of the term, and the student can't leave everything to the last minute, they need to do a little work every week.
In my undergraduate experience, the location of which shall remain nameless, we had ample access to technology, but the professors were fairly hostile to it and insisted on pencil and paper for all technical classes. There were some English or History classes here and there that allowed a laptop for writing essays during an "exam" that was a 3-hour experience with the professor walking around the whole time. Anyway, when I was younger I thought the pencil-and-paper thing was silly. Why would we eschew brand new technology that can make us faster! And now that I'm an adult, I'm so thankful they did that. I have such a firm grasp of the underlying theory and the math precisely because I had to write it down, on my own, from memory. I see what these kids do today and they have been so woefully failed.
Teachers and professors: you can say "no". Your students will thank you in the future.
I have a Software Engineering degree from Harvard Extension and I had to take quite a few exams in physically proctored environments. I could very easily manage in Madrid and London. It is not too hard for either the institution or the student.
I am now doing an Online MSc in CompSci at Georgia Tech. The online evaluation and proctoring is fine. I’ve taken one rather math-heavy course (Simulation) and it worked. I see the program however is struggling with the online evaluation of certain subjects (like Graduate Algorithms).
I see your point that a professor might prefer to have physical evaluation processes. I personally wouldn’t begrudge the institution as long as they gave me options for proctoring (at my own expense even) or the course selection was large enough to pick alternatives.
Professional proctored testing centers exist in many locations around the world now. It's not that complicated to have a couple of people at the front, a method for physically screening test-takers, lockers for personal possessions, computers for test administration, and protocols for checking multiple points of identity for each test taker.
This hybrid model is vastly preferable to "true" remote test taking in which they try to do remote proctoring to the student's home using a camera and other tools.
is it ok for students to submit images of hand-written solutions remotely?
seriously it reminds me of my high school days when a teacher told me i shouldn’t type up my essays because then they couldn’t be sure i actually wrote them.
maybe we will find our way back to live oral exams before long…
Business models rule us all. Have you tested what kind of pushback you'll receive if you happen to flout the remote rule?
Centralization and IT-ification has made flouting difficult. There’s one common course site on the institution’s learning management system for all sections where assignments are distributed and collected via upload dropbox, where grades are tabulated and communicated.
So far, it's still possible to opt out of this coordinated model, and I have been. But I suspect the ability to opt out will soon come under attack (the pretext will be 'uniformity == fairness'). I never used to be an academic freedom maximalist who viewed the notion in the widest sense, but I'm beginning to see my error.
I attended Purdue. Since I graduated, it launched its "Purdue Global" online education. Rankings don't suggest it's happened yet, but I'm worried it will cheapen the brand and devalue my degree.
I remember sitting with the faculty in charge of offering online courses when I visited as an alum back in 2014. They seemed to look at it as a cash cow in their presentation. They were eager to be at the forefront of online CS degrees at the time.
Higher ups say yes to remote learning and no to remote work. Interesting to see this side by side like this.
Remote learning also opens up a lot of opportunities to people that would not otherwise be able to take advantage of them. So it's not _just_ the cash cow that benefits from it.
Yeah, the thing about AI cheating is that it seems inherent not to teaching itself, but to the mechanical, bureaucratic, for-profit thing that teaching and universities have become.
Some US universities do this remotely via proctoring software. They require pencil and paper to be used with a laptop that has a camera. Some do mirror scans, room scans, hand scans, etc. The Georgia Tech OMS CS program used to do this for the math proofs course and algorithms (leet code). It was effective and scalable. However, the proctoring seems overly Orwellian, but I can understand the need due to cheating as well as maintaining high standards for accreditation.
> seems overly Orwellian
Wow.
Maybe we should consider the possibility that this isn't a good idea? Just a bit? No? Just ignore how obviously comparable this is to the most famous dystopian fiction in literary history?
Just wow. If you're willing to do that, I don't know what to tell you.
Stanford requires pen & paper exams for their remote students; the students first need to nominate an exam monitor (a person) who in turn receives and prints the assignments, meets the student at an agreed upon place, the monitor gives them the printed exams and leaves, then collects the exam after allotted time, scans it and sends it back to Stanford.
So just have test centers, and flip the classroom.
I think this is a good approach.
Thanks for sharing this anecdote. It’s easy to forget the revenue / business side of education and that universities are in a hard spot here.
Thank you for not giving in. The slide downhill is so ravenous and will consume so much of our future until the wise intervene.
Why not pay for students to take the pen-and-paper exams at some proctored location, perhaps independent of the university?
Can't we use AI to monitor the students?
I'm not a teacher, but I came here to say the same thing. Pen and paper.
Capitalism and the constant thirst for growth are killing society. Since when did universities care almost solely about revenue and growth?
> Since when did universities care almost solely about revenue and growth?
Since endowments got huge.
With the US government now going after their funding, they may have to start caring even more.
When it was generally accepted by our society that the goal of all work is victory, not success. Capitalism frames everything as a competition, even when collaboration is obviously superior. Copyright makes this an explicit rule.
Hand written essays are inherently ableist. I would be at a massive disadvantage. I grew up during the 60's, but handwriting was always slow and error-prone for me. As soon as I could use a word processor I blossomed.
It's probably not as bad for mathematical derivations. I still do those by hand since they are more like drawing than expression.
> Hand written essays are inherently ableist
So is testing; people who don't have the skills don't do well. Hell, the entire concept of education is ableist towards learning impaired kids. Let's do away with it entirely.
Would you hire someone as a writer who is completely illiterate? Of course that's an extreme edge case, but at some point equality stops and the ability to do the work is actually important.
I was a slow handwriter, too. I always did badly on in-class essay exams because I didn't have time to write all that I knew needed to be said. What saved my grade in those classes was good term papers.
Having had much occasion to consider this issue, I would suggest moving away from the essay format. Most of the typical essay is fluff that serves to provide narrative cohesion. If knowledge of facts and manipulation of principles are what is being evaluated, presentation by bullet points should be sufficient.
> Hand written essays are inherently ableist
Doing anything is inherently based on your ability to do it. Running is inherently ableist. Swimming is ableist. Typing is inherently ableist.
Pointing this out is just a thought-terminating cliche. OK, it's ableist. So?
> As soon as I could use a word processor I blossomed.
You understand this is inherently ableist to people that can't type?
> I still do those by hand since they are more like drawing than expression.
Way to do ableist math.
> Hand written essays are inherently ableist.
yes.
> I would be at a massive disadvantage.
yes.
...but.
how would you propose to filter out able cheaters instead? there's also in person one on one verbal exam, but economics and logistics of that are insanely unfavorable (see also - job interviews.)
I teach computer science / programming, and I don't know what a good AI policy is.
On the one hand, I use AI extensively for my own learning, and it's helping me a lot.
On the other hand, it gets work done quickly and poorly.
Students mistake mandatory assignments for something they have to overcome as effortlessly as possible. Once they're past this hurdle, they can mind their own business again. To them, AI is not a tutor, but a homework solver.
I can't ask them to not use computers.
I can't ask them to write in a language I made the compiler for that doesn't exist anywhere, since I teach at a (pre-university) level where that kind of skill transfer doesn't reliably occur.
So far we do project work and oral exams: Project work because it relies on cooperation and the assignment and evaluation are open-ended: there's no singular task description that can be pasted into an LLM. Oral exams because it becomes obvious how skilled they are, how deep their knowledge is.
But every year a small handful of dum-dums make it all the way to the exam without having connected two dots, and I have to fail them and tell them that the three semesters they have wasted so far, without any teachers calling their bullshit, are a waste of life and won't lead them to a meaningful existence as a professional programmer.
Teaching Linux basics doesn't suffer the same because the exam-preparing exercise is typing things into a terminal, and LLMs still don't generally have API access to terminals.
Maybe providing the IDE online and observing copy-paste is a way forward. I just don't like the tendency that students can't run software on their own computers.
I'm not that old, and yet my university CS courses evaluated people with group projects, and in-person paper exams. We weren't allowed to bring computers or calculators into the exam room (or at least, not any calculators with programming or memory). It was fine.
I don't see why this is so hard, other than the usual intergenerational whining / a heaping pile of student entitlement.
If anything, the classes that required extensive paper-writing for evaluation are the ones that seem to be in trouble to me. I guess we're back to oral exams and blue books for those, but again...worked fine for prior generations.
> and in-person paper exams.
Yup. ~25 years ago competitions / NOI / leet_coding as they call it now were in a proctored room, computers with no internet access, just plain old borland c, a few problems and 3h of typing. All the uni exams were pen & paper. C++ OOP on paper was fun, but iirc the scoring was pretty lax (i.e. minor typos were usually ignored).
> I don't see why this is so hard, other than the usual intergenerational whining / a heaping pile of student entitlement.
You know that grading paper exams is a lot more hassle _for the teachers_?
Your overall point might or might not still stand. I'm just responding to your 'I don't see why this is so hard'. Show some imagination for why other people hold their positions.
(I'm sure there's lots of other factors that come into play that I am not thinking of here.)
Thing is, this hits the scaling problem in education, and it hits it fucking hard.
There’s such a shortfall of teachers globally, and the role is a public good, so it’s constantly underpaid.
And if you are good - why would you teach? You'd get paid better just putting your skills to use directly.
And now we have a tool that makes it impossible to know whether you have taught anyone anything, because they can pass your exams regardless.
I'm not too old either, and at my university, where CS was my major, we did group projects and in-person paper exams as well.
We wrote C++ on paper for some questions and were graded on it. Of course, the tutors were lenient on the syntax; they cared about the algorithms and the data structures, not so much the code. They did test syntax knowledge as well, but more in code-reasoning segments, i.e. questions like "what's the value of a after these two statements?" or "after this loop is run?"
We also had exams in the lab with computers disconnected from the internet. I don't remember the details of the grading but essentially the teaching team was in the room and pretty much scored us then and there.
> Students mistake mandatory assignments for something they have to overcome as effortlessly as possible.
It has been interesting to see this idea propagate throughout online spaces like Hacker News, too. Even before LLMs, the topic of cheating always drew a strangely large number of pro-cheating comments from people arguing that college is useless, a degree is just a piece of paper, knowledge learned in classes is worthless, and therefore cheating is a rational decision.
Meanwhile, whenever I've done hiring or internship screens for college students, it's trivial to see which students are actually learning the material and which ones treat every stage of their academic and professional careers as a game they need to talk their way through while avoiding the hard questions.
I teach computer science / programming, and I know what a good AI policy is: No AI.
(Dramatic. AI is fine for upper-division courses, maybe. Absolutely no use for it in introductory courses.)
Our school converted a computer lab to a programming lab. Computers in the lab have editors/compilers/interpreters and whitelisted documentation, plus an internal server for grading and submission. No internet access otherwise. We've used it for one course so far with good results, and are extending it to more courses in the fall.
An upside: our exams are now auto-graded (professors are happy) and students get to compile/run/test code on exams (students are happy).
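(For the curious: the grading server doesn't need to be fancy. Below is a minimal sketch of how such an auto-grader might work, in Python, assuming submissions are single-file programs checked against expected stdout. The file layout and names here are illustrative, not our actual setup.)

    import subprocess
    import sys
    from pathlib import Path

    # Hypothetical layout: each test case is a pair of files,
    # tests/<name>.in (stdin to feed) and tests/<name>.out (expected stdout).
    TESTS = Path("tests")

    def grade(submission: Path) -> float:
        """Run a student's program against every test case; return the pass fraction."""
        cases = sorted(TESTS.glob("*.in"))
        passed = 0
        for case in cases:
            expected = case.with_suffix(".out").read_text()
            try:
                result = subprocess.run(
                    [sys.executable, str(submission)],
                    input=case.read_text(),
                    capture_output=True, text=True,
                    timeout=5,  # kill infinite loops instead of hanging the grader
                )
                if result.stdout == expected:
                    passed += 1
            except subprocess.TimeoutExpired:
                pass  # a timed-out case simply counts as failed
        return passed / len(cases)

    if __name__ == "__main__":
        print(f"score: {grade(Path(sys.argv[1])):.0%}")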
>Students mistake mandatory assignments for something they have to overcome as effortlessly as possible.
This is the real demon to vanquish. We're approaching course design differently now (a work in progress) to tie coding exams in the lab to the homework, so that solving the homework (worth a pittance of the grade) is direct preparation for the exam (the lion's share of the grade).
> Our school converted a computer lab to a programming lab. Computers in the lab have editors/compilers/interpreters and whitelisted documentation, plus an internal server for grading and submission. No internet access otherwise. We've used it for one course so far with good results, and are extending it to more courses in the fall.
Excellent approach. It requires a big buy-in from the school.
Thanks for suggesting it.
For one kind of assignment, I'm doing something inspired by the game "bashcrawl", where you have to learn Linux commands through an adventure-style game. I'm bundling it in a container and letting students submit their progress via curl commands, so that they pass after having run a certain set of commands. I'm trying to make the levels unskippable by using tarballs. Essentially, if you can break the game instead of beating it honestly, you get a passing grade, too.
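(For concreteness, here's a sketch of what the server side of that could look like, assuming each level, once beaten, reveals a secret token that the student submits with curl. The tokens, port, and JSON shape are invented for illustration; the real assignment may differ.)

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical per-level secrets, revealed only when a level is beaten
    # (e.g., printed by a script unpacked from that level's tarball).
    LEVEL_TOKENS = ["tok-cd-ls", "tok-cat-grep", "tok-tar-pipes"]
    progress = {}  # student id -> number of levels verified so far

    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = json.loads(self.rfile.read(length))
            student, token = body["student"], body["token"]
            level = progress.get(student, 0)
            # Tokens must be submitted in order, so levels can't be skipped.
            if level < len(LEVEL_TOKENS) and token == LEVEL_TOKENS[level]:
                progress[student] = level + 1
            done = progress.get(student, 0) == len(LEVEL_TOKENS)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(json.dumps({"passed": done}).encode())

    HTTPServer(("", 8000), Handler).serve_forever()

A student would then check in after each level with something like curl -d '{"student":"alice","token":"tok-cd-ls"}' localhost:8000 - with the obvious caveat that anyone who digs the token list out of the container can still break the game, which is rather the point.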
>Our school converted a computer lab to a programming lab. Computers in the lab have editors/compilers/interpreters, and whitelist documentation, plus an internal server for grading and submission. No internet access otherwise. We've used it for one course so far with good results, and extending it to more courses in the fall.
As a higher-education (university) IT admin who is responsible for the CS program's computer labs and is also enrolled in this CS program, I would love to hear more about this setup, please & thank you. As recently as last semester, CS professors have been doing pen 'n' paper exams and group projects. This setup sounds great!
Isn't auto-grading cheating by the instructors? Isn't part of their job providing expert feedback by actually reading the code the students have generated and offering suggestions for improvement, even for exams? A good educational program treats exams as learning opportunities, not just evaluations.
So if the professors can cheat and they're happy about having to do less teaching work, thereby giving the students a lower-quality educational experience, why shouldn't the students just get an LLM to write code that passes the auto-grader's checks? Then everyone's happy - the administration is getting the tuition, the professors don't have to grade or give feedback individually, and the students can finish their assignments in half an hour instead of having to stay up all night. Win win win!
2 replies →
Is it in a Faraday cage too, or do you just confiscate their phones? Or do you naively believe they aren't just using AI on their phones?
3 replies →
> Oral exams because it becomes obvious how skilled they are, how deep their knowledge is.
Assuming you have access to a computer lab, have you considered requiring in-class programming exercises, regularly? Those could be a good way of checking actual skills.
> Maybe providing the IDE online and observing copy-paste is a way forward. I just don't like the tendency that students can't run software on their own computers.
And you'll frustrate the handful of students who know what they're doing and want to use a programmer's editor. I know that I wouldn't have wanted to type a large pile of code into a web anything.
> I know that I wouldn't have wanted to type a large pile of code into a web anything.
I might not have liked that, but I sure would have liked to see my useless classmates being forced to learn without cheating.
You can provide vscode, vim and emacs all in some web interface, and those are plenty good enough for those use cases. Choosing the plugin list for each would also be a good bikeshedding exercise for the department.
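For example, code-server (the open-source VS Code-in-the-browser project) can be stood up with a single container, and vim or emacs can be exposed the same way through any web terminal. A sketch, using the project's default image and port:

    # Serve VS Code in the browser on the lab network:
    docker run -it -p 8080:8080 codercom/code-server:latest
    # Students connect to http://<lab-host>:8080 (auth is configurable).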
Even IntelliJ has Gateway.
1 reply →
> But every year a small handful of dum-dums made it all the way to exam without having connected two dots, and I have to fail them and tell them that the three semesters they have wasted so far without any teachers calling their bullshit is a waste of life
Wow.
Yeah, I've had teachers like that, who tell you that you're a "waste of life" and "what are you doing here?" and "you're dumb", so motivational.
I guess this "tough love" attitude helps some people? But I think mostly people believe it works for _other_ people; rarely does anyone think it works when applied to themselves.
Like, imagine the school administration walking up to this teacher and saying "hey dum dum, you're failing too many students and the time you've spent teaching them is a waste of life."
Many teachers seem to think that students go to school/university because they're genuinely interested and motivated. But more often than not, they're there because of societal pressure, because they know they need a degree to have any kind of decent living standard, and because their parents told them to. Yeah, you can call them names, call them lazy or whatever, but that's kinda like pointing at poor people and saying they should invest more.
7 replies →
> I use AI extensively for my own learning, and it's helping me a lot. On the other hand, it gets work done quickly and poorly.
> small handful of dum-dums made it all the way to exam without having connected two dots, and I have to fail them ... won't lead them to a meaningful existence
I don't see a problem, the system is working.
The same group of people that are going to lose their jobs to an LLM aren't getting smarter because of how they are using LLMs.
Ideally the system would encourage those dum-dums to realize they need to change their ways before they're screwed. Unless the system working is that people get screwed and cause problems for the rest of society.
4 replies →
> The same group of people that are going to lose their jobs to an LLM aren't getting smarter because of how they are using LLMs.
Students who use LLMs and professional programmers who use LLMs: I wouldn't say it's necessarily the same group of people.
Sure, their incentives are the same, and they're equally unlikely to maintain jobs in the future.
But students can be told that their approach to become AI secretaries isn't going to pan out. They're not actively sacrificing a career because they're out of options. They can still learn valuable skills, because what they were taught has not been made redundant yet, unlike mediocre programmers who can only just compete with LLM gunk.
Programming with AI is the job now. That’s what you need to be teaching if you want your graduates to get a job programming.
What’s changed is that “some working code” is no longer proof that a student understands the material.
You’re going to need a new way to identify students that understand the material.
There really are two opposite policies at play.
I ran one semester embracing AI, and... I don't know, I don't have enough to compare with, but clearly it leaves a lot of holes in people's understanding. They generate stuff that they don't understand. Maybe it's fine. But they're certainly worse programmers than I was after having spent the same time without LLMs.
1 reply →
You can get one of those card punching machines and have them hand in stacks of cards?
And don't forget to get on their case with accusations of technology use that equate to the Turing test
Grandpa can help with that too
When I was studying games programming, we used an in-house framework developed by the lecturers on top of OGRE.
At the time it was optional, but I get the feeling that if they still use that framework, it just became mandatory, because it has no internet facing documentation.
That said, I imagine they might have chucked it in for Unity before AI hit, in which case they are largely out of luck.
>But every year a small handful of dum-dums made it all the way to exam without having connected two dots, and I have to fail them and tell them that the three semesters they have wasted so far without any teachers calling their bullshit is a waste of life and won't lead them to a meaningful existence as a professional programmer.
This happened to me with my 3D maths class, and I was able to power through a second run. But I am not sure I learned anything super meaningful, other than that I should have been cramming better.
If there is another course where students design their own programming language, maybe you could use the best of the previous year's. That way LLMs are unlikely to be able to (easily) produce correct syntax. Just a thought from someone who teaches in a totally different neck of the mathematical/computational woods.
Modern LLMs can one-shot code in a totally new language, if you provide the language manual. And you have to provide the language manual, because otherwise how can the students learn the language.
I had numerous in person paper exams in CS (2009 - 2013) where we had to not only pseudo code an algorithm from a description, but also do the reverse of saying/describing what a chunk of pseudo code would do.
There you go. Actually that would be a great service, wouldn't it? Having them explain to an LLM what they are doing, out loud, while doing it, online. On a site that you trust to host it.
> Teaching Linux basics doesn't suffer the same because the exam-preparing exercise is typing things into a terminal, and LLMs still don't generally have API access to terminals.
Huh, fighting my way through a Linux CLI is exactly the kind of thing I use ChatGPT for professionally.
I did study it in compsci, but those commands are inherently not memorable.
Yes, LLMs have had API access to terminals for quite a while now. I've been using Windsurf and Claude Code to type terminal commands for me for a long while (and `gh copilot suggest` before that) and couldn't be happier. I still manually review most of them before approving, but I've seen that the chances of the AI getting an advanced incantation right on the first try are much higher than mine, and I haven't yet once had it make a disastrous one, while that's happened to me quite a few times with commands I typed on my own.
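To make that concrete: `gh copilot suggest` is exactly this reviewed loop; you describe the command in plain English, it proposes one, and nothing runs until you confirm. For instance (the suggestion shown is paraphrased):

    # Describe the task; review the proposed command before accepting:
    gh copilot suggest "find files over 100MB modified in the last week"
    # Suggested (paraphrased): find . -type f -size +100M -mtime -7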
2 replies →
> their bullshit is a waste of life and won't lead them to a meaningful existence as a professional programmer
That's where you're wrong. Being a professional programmer is 10% programming, 40% office politics, and 50% project management. If your student managed to get halfway through college without any actual programming skills, they're a perfect candidate, because they clearly have the other 90% of skills needed to be a professional programmer.
> Being a professional programmer is 10% programming, 40% office politics, and 50% project management.
I'd say that really depends on your job.
At smaller companies, your job will likely be 60% programming at a minimum.
Only at ~100 employees do companies fall into lots of meetings and politics.
1 reply →
In my experience, it's 70% programming, 20% office politics, and 10% project management. People who realize late that they're no good at programming, or don't enjoy it, will pivot towards other kinds of work, like project management. But people who think they'll have luck managing people without any grasp of the skill set of the people they manage either need really good people skills, or they're obnoxiously incompetent with both humans and computers.
Do you find that thinking of your students as dum-dums makes you a better teacher?
Neither better nor worse.
Some of my students are naturally talented.
Others achieve great results through hard work.
Some half-assedly make it.
And some don't even try.
Those are the dum-dums.
They just play games and think everything is going to work out without effort.
> it gets work done quickly and poorly
This is only temporary. It will be able to code like anyone in time. The only way around this will be coding in-person, but only in elementary courses. Everyone in business will be using AI to code, so that will be the way in most university courses as well.
IMO no amount of AI should be used during an undergrad education, but I can see how people would react more strongly to its use in these intro-to-programming courses. I don't think there's as much of an issue with using it to churn out some C for an operating systems course or whatever. The main issue with it in programming education is when learning the rudiments of programming IS the point of the course. Same with using it to crank out essays for freshman English courses. These courses are designed to introduce fundamental raw skills that everything else builds on. Someone's ability to write good code isn't as big a deal for classes in OS, algorithms, compilers, ML, etc., since what matters there are the main concepts of those courses.
It already can. I'm flabbergasted that people still haven't figured out how good Gemini 2.5 is.
3 replies →
I’m enrolled in an undergraduate CS program as an experienced (10 year) dev. I find AI incredibly useful as a tutor.
I usually ask it to grade my homework for me before I turn it in. I usually find I didn’t really understand some topic and the AI highlights this and helps set my understanding straight. Without it I would have just continued on with an incorrect understanding of the topic for 2-3 weeks while I wait for the assignment to be graded. As an adult with a job and a family this is incredibly helpful as I do homework at 10pm and all the office hours slots are in the middle of my workday.
I do admit though it is tough figuring out the right amount to struggle on my own before I hit the AI help button. Thankfully I have enough experience and maturity to understand that the struggle is the most important part and I try my best to embrace it. Myself at 18 would definitely not have been using AI responsibly.
When I was in college if AI was available I would have abused it way too much and been much worse off for it.
This is my biggest concern about GenAI in our field. As an experienced dev, I've been around the block enough times to have a good feel for how things should be done, and I can catch when an LLM goes off on a tangent that's a complete rabbit hole. But if this had been available 20 years ago, I would never have learned and become an experienced dev, because I absolutely would have over-relied on an LLM. I worry that 10 years from now, finding a mid-career dev will be like trying to find a COBOL dev today, except COBOL is a lot easier to learn.
I’m wondering what an undergrad CS program is like as an experienced dev, and why you decided to do it? I have been a software developer for 5 years with an EE degree, and as I do more software engineering and less EE, I feel like I am missing some CS concepts that my colleagues have. Is this your situation too, or did you have another reason? And why not a master's?
A mix of feeling I’m “missing” some CS concepts and just general intellectual curiosity.
I am planning on doing a masters but I need some undergrad CS credits to be a qualified candidate. I don’t think I’m going to do the whole undergrad.
Overall my experience has been positive. I’ve really enjoyed Discrete Math and coming to understand how I’ve been using set theory without really understanding it for years. I’m really looking forward to my classes on assembly/computer architecture, operating systems, and networks. They did make me take CS 101-102 as prereqs which was a total waste of time and money, but I think those are the only two mandatory classes with no value to me.
2 replies →
> And why not a master's?
Not GP, but in my experience most MSc programs will require substantial undergrad CS coursework in order to be accepted. There are a few programs designed for those without that background.
1 reply →
I have a friend who is self-medicating untreated adhd with street amphetamines and he talks about it similarly. I can't say with any certainty that either of you is doing anything wrong or even dangerous. But I do think you both are overconfident in your assessment of the risks.
A bit off-topic, but I think AI has the potential to supercharge learning for the students of the future.
Similar to Montessori, LLMs can help students who wander off in various directions.
I remember often being “stuck” on some concept (usually in biology or chemistry), where the teacher would hand-wave something as truth, thus dismissing my request for further depth.
Of course, LLMs in the current educational landscape (homework-heavy) only benefit the students who are truly curious…
My hope is that, with new teaching methods/styles, we can unlock (or just maintain!) the curiosity inherent in every pupil.
(If anyone knows of a tool like this, where an LLM stays on a high-level trajectory of e.g. teaching trigonometry, but allows off-shoots/adventures into other topical nodes, I’d love to know about it!)
> Of course, LLMs in the current educational landscape (homework-heavy) only benefit the students who are truly curious
I think you hit on a major issue: Homework-heavy. What I think would benefit the truly curious is spare time. These things are at odds with one another. Present-day busy work could easily be replaced by occupying kids' attention with continual lessons that require a large quantity of low-quality engagement with the LLM. Or an addictive dopamine reward system that also rewards shallow engagement -- like social media.
I'm 62, and what allowed me to follow my curiosity as a kid was that the school lessons were finite, and easy enough that I could finish them early, leaving me time to do things like play music, read, and learn electronics.
And there's something else I think might be missing, which is effort. For me, music and electronics were not easy. There was no exam, but I could measure my own progress -- either the circuit worked or it didn't. Without some kind of "external reference" I'm not sure that in-depth research through LLMs will result in any true understanding. I'm a physicist, and I've known a lot of people who believe that they understand physics because they read a bunch of popular books about it. "I finally understand quantum mechanics."
> I'm 62, and what allowed me to follow my curiosity as a kid was that the school lessons were finite, and easy enough that I could finish them early, leaving me time to do things like play music, read, and learn electronics.
I see both sides of this. When I was a teenager, I went to a pretty bad middle school where there were fights every day, and I wasn’t learning anything from the easy homework. On the upside, I had tons of free time to teach myself how to make websites and get into all kinds of trouble botting my favorite online games.
My learning always hit a wall though because I wasn’t able to learn programming on my own. I eventually asked my parents to send me to a school that had a lot more structure (and a lot more homework), and then I properly learned math and logic and programming from first principles. The upside: I could code. The downside: there was no free time to apply this knowledge to anything fun
>I'm 62, and what allowed me to follow my curiosity as a kid was that the school lessons were finite, and easy enough that I could finish them early, leaving me time to do things like play music, read, and learn electronics.
Yeah, I feel like teachers are going to use LLMs as an excuse to push more of the burden of schooling onto their pupils' home life somehow, like increasing homework loads to compensate.
Spare time, haha. Most people nowadays have a hard time sitting with dead time. The habitual checking of socials and feeds has killed mind-wandering time. People feel uncomfortable, or consider life boring, without the device-induced dopamine fix. Corporations have got us by the balls.
The last thing I need when researching a hard problem is an interlocutor who might lie to me, make up convincing citations to nowhere, and tell me more or less what I want to hear.
Still better than the typical classroom experience. And you can always ask again, there's no need to avoid offending the person who has a lot of power over you.
15 replies →
The longer I go without seeing cases of AI supercharging learning, the more suspicious I get that it just won’t. And no, self-reports that it makes internet denizens feel super educated don’t count.
Wasn't this the promise of MOOCs in the 2010s?
The problem is that many students come to university unequipped with the discipline it takes to actually study. Teaching students how to effectively learn is a side-effect of university education.
Yes, I think curiosity dies well before university for most students.
The specific examples I recall most vividly were from 4th grade and 7th grade.
> I remember often being “stuck” on some concept (usually in biology and chemistry), where the teacher would hand-wave something as truth, this dismissing my request for further depth.
This resonates with me a lot. I used to dismiss AI as useless hogwash, but have recently done a near total 180 as I realised it's quite useful for exploratory learning.
Not sure about others but a lot of my learning comes from comparison of a concept with other related concepts. Reading definitions off a page usually doesn't do it for me. I really need to dig to the heart of my understanding and challenge my assumptions, which is easiest done talking to someone. (You can't usually google "why does X do Y and not Z when ABC" and then spin off from that onto the next train of reasoning).
Hence ChatGPT is surprisingly useful. Even if it's wrong some of the time. With a combination of my baseline knowledge, logic, cross referencing, and experimentation, it becomes useful enough to advance my understanding. I'm not asking ChatGPT to solve my problem, more like I'm getting it to bounce off my thoughts until I discover a direction where I can solve my problem.
Indeed. I never really used AI until recently but now I use it sometimes as a smarter search engine that can give me abstracts.
E.g., it's easy to ask Copilot: "Can you give me a list of free, open-source MQTT brokers, with some statistics in the form of a table?"
And Copilot (or any other AI) does this quite nicely. This is not something you can ask a traditional search engine.
Of course, you do need to know enough of the underlying material, and double-check the output you get, for when the AI is hallucinating.
I am building such an AI tutoring experience, focusing on a Socratic style with product support for forking conversations onto tangents. Happy to add you to the waitlist, will probably publish an MVP in a few weeks.
Do you have capacity for more developers? I’ve been wanting to help make this for a long time
I haven’t personally tried it, but the high-level demos of “khanmigo” created by khan academy seem really promising. I’ll always have a special place in my heart (and brain) for the work of Sal Khan and the folks at khan academy.
Yeah, this is a good point: just adjust coursework from multiple-choice tests and fill-in-the-blank homework to larger-scale projects.
Putting together a project with AI help closely mimics what real work will be like, and if the teacher is good, students will learn far more than just being able to spout information from memory.
I've always thought that the education system was broken and next to worthless. I've never felt that teachers ever tried to _teach_ me anything, certainly not how to think. In fact, I saw most attempts at thought squashed because they didn't fit neatly into the syllabus (and so couldn't be graded).
The fact that AI can do your homework should tell you how much your homework is worth. Teaching and learning are collaborative exercises.
> The fact that AI can do your homework should tell you how much your homework is worth.
Homework is there to help you practise and progress, and to find the areas where you need more help and more practice. It is collaborative: it's you, your fellow students, and your teachers/professors.
I'm sorry that you had bad teachers, or had needs that weren't being met by the education system. That is something that should be addressed. I just don't think it's reasonable to completely dismiss a system that works for the majority. Being mad at the education system isn't really a good reason to say "AI/computers can do all these things, so why bother practising them?"
Schools should teach kids to think, but if the kids can't read or reasonably do basic math, then expecting them to have independent critical thinking seems a long way off. I don't know about you, but one of the clear lessons of word problems in school was learning to reason about numbers and results: is it reasonable that a bridge spans 43,000 km? If not, you probably made a mistake somewhere in your calculations.
These conversations are always eye-opening for the number of people who don’t understand homework. You’re exactly right that it’s practice. The test is the test (obviously) and the homework is practice with a feedback loop (the grade).
Giving people credit for homework helps because it gives students a chance to earn points outside of high pressure test times and it also encourages people to do the homework. A lot of people need the latter.
My friends who teach university classes have experimented with grading structures where homework is optional and only exam scores count. Inevitably, a lot of the class fails the exams because they didn’t do any practice on their own. They come begging for opportunities to make it up. So then they circle back to making the homework required and graded as a way to get the students to practice.
ChatGPT short circuits this once again. Students ChatGPT their homework then fail the first exam. This time there is little to do, other than let those students learn the consequences of their actions.
3 replies →
> The fact that AI can do your homework should tell you how much your homework is worth.
A lot of people who say this kind of thing have, frankly, a very shallow view of what homework is. A lot of homework can be easily done by AI, or by a calculator, or by Wikipedia, or by looking up the textbook. That doesn't invalidate it as homework at all. We're trying to scaffold skills in your brain. It also didn't invalidate it as assessment in the past, because (eg) small kids don't have calculators, and (eg) kids who learn to look up the textbook are learning multiple skills in addition to the knowledge they're looking up. But things have changed now.
Completely agree - I always thought the framing of "exercises" is the right one, the point is that your brain grows by doing. It's been possible for a long time to e.g. google a similar algebra problem and find a very relevant math stackexchange post, doesn't mean the exercises were useless.
"The fact that forklift truck can lift over 500kg should tell you how worthwhile it is for me to go to a gym and lift 100kg." - complete non-sequitur.
1 reply →
> A lot of homework can be easily done by AI
Then maybe the homework assignment has been poorly chosen. I like how the article's author has decided to focus on the process and not the product and I think that's probably a good move.
I remember one of my kids' math teachers talking about wanting to switch to an inverted classroom. The kids would be asked to read some part of their textbook as homework, and then they would work through exercise sheets in class. To me, that seemed like a better way to teach math.
> But things have changed now.
Yep. Students are using AIs to do their homework and teachers are using AIs to grade.
Yep: making time to sit down and do homework, learning to plan the doing of it, forming good habits, knowing how to look things up, whether in a book index, on Wikipedia, by searching, or by asking an AI. The expectation is still that some kind of text needs to be found, then read and digested.
> The fact that AI can do your homework should tell you how much
you still have to learn. The goal of learning is not to do a job. It's to enrich you, broaden your mind, and it takes work on your part.
By similar reasoning, you could argue that you can take a car anywhere, or have everything delivered to your doorstep, so why should my child learn to walk?
Let me rephrase their point, then:
The fact that AI can replace the work that you are measured on should tell you something about the measurement itself.
The goal of learning should be to enrich the learner. Instead, the goal of learning is to pass the measurement. Success has been quietly replaced with victory. Now LLMs are here to call that bluff.
4 replies →
As a student, you can make "getting the diploma" the only goal, and so it rests entirely on the educators and the institution to ensure that the only way you can do that is by learning the material and becoming competent in its applications.
However, you can instead recognize the difficulty and time that this would require on the part of the educator, and therefore the expense to the student, and you can recognize that your goal is not just obtaining a piece of paper but actually learning a skill. With this mindset, it makes sense to take the initiative and treat the homework as an opportunity to learn and practice. It's one of those things that's worth as much as you put into it. Of course, one can use judgement to decide which homework is worth spending time on to learn the material, and which can be safely sailed through with minimum effort.
Having a skilled teacher that you can really collaborate with and who can spend the time to evaluate your skills in a personal way is of course going to lead to better learning outcomes than the traditional education system. It will also be far more expensive. Although, AI is offering something somewhat akin to this experience at a much lower price, to those who are able to moderate their usage so that they are learning from the AI instead of just offloading tasks to it.
Homework isn't about the homework itself; it's about teaching you to learn, and it's evidence that you have learned and can learn. Yes, you can have an AI do it, just as you can have someone else do it, but that doesn't teach you anything, and if you earn the paper at the end of it, it's effectively worthless.
Unis should adjust their testing practices so that their paper (and their name) doesn't become worthless. If AI becomes a skill, it should be tested, graded, and certified accordingly. That is, separate the computer science degree from the AI Assisted computer science degree.
Current AI can ace math and programming psets at elite institutions, and yet prior to GPT not only did I learn loads from the homework, I often thoroughly enjoyed it too. I don’t see how you can make that logical leap.
It's a problem of incentives. For many courses, the psets make up a large chunk of your grade. Grades determine your suitability for graduate school, internships, jobs, etc. So if your final goal is one of those, then you are highly incentivized to get high grades, not necessarily to learn the material.
2 replies →
> The fact that AI can do your homework should tell you how much your homework is worth.
I mean... if you removed the substring "home" from that sentence, is it still true in your opinion?
That is, do you believe that because AI can perform some task, that task must not have any value? If there's a difference, help me understand it better please.
No that's not exactly what I meant. It's not that "If AI can do X then X is worthless", but rather in _this_ case it's inappropriate.
Homework should be a part of the collaborative process of learning (as others above have already elaborated on). If teachers are having a problem with AI generated homework being submitted, it shows that the system is broken because they couldn't have been collaborating with students on their learning then.
> Teaching and learning are collaborative exercises.
That's precisely where we went wrong. Capitalism has redefined our entire education system as a competition; just like it does with everything else. The goal is not success, it's victory.
If the trend continues, it seems like most college degrees will be completely worthless.
If students using AI to cheat on homework are graduating with a degree, then it has lost all value as a certificate that the holder has completed some minimum level of education and learning. Institutions that award such degrees will be no different than degree mills of the past.
I’m just grateful my college degree has the year 2011 on it, for what it’s worth.
All of the best professors I had either did not grade homework or weighted it very small and often on a did-you-do-it-at-all basis and did not grade attendance at all. They provided lectures and assignments as a means to learn the material and then graded you based on your performance in proctored exams taken either in class or at the university testing center.
For most subjects at the university level graded homework (and graded attendance) has always struck me as somewhat condescending and coddling. Either it serves to pad out grades for students who aren't truly learning the material or it serves to force adult students to follow specific learning strategies that the professor thinks are best rather than giving them the flexibility they deserve as grown adults.
Give students the flexibility to learn however they think is best and then find ways to measure what they've actually learned in environments where cheating is impossible. Cracking down on cheating at homework assignments is just patching over a teaching strategy that has outgrown its usefulness.
> rather than giving them the flexibility they deserve as grown adults
I have had so many very frustrating conversations with full grown adults in charge of teaching CS. I have no faith at all that students would be able to choose an appropriate method of study.
My issue with the instruction is the very narrow belief in the importance of certain measurable skills. VERY narrow. I won’t go into details, for my own sanity.
6 replies →
> All of the best professors I had either did not grade homework or weighted it very small and often on a did-you-do-it-at-all basis and did not grade attendance at all. They provided lectures and assignments as a means to learn the material and then graded you based on your performance in proctored exams taken either in class or at the university testing center.
I have the opposite experience - the best professors focused on homework and projects and exams were minimal to non-existent. People learn different ways, though, so you might function better having the threat/challenge of an exam, whereas I hated having to put everything together for an hour of stress and anxiety. Exams are artificial and unlike the real world - the point is to solve problems, not to solve problems in weirdly constrained situations.
I don’t disagree, but in most cases degrees are handed out based on grades which in turn are based on homework.
I agree that something will have to change to avert the current trend.
1 reply →
Maybe schools and universities need to stop considering homework to be evidence of subject matter mastery. Grading homework never made sense to me. What are you measuring, really, and how confident are you of that measurement?
You can't put the toothpaste back into the tube. Universities need to accept that AI exists, and adjust their operations accordingly.
Grading homework has two reasonable objectives:
- Provide an incentive for students to do the thing they should be doing anyway.
- Give an opportunity to provide feedback on the assignment.
It is totally useless as an evaluation mechanic, because of course the students that want to can just cheat. It’s usually pretty small, right? IIRC when I did tutoring we only gave like 10-20% for the aggregate homework grade.
11 replies →
How do you suggest we measure whether the students have actually learned the stuff then?
5 replies →
> If the trend continues, it seems like most college degrees will be completely worthless.
I suspect the opposite: Known-good college degrees will become more valuable. The best colleges will institute practices that confirm the material was learned, such as emphasizing in-person testing over at-home assignments.
Cheating has actually been rampant at the university level for a long time, well before LLMs. One of the key differentiators of the better institutions is that they are harder to cheat to completion.
At my local state university (where I have friends on staff) it’s apparently well known among the students that if they pick the right professors and classes they can mostly skate to graduation with enough cheating opportunity to make it an easy ride. The professors who are sticklers about cheating are often avoided or even become the targets of ratings-bombing campaigns
I tried re-enrolling in a STEM major last year, after a higher-education "pause" of 16-ish years. 85% of the class used GPTs to solve homework, and it was quite obvious most of them hadn't even read the assignment.
The immediate effect was the professors' distrust of nearly everyone, and lots of classes felt like some kind of babysitting scheme, which I did not appreciate.
> I’m just grateful my college degree has the year 2011 on it, for what it’s worth.
College students still cram and purge. Nobody forced to sit through OChem remembers their Diels-Alder reaction except the organic chemists.
College degrees probably don't have as much value as we've historically ascribed to them. There's a lot of nostalgia and tradition pent up in them.
The students who do the best typically fill their schedule with extra-curricular projects and learning that isn't dictated by professors and grading curves.
I've been hiring people for the better part of 15 years, and I never considered degrees to be valuable beyond showing that you were able to stick with one project for a sustained period of time. My impression was that unless your degree confers something required for a job where human risk is involved, most degrees are worth very little, and most serious people know that.
To be clear, I think that most college degrees were generally low value (even my own), but still had some value. The current trend will be towards zero value unless something changes.
It doesn't matter if your boss's policy is to require a degree.
> If students using AI to cheat on homework
This is not related to "AI", but I have an amusing story about online cheating.
* I have a nephew who was switched into online college classes at the beginning of the pandemic.
* As soon as they switched to online, the class average on the exams shot up, but my nephew initially refused to cheat.
* Eventually he relented (because everyone else was doing it) and he pasted a multitude of sticky notes on the wall at the periphery of his computer monitor.
* His father walks into his room, looks at all the sticky notes and declares, "You can't do this!!! It'll ruin the wallpaper!"
Won't the jobs they get expect them to use AI?
If you’re hiring humans just to use AI, why even hire humans? Either AI will replace them or employers will realize that they prefer employees who can think. In either case, being a human who specializes in regurgitating AI output seems like a dead end.
7 replies →
Even if you just use AI, you need to know the right prompts to ask.
2 replies →
Would you rather be the guy using AI as a crutch or the guy who actually knows how to do things without it?
TBF this problem doesn’t seem that new to me. I was forced to do my lab work in Vim and C via SSH because the faculty felt that Java IDEs with autocomplete were doing a disservice to learning.
> the faculty felt that Java IDEs with autocomplete were doing a disservice to learning
Sounds laughably naive now, doesn’t it?
At the same time though: if AI based cheating is so effective then is college itself useful?
If calculators are so good at math, is learning math itself useful?
It’s the same old story with a new set of technology.
7 replies →
It was (to some degree), and could still be. The status quo was more effective, relatively speaking, before the AI boom. The status quo appears to be trending towards ineffective, post-AI boom.
So in order to remain useful, the status quo of higher education will probably have to change in order to adapt to the ubiquity of AI, and LLMs currently.
Just because you can cheat at something doesn't mean doing it legitimately isn't useful.
Thinking of that: we have built these expensive machines, with massive investments, to be able to output what we expect college students to output... Wouldn't that tell us that maybe that output has some value, intent, or use? Otherwise we would not have spent those resources...
Just because a machine can do something doesn't mean humans shouldn't be able to do it too. Say, reading a text aloud.
https://innovationlabs.harvard.edu/events/your-network-is-yo...
^ Why many go to Harvard. Very nice club.
Had I known that college degrees from before the 2020s would increase in value, I'd have gotten one. Damn it!
Good. Colleges have strayed far from their purpose.
The credentials were never about having become learned.
I mean, this seems a solved problem: handwritten on-site exams + blackboard-and-chalk on-site oral exams. If this is too costly (is it? many countries manage), make students take them less often.
I teach at a small university. These are some of the measures we take:
- Handwritten midterms and exams.
- The students have to explain how they designed and coded their solutions to programming exercises (we have 15-20 students per class; with more students it becomes more difficult).
- Presentations of complex topics (after which the rest of the students have to comment, ask a question, anything related to the topic).
- Presentation of a one-page handwritten sheet of notes, diagrams, mindmaps, etc., about the content discussed.
- Last-minute changes to the more elaborate programming labs, to be resolved in class (for example, "the client" changed its mind about some requirement or asked for a new feature).
The real problem is that it is a lot more work for the teachers, and not everyone is willing to "think outside the box".
(edit: format)
I hope by 'handwritten' you don't literally mean pen and paper?
Back when I was doing my BSc in Software Engineering, we had a teacher who did her Data Structures and Algorithms exams with pen and paper. On one of them, she gave us four coding problems (each solvable in a short program of ~30 LOC).
We had to write the answers with pen and paper, the whole program in C. The teacher would score it by transcribing the verbatim text into her computer, and if it had a single error (a missed semicolon) or didn't compile for some reason, the whole thing was considered wrong (each question was 25% of the exam score).
I remember I got one wrong (missed semicolon :( ) and got a 75 (on a 1-100 point scale). It's crazy how we were able to do that sort of thing in the old days.
We definitely exercised our attention to detail and concentration muscles with that teacher.
6 replies →
Yes, pen and paper. The approach is to pseudocode the solution, minor syntax errors aren’t punished (and indeed are generally expected anyway). The point is to simply show that you understand and can work through the concepts involved, it’s not being literally compiled.
Writing a small algorithm with pen & paper on programming exams in universities of all sizes was still common when I was in uni in the 2010s and there’s no reason to drop that practice now.
1 reply →
Yes, pen and paper.
One of the most offensive words in the anthropomorphization of LLMs is: hallucinate.
It's not only an anthropomorphism, it's also a euphemism.
A correct interpretation of the word would imply that the LLM has some fantastical vision that it mistakes for reality. What utter bullsh1t.
Let's just use the correct word for this type of output: wrong.
When the LLM generates a sequence of words that may or may not be grammatically correct, but implies a state or conclusion that is not factually correct, let's state what actually happened: the LLM-generated text was WRONG.
It didn't take a trip down Alice's rabbit hole; it just put words together into a stream that implied a piece of information that was incorrect. It was just WRONG.
The euphemistic aspect of using this word is a greater offense than the anthropomorphism, because it paints some cutesy picture of what happened instead of accurately acknowledging that the software generated an incorrect result. It's covering up for the inherent shortcomings of the tech.
When a person hallucinates a dragon coming for them, they are wrong, but we still use a different word to more precisely indicate the class of error.
Not all LLM errors are hallucinations: if an LLM tells me that 3 + 5 is 7, it's just wrong. If it tells me that the source for 3 + 5 being 7 is a seminal paper entitled "On the relative accuracy of summing numbers to a region +-1 from the fourth prime", we would call that a hallucination. In modern parlance, "hallucination" has become a term of art for a particular class of error that LLMs are prone to. (Others have argued that "confabulation" would be more accurate, but it hasn't really caught on.)
It's perfectly normal to repurpose terms and anthropomorphizations to represent aspects of the world or systems that we create. You're welcome to try to introduce other terms that don't include any anthropomorphization, but saying it's "just wrong" conveys less information and isn't as useful.
I think your defense of reusing terms for new phenomenon is fair.
But in this specific case, I would say the reuse of this particular word, to apply to this particular error, is still incorrect.
A person hallucinating is drawing on a many-layered experience of consciousness.
The LLM has nothing of the sort.
It doesn't have a hierarchy of knowledge which it is sorting to determine what is correct and what is not. It doesn't have a "world view" based on a lifetime of that knowledge sorting.
In fact, it doesn't have any representation of knowledge at all. Much less a concept of whether that knowledge is correct or not.
What it has is a model of what words came in what order, in the training set on which it was "trained" (another, and somewhat more accurate, anthropomorphism).
So without anything resembling conscious thought, it's not possible for an LLM to do anything even slightly resembling human hallucination.
As such, when the text generated by an LLM is not factually correct, it's not an hallucination, it's just wrong.
1 reply →
I teach an "advanced" shell scripting course with an exam.
I mark "hallucinations" as "LLM slop" in my grading sheets: when someone gives me a 100-character sed filter that just doesn't work and that we never discussed in class, in the examples, or in the materials; or a made-up API endpoint; or nonsensical file paths that reference non-existent commands.
Slop is an overused term these days, but it sums it up for me. Slop, from a trough, thrown out by an uncaring overseer, to be greedily eaten up by the piggies, who don't care if its full of shit.
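For contrast, an answer that earns full marks tends to be short and checkable. An invented example of the kind of exercise and solution I mean:

    # Exercise: swap the first two fields of each colon-delimited line.
    sed -E 's/^([^:]*):([^:]*):/\2:\1:/' /etc/passwd
    # Short, uses only constructs covered in class, and trivially testable.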
Back in my day, we also called it Garbage In Garbage Out.
Thank fuck for saying this
s/fuck/you/
My essay-writing process for my MBA was:
- decide what I wanted to say about the subject, from the set of opinions I already possess
- search for enough papers that could support that position. Don't read the papers, just scan the abstracts.
- write the essay, scanning the reference papers for the specific bits that best supported the points I wanted to make.
There was zero learning involved in this process. The production of the essay was more about developing journal search skills than absorbing any knowledge about the subject. There are always enough papers to support any given point of view, the trick was finding them.
I don't see how making this process even more efficient by delegating the entire thing to an LLM is affecting any actual education here.
I literally wrote a friend's psychology paper when I had no idea of the subject, and they got an HD for it.
All I did was follow the process you outlined.
My mother used to do it as a service for foreign language students. They would record their lectures, and she would write their papers for them.
Confession. I became disillusioned with a teacher of a subject in school, who I was certain had taken a disliking to me.
I tested it by getting hold of a paper which had received an A from another school on the same subject, copying it verbatim and submitting it for my assignment. I received a low grade.
Despite confirming what I suspected, it somehow still wasn't a good feeling.
1 reply →
To be honest, that's a problem on your part. It is completely possible to write a paper on anything, using the scientific method as your framework.
But the problem is that in many cases, the degrees (like MBA, which I too hold) are merely formalities to move up the corporate ladder, or pivot to something else. You don't get rewarded extra for actually doing science. And, yes, I've done the exact same thing you did, multiple times, in multiple different classes. Because I knew that if what I did just looked and sounded proper enough, I'd get my grade.
To be fair, one of the first things I noticed when entering the "professional" workforce, was that the methodology was the same: Find proof / data that supports your assumptions. And if you can't find any, find something close enough and just interpret / present it in a way that supports your assumptions.
No need for any fancy hypothesis testing, or having to conclude that your assumptions were wrong. It's not like it's your own opinion or assumption anyway, and you don't get rewarded for telling your boss or clients that they're wrong.
Is there even such a thing as the "science of business"? One can form a hypothesis, and then conduct an experiment, but the experimental landscape is so messy that eliminating all other considerations is impossibly hard.
For example, there's a popular theory that the single major factor in startup success is timing - that the market is "ready" for ideas at specific times, and getting that timing right is the key factor in success. But it's impossible to predict when the market timing is right; you only find out in retrospect. How would you ever test this theory? There are so many other factors, half of which are outside the control of the experimenter, that you would have to conduct the experiment hundreds of times (effectively starting and failing at hundreds of startups) to exclude the confounding factors.
I’m sorry for that.
May I ask a different question: what stopped you from engaging with the material itself?
To be honest, I found "the material" irrelevant, mostly. There's vast swathes of papers written about obscure and tiny parts of the overall subject. Any given paper is probably correct, but covering such a tiny part of the subject that spending the time reading all of them is inefficient (if not impossible).
Also, given that the subject in question is "business", and the practice of business was being changed (as it is again now) by the application of new technology, so a lot of what I was reading was only borderline applicable any more.
MBAs are weird. To qualify to do one you need to have years of practical experience managing in actual business. But then all of that knowledge and experience is disregarded, and you're expected to defer to papers written by people who have only ever worked in academia and have no practical experience of what they're studying. I know this is the scientific process, and I respect that. But how applicable is the scientific process to management? Is there even a "science" of management?
So, like all my colleagues, I jumped through the hoops set in front of me as efficiently as possible in order to get the qualification.
I'm not saying it was worthless. I did learn a lot. The class discussions, hearing about other people's experiences, talking about specific problems and situations, this was all good solid learning. But the essays were not.
> - search for enough papers that could support that position. Don't read the papers, just scan the abstracts.
Who wrote those papers? How did they learn to write them? At some point, somebody along the chain had to, you know, produce an actual independent thought.
Interesting question. It seems to me that the entire business academia could be following the method I've outlined and no-one would notice. Or care.
It's not like the hard sciences - no-one is able to refute anything, because you can't conduct experiments. You can always find some evidence for any given hypothesis, as the endless stream of self-help (and often contradictory) business books show.
None of the academics I was reading had actually run a business or had any practical experience of business. They were all lifelong academics who were writing about it from an academic perspective, referencing other academics.
Business is not short of actual independent thought. Verification is the thing it's missing. How does anyone know that the brilliant idea they just had is actually brilliant? The only way is to go and build a business around it and see if it works. Academics don't do that. How is this science then?
The fundamental question that AI raises for me, but nobody seems to answer:
In our competitive, profit-driven world--what is the value of a human being and having human experiences?
AI is neither inevitable nor necessary, but it does seem like the next step in reducing the value of a human life to its 'outputs'.
Someone needs to experience the real world and translate it into LLM training data.
ChatGPT can’t know if the cafe around the corner has banana bread, or how it feels to lose a friend to cancer. It can’t tell you anything unless a human being has experienced it and written it down.
It reminds me of that scene from Good Will Hunting: https://www.imdb.com/de/title/tt0119217/quotes/?item=qt04081...
IMO you're coming at it from the wrong angle.
Capitalism barely concerns itself with humans and whether human experiences exist or not is largely irrelevant for the field. As far as capitalism knows, humans are nothing but a noisy set of knobs that regulate how much profit one can make out of a situation. While tongue-in-cheek, this SMBC comic [1] about the Ultimatum game is an example of the type of paradoxes one gets when looking at life exclusively from an economics perspective.
The question is not "what's the value of a human under capitalism?" but rather "how do we avoid reducing humans to their economic output?". Or in different terms: it is not the blender's job to care about the pain of whatever it's blending, and if you find yourself asking "what's the value of pain in a blender-driven world?" then you are solving the wrong problem.
[1] https://www.smbc-comics.com/?id=3507
I’m similarly worried about businesses all making “rational” decisions to replace their employees with “AI”, wherever they think they can get away with it. (Note that’s not the same thing as wherever “AI” can do the job well!)
But I think one place where this hits a wall is liability and accountability. Lots of low stakes things will be enshittified by “AI” replacements for actual human work. But for things like airline pilots, cancer diagnoses, heart surgery - the cost of mistakes is so large, that humans in the loop are absolutely necessary. If nothing else, at least as an accountability shield. A company that makes a tumor-detector black box wants to be an assistive tool to improve doctor’s “efficiency”, not the actual front line medical care. If the tool makes a mistake, they want no liability. They want all the blame on the doctor for trusting their tool and not double checking its opinion. I hear that’s why a lot of “AI” tools in medicine are actually reducing productivity: double checking an “AI’s” opinion is more work than just thinking and evaluating with your own brain.
The funny thing is, my first thought was "maybe reduced nominal productivity via increased thoroughness is exactly what we need when evaluating potential tumors". It keeps doctors off autopilot and not so focused that radiologists fail to see hidden gorillas in x-rays. And yes, that was a real study.
No, we already have autonomous cars driving around even though they've already killed people.
1 reply →
The "value of a human" - same in this age as it has always been - is our ability to be truly original and to think outside the box. (That's also what makes us actually quite smart, and what makes current cutting-edge "AI" actually quite dumb).
AI is incapable of producing anything that's not basically a statistical average of its inputs. You'll never get an AI Da Vinci, Einstein, Kant, Pythagoras, Tolstoy, Kubrick, Mozart, Gaudi, Buddha, nor (most ironically?) Turing. Just to name a few historical humans whose respective contributions to the world are greater than the sum of the world's respective contributions to them.
Have you tried image generation? It can easily apply high level concepts from one area to another area and produce something that hasn't been done before.
Unless you loosen the meaning of statistical average so much that it ends up including human creativity. At the end of the day it's basically the same process of applying an idea from one field to another.
Most humans are not Da Vinci, Einstein, Kant, etc. Does that make them not valuable as humans?
1 reply →
You should determine your own value if you don't want to be controlled by anyone else.
If you don't want to determine your own value, you're probably no worse off letting an AI do that than anything else. Religion is probably more comfortable, but I'm sure AI and religion will mix before too long.
I use AI to help my high-school age son with his AP Lang class. Crucially, I cleared all of this with his teacher beforehand. The deal was that he would do all his own work, but he'd be able to use AI the help him edit it.
What we do is he first completes an essay by himself, then we put it into a Claude chat window, along with the grading rubric and supporting documents. We instruct Claude not to change his structure or tone, but to edit for repetitive sentences, word count, grammar, and spelling, and to make sure his thesis is sound and carried throughout the piece. He then takes that output and compares it against his original essay paragraph by paragraph, looking at what changes were made and why, and crucially, whether he thinks it's better than what he originally had.
This process is repeated until he arrives at an essay that he's happy with. He spends more time doing things this way than he did when he just rattled off essays and tried to edit on his own. As a result, he's become a much better writer, and it's helped him in his other classes as well. He took the AP test a few weeks ago and I think he's going to pass.
To offer a flip side of the coin, I can't imagine I would have the patience outside of school, to have learned Rust this past year without AI.
Having a personal tutor who I can access at all hours of the day, and who can answer off hand questions I have after musing about something in the shower, is an incredible asset.
At the same time, I can totally believe if I was teleported back to school, it would become a total crutch for me to lean on, if anything just so I don't fall behind the rest of my peers, who are acing all the assignments with AI. It's almost a game theoretic environment where, especially with bell curve scaling, everyone is forced into using AI.
Same here. AI is a great tool for learning, but a challenge for education.
Different times have different teaching tasks, which is the sign of human progress.
Just as, after the invention of computers, methods for doing manual calculations faster could be dropped from the curriculum. Education shifted towards teaching students how to use computational tools effectively, which let them solve more complex problems and work on higher-level concepts that manual calculation couldn't easily address.
In the era of AI, what teachers need to think about is not punitively prohibiting students from using AI, but adjusting the teaching content so that AI helps students master the subject faster and better.
On one hand I tend to agree because these students will also be able to use AI when they actually hit the workplace, but on the other hand it has never happened that the tools we use are better than us at so many tasks.
How long before a centaur team of human + AI is less effective than the AI alone?
As an engineering undergrad, I don't think any online work should count toward the student's grade, unless you're allowed to use the Internet however you want to complete it. There simply isn't any other way of structuring the course that doesn't punish the honest students.
I think we (as in, the whole species) need to reflect on what the purpose of education is and what it should be, because in theory there's no reason why anybody should pay college tuition and then undermine their own mastery of the subject. Obviously 90% of the student body sees it as a ticket to being taken seriously by prospective employers, and the other 10% definitely does not deserve to be taken seriously by prospective employers, because they can't even admit an uncomfortable truth about themselves.
Anyways, this isn't actually useful advice, because no one person can enact change on a societal scale, but I do enjoy standing on this soapbox and yelling at people.
BTW, academic success has never been a fair measure of anything; standards and curriculum vary widely between institutions. I spent four years STRUGGLING to get a 3.2 GPA in high school, then when I got to undergrad we had to take a "math placement exam" that was just basic algebra, and I only had difficulty with one or two problems, but I knew several kids with >= 4.0 GPAs who had to take remedial algebra because they failed it.
But somehow there's always massive pushback against standardized testing even when they let you take it over and over and over again until you get the grade you wanted (SAT).
You mean the 10% who really want to learn should give up and embrace the degree mill merry-go-round game?
I’m as cynical as they come, but even that’s a bit too much for me.
I was actually trying to accuse the 10% of lying to themselves on a subconscious level, because the portion of undergraduates who actually came there to learn, and not just because it's a roadblock in the way of gainful employment, is a rounding error.
More to the point, the universities need to realize they're more like job certification centers and stop pretending their students aren't just there to take tests and get certified. Ideally they'd stop cooperating with employers that want to use them as a filter for their hiring process, but even I'm not dumb enough to think that could ever happen: they'd be cutting off a massive source of revenue and putting themselves at a competitive disadvantage.
Like I said, I don't actually have a viable solution to any of this, but as long as we all lie to ourselves about education being some noble institution that it clearly isn't (I mean for undergrad and masters; it might actually still be that at the PhD level), then nobody will ever solve anything.
I predict that asking students to hand-write assignments is not going to go well. Unfortunately, universities built on the consumer model (author teaches at Arizona State) are incentivized to listen to student feedback over the professor’s good intentions.
So don't accredit universities that want to turn into degree mills.
Beat this game of prisoner's dilemma with a club at the accreditation level. Students can complain all they want, but if they want a diploma which certifies that they are able to perform the skills they learned, they will have to actually perform those skills.
> So don't accredit universities that want to turn into degree mills.
This is way outside the scope of something that a faculty member who is, as the article says, trying to teach has any hope of implementing within a reasonable time frame. Of course the ideal is that faculty, as major stakeholders in the educational institution, should ideally be active in all levels of university governance, but I think it is important to realize how much of a prerequisite there is for an individual professor even to get their voice heard by an accrediting body, let alone to change its accrediting procedures.
That's setting aside the fact that, even if faculty really mobilized to make such changes, in the absolute best case the changes would be slow to implement, and the effects would be slow to manifest, as universities are on multi-year accreditation cycles and there would need to be at least a few reputable universities that were disaccredited before others started taking the guidance seriously. Even if I were willing to throw everything into the politics of university governance, which would make my teaching suffer immensely, I'm not willing to say that we'll just have to wait a decade to see the effects.
The consumer model isn't all bad. But it can lead to wildly different outcomes based on self-selection and incentives.
Take gyms, for example. You have your cheap commodity convenience gyms like Planet Fitness, where a lot of people sign up (especially at the beginning of the year) but few actually stick to it to get any real gains. But you also have pricy fitness clubs with mandatory beginner classes, where the member base tends to be smaller but very committed to it.
I feel like students that are OK with just phoning it in with AI fall into the Planet Fitness mindset. If you're serious about gains (physically or intellectually), you'll listen to the instructors and persist through uncomfortable challenges.
I think a better approach might be to get students to use AI as a writing coach. Get them to commit to a short handwritten essay during class, then use AI to give them feedback on the essay. Their interaction with the AI, and how they respond to the feedback, becomes the assessment material. That's not compatible with the author's "Butlerian Jihad" ideology, though.
This is insane to me. Why not title the class "how to use AI?" Why not make this the title of every class?
I see no future in education other than making homework completely ungraded, and putting 100% of the grade into airgapped exams. Sure, the pen and paper CS exam isn't reflective of a real world situation, but the universities need some way to objectively measure understanding once the pupil has been disconnected from his oracle.
“Butlerian Jihad” ideology is definitely overselling it.
The bigger problem is that kids can just hand write an essay that an AI gave them.
I teach at a university and I just scale my homework assignments until they reach or slightly exceed the amount of work I expect a student to be able to do with AI. Before, I would give them a problem set; next semester, homework will be more like entire projects.
Punishing honest students by ensuring that they will fail unless they cheat is an absurd solution. In school I went to great lengths to do my work well and on my own. It was disheartening to see other students openly cheat and do well, but at least I knew that I was performing well on my own merits.
Under your system, I would have been actively punished for not cheating. What's the point of developing a cure that's worse than the disease?
> I just scale my homework assignments until they reach or slightly exceed the amount of work I expect a student to be able to do with AI.
1. Absurd. The measurement should be learning not “work”. My students move rocks with a forklift… so I give them more rocks to move?
2. From the university I’m looking for intellectual leadership. Professors thinking critically about what learning means and how to discuss it with students. The potential is there, but let’s not walk like zombies unthinking into a future where the disappearance of ChatGPT 8.5 renders thousands of people unable to meet the basic requirements of their jobs. Or its appearance renders them unemployed.
Sounds like an optimization for the students not interested in learning at the expense of the students who are.
I understand your intentions but I'm skeptical even this solves the problem.
Realistically I think we're just moving away from knowledge-work and efforts to resuscitate it are just varying levels of triage for a bleeding limb.
In the actual workplace with people making hundreds of thousands a year (the top echelon of what your class is trying to prepare students for) I'm not seeing output increase with AI tools so clearly effort is just decreasing for the same amount of output.
Perhaps your class is just supposed to be easier now and that's okay.
Then you're discriminating against students not using AI. I know for sure I would be depressed to be handed a huge pile of work to do myself while others just cheat and have the free time to do something else: work on interesting projects, see friends, whatever.
You sound like everything that is wrong with the US education system concentrated on one single person.
What made you get into teaching?
This claim is absurd and the comment is unserious.
Would the teacher then grade the massive workload with AI also? There isn't really a limit to how much output an AI can generate and the more someone demands, the less likely it is that the final result will be looked at in any depth by a human.
As long as you are upfront that you expect AI will be used, this seems like a solid and practical approach to me.
Do you expect someone to do the same for you? I mean, to increase your workload until you cannot manage it even with AI's help?
Let students use AI as they will when learning, but verify without allowing them to use it -- in class -- otherwise you have no way of knowing what they know. Job interviewers face the same problem.
We're seriously considering going back to onsite interviews; the big limiter is scheduling the interviewers.
I agree. An essay written on the spot in class with no electronics nearby seems like the counter.
I think AI is the perfect final ingredient to ruin the higher education system, which is already in ruins (at least over here in Finland).
Even before AI, our governments have long wanted more grads to make the statistics look good and to suppress wages, but don't want to pay for it. So what you get are more students, lower quality of education, and lower standards to make students graduate faster. Thanks to AI, students now don't have to meet even those low standards to pass the courses. What is left is just a huge waste of young people's time and taxpayers' money.
There are very few degrees I'm going to recommend to my children. Most just don't provide good value for one's time anymore.
What would you say is a good value for one's time instead?
AI for classical education can be an issue, but AI for inverted classes is perfect.
Going to school to listen to a teacher for hours and take notes, sitting in a group of peers to whom you are not allowed to speak, and then going home to do some homework on your own, this whole concept is stupid and deserves to die.
Learning lessons is the activity you should do within the comfort of your home, with the help of everything you can get, including books, AIs, YouTube videos, or whatever floats your boat. Working and practice, on the other hand, are social activities that benefit a lot from interacting with teachers and other students, and deserve to be done collectively at school.
For inverted classes, AI is no problem at all; on the contrary, it is very helpful.
AI is bad for academia and the educational industrial complex, but it is great for people that actually want to learn.
The AI tools should be helping more than hurting. But take my example: I am three years into litigation with my soon-to-be ex-wife. She recently fired her attorneys and for two weeks used ChatGPT to write very well-worded, very strong, and very logically appealing motions, practically destroying my attorney on multiple occasions; he had to work overtime, costing me an extra $80,000 in litigation costs. And finally, once we got in front of the judge, the ex could not put two logical sentences together. The paper can defend itself on its face, but it also turned out that not a single citation she cited had anything to do with the case at hand, which ChatGPT is known for in legal circles. She admitted using the tool and only got a verbal reprimand. The judge said the majority of that "work" was legal and that she could not stop her from exercising her First Amendment right: be it written by AI, she still had to form the questions, edit the responses, etc. And I wasn't able to recover a single dime, since on its face her motions did make sense, although the judge denied the majority of her ridiculous pleadings.
It's really frightening! It's like handing over the smartest brain possible to someone who is dumb, but also giving them a very simple GUI that they can actually operate, asking good-enough questions/prompts to get smart answers. Once the public at large figures this out, I can only imagine courts being flooded with all kinds of absurd pleadings. Being a judge in the near future will most likely be the least wanted job.
Up next: the judges use LLMs to evaluate arguments.
A good start for this debate would be to reconsider the term "AI", perhaps choosing a term that's more intuitive, like "automation" or "robot assistant". It's obvious that learning to automate a task is no way to learn how to do it yourself. Nor is asking a robot to do it for you.
Students need to understand that learning to write requires the mastery of multiple distinct cognitive and organizational skills, only the last of which is to generate text that doesn't sound stupid.
Each of writing's component tasks must be understood and explicitly addressed by the student, to wit: (1) choosing a topic to argue, and the component points to make a narrative, (2) outlining the research questions needed to answer each point, and finally, (3) choosing ONLY the relevant points that are necessary AND sufficient to the argument AND based on referenced facts, and that ONLY THEN can be threaded into a coherent logical narrative exposition that makes the intended argument and that leads to the desired conclusion.
Only then has the student actually mastered the craft of writing an essay. If they are not held responsible for implementing each and every one of these steps in the final product, they have NOT learned how to write. Their robot did. That essay is a FAIL because the robot has earned the grade; not they. They just came along for the ride, like ballast in a sailing ship.
The word is "machine".
Quite relevant here (maybe not where universities are directly concerned) is the history of the Luddites breaking automated textile equipment to protest their high-skilled jobs disappearing in favour of the much less skilled jobs of machine operators.
I have been wrestling with this too. I only see two options: no tech university or AI wrangling university.
https://solresol.substack.com/p/you-can-no-longer-set-an-und...
Not sure I agree with either/or. In person assessments are still pretty robust. I think an ideal university will teach both with a clear division between them (e.g. whether a particular assessment or module allows AI). What I'm currently struggling with is how to design an assessment in which the student is allowed to use AI - how do I actually assess it? Where should the bar actually be? Can it be relative to peers? Does this reward students willing to pay for more advanced AI?
The author is teaching a skill an LLM can do well enough to pass his exams. Is learning English composition in the literary sense now worth what it costs to learn it at a university? That's a very real question now.
What do you mean, “worth it”?
What is the alternative, we carry on without people skilled in the English language?
Not sure this is the provocative question you think it is. Were you educated in a university? Do you like being able to write English well? Would you rather that neither be true about you?
I'm all in for blue book style exams, in person and in a classroom. There is just too much rampant cheating, with or without LLMs.
How did we solve this when calculators came along and ruined people's ability to do mental arithmetic and use slide rules?
We banned it.
Yes, that's what we did and are still doing. Most grade schools don't allow calculators in basic arithmetic classes. Colleges don't integrate WolframAlpha into Calculus 101 exams. Etc.
Which is extremely stupid.
I want my math graduates to be skilled at using CAS systems. Yes, even in Calculus 1.
The lack of computer access in math teaching, a subject objectively supercharged by computation, is a massive disservice to the millions of individuals who could have used those CAS systems.
I don't want my engineers solving equations by hand. I especially don't want anyone who claims to be a "statistician" to not be skilled in Python (or historically, R)
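To make that concrete, here's a minimal sketch of what a CAS brings to a Calculus 1 workflow, using Python's sympy (the function here is an invented example, not anything from a specific course):

    import sympy as sp

    x = sp.symbols('x')
    f = x**3 - 3*x**2 + 4              # an arbitrary example function

    fprime = sp.diff(f, x)             # symbolic derivative: 3*x**2 - 6*x
    critical = sp.solve(fprime, x)     # critical points: [0, 2]
    area = sp.integrate(f, (x, 0, 2))  # exact definite integral: 4

    print(fprime, critical, area)

The student still has to know that critical points come from setting the derivative to zero; the CAS just takes over the algebraic grind.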
The impact LLMs have on education is arguably orders of magnitude higher than calculators
Not really, it's just calculators for all the other classes.
And tbh, lots of people historically would have loved a calculator that could write an essay about shakespeare or help code a simple game.
I was allowed to use calculators during my A-level Math/Physics/Chem exams, but knowing what to punch in was half the battle. Hell, they even give you most of the formulae on the very first page of the exam sheet, but again, application of that knowledge is the hard part.
Point being, the fundamentals matter. I can't do mental arithmetic very well these days because it's been years since I've practiced, but I know how it works in the first place and can do it if need be. How is a kid learning geometry or calculus supposed to get by and learn to spot the patterns that make sense and the ones that don't, without first knowing the fundamentals underlying the more complex concepts?
When I took multivariable calculus in tyool 2007, we were forbidden from using our calculators. "You can use a slide rule or an abacus" and I did indeed bring the former to one exam, but of course the problems were written in such a way that you didn't actually need it.
The difference is that using my calculator in real life works ALL the time and is cheap. I can depend on it. And I still need to think about the broader problem even if I have a calculator. The calculator only removes the mindless rote memorization of the steps needed to do arithmetic, etc.
My calculator doesn’t depend on a fancy AI model in the cloud. It’s not randomly rate limited during peak times due to capacity constraints. It’s not expensive to use, whereas the good LLM models are.
Did I mention calculators are actually deterministic? In other words, always reliable. It's difficult to compare the two. One gives a false sense of accomplishment because it's, say, 80% reliable, while the other is always 100% reliable.
Outsourcing a specific task to a deterministic tool you own is clearly not the same thing as outsourcing all of your cognition to a probabilistic tool owned by people with ongoing political and revenue motives that don’t align with your own.
We didn't, people who aren't good at doing math in their head are numerically illiterate and make bad decisions with money etc.
When it's general thinking we've trained people not to have to do anymore, it's going to be dire.
It sounds like you’re implying LLMs are to everything what calculators were to math, if so you are sorely mistaken
He's implying, rightfully so, that we've repeatedly adapted to various technologies that fundamentally threatened the then status quo of education. We'll do it again.
In my programming, algorithms and data structures courses the homework assignment completion has gone from roughly 50% before LLMs to 99% this year.
Making assignments harder would be unfair to those few students who would actually try to solve the problem without LLMs.
So what I do is require extensive comments and, ahem, chain-of-thought reasoning in the comments - especially the WHY part.
Then I require oral defense of the code.
Sadly this is unfeasible for some of the large classes of 200, but works quite well when I have the luxury of teaching 20 students.
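For illustration, here's a hypothetical sketch of the kind of WHY-heavy commenting such a policy could require (the function and the comments are invented, not the commenter's actual rubric):

    def dedupe_preserving_order(items):
        # WHY a set alongside a list: the set gives O(1) membership checks,
        # but sets don't preserve insertion order, so the list keeps the order.
        seen = set()
        result = []
        for item in items:
            # WHY not "if item not in result": that would rescan the list on
            # every iteration and make the whole function quadratic.
            if item not in seen:
                seen.add(item)
                result.append(item)
        return result

Comments like these are hard to produce without understanding the code, which is presumably also the point of the oral defense.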
The best class I took in college was a 3-hour long 5-person discussion group on Metaphysics. It’s a shame that college costs continue to rise, because I still don’t think anything beats small class sizes and active participation.
Ironically I have used ChatGPT in similar ways to have discussions, but it still isn’t quite the same thing as having real people to bounce ideas off of.
It's not that hard to save remote education accreditation. You just need a test pod.
Take one of those soundproofed office pods, something like what https://framery.com/en/ sells. Stick a computer in it, and a couple of cameras as well. The OS only lets you open what you want to open on it. Have the AI watch the student in real time, and flag any potential cheating behaviors, like how modern AI video baby monitors watch for unsafe behaviors in the crib.
If a $2-3000 pod sounds too expensive for you over the course of your child's education, I'm sure remote schoolers can find ways to rent pods at much cheaper scale, like a gym subscription model. If the classes you take are primarily exam-based anyway you might be able to get away with visiting it once a week or less.
I'm surprised nobody ever brings up this idea. It's obvious you have to fight fire with fire here, unless you want to 10x the workload of any teacher who honestly cares about cheating.
There's a few comments here about how AI will revolutionize learning because it's personalized or lets users explore or whatever. That's fundamentally missing the point. College students who are using AI aren't using it to learn better, they're using it to learn _less_. The point of writing an essay isn't the essay itself, it's the process of writing the essay: research, organization, writing, etc. The point of doing math problems isn't to get the answer, it's to _do the work_ to find the answer. If you let AI do that, you're not learning better, you're learning worse.
Now, granted, AI can help with things students are passionate about. If you want to do gamedev you might be able to get an AI to walk you through making a game in Unity or Godot. But societally we've decided that school should be about instilling a wide variety of base knowledge that students may not care about: history, writing, calculus. The idea is that you don't know what you're going to need in your life, and it's best to have a broad foundation so that if you run into something that needs it you'll at least know where to start. 99% of the time developing CRUD apps you're not going to need to know that retrieving an item from an array is O(n), but when some sales manager goes in and adds 2 million items to the storefront and now loading a page takes 12 seconds and you can't remove all that junk because it's for an important sales meeting 30 minutes from now, it's helpful to know that you might be able to replace it with a hashmap that's O(1) instead. AI's fine for learning things you want to learn, but you _need_ to learn more than just what you _want_ to learn. If you passed your Data Structures and Algorithms class by copy/pasting all the homework questions into ChatGPT, are you going to remember what big-O notation even means in 5 years?
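To make the storefront example concrete, here's a small self-contained Python sketch (the item count and the worst-case lookup are illustrative assumptions):

    import time

    N = 2_000_000
    items = list(range(N))                    # storefront as a flat array
    lookup = {item: True for item in items}   # the same data as a hashmap

    target = N - 1                            # worst case: the very last item

    start = time.perf_counter()
    found = target in items                   # O(n): scans the whole list
    print(f"list: {time.perf_counter() - start:.4f}s, found={found}")

    start = time.perf_counter()
    found = target in lookup                  # O(1): a single hash probe
    print(f"dict: {time.perf_counter() - start:.6f}s, found={found}")

On typical hardware the list scan takes tens of milliseconds per lookup while the dict is effectively instant; repeated per page element, that's the same asymptotic gap behind the hypothetical 12-second page load.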
I'm kind of happy that I did my maths courses just before LLMs became available. The math homework was the only thing in my CS studies where I sometimes sat 6+ hours on the weekly exercises, and I always allocated one day for them. I sometimes felt really tempted to look stuff up, and rarely found an answer on the Metroid Mathplanet forums. But it's really hard to Google math exercises, and if the teachers are motivated enough to write new, slightly altered questions each year, they are practically impossible to Google. With LLMs I'm sure I would have looked up a lot more. In the end, getting 90% of the points and really struggling for it was rewarding and taught me a lot - although I'll probably never need these skills.
Schools need to re-think what the purpose of essays was in the first place and re-invent homework to suit the existence of LLMs.
If it's to understand the material, then skip the essay writing part and have them do a traditional test. If it's to be able to write, they probably don't need that skill anymore, so skip the essay writing. If it's to get used to researching on their own, find a way to have them do that which doesn't work with LLMs. Maybe very high accuracy is required (a weak point for LLMs), or the output is not in an LLM-friendly form, or it's actually difficult to do, so the students have to be better than LLMs.
> "If it's to be able to write, they probably don't need that skill anymore..."
Any person who can't write coherently and in a well organized way isn't going to be able to prompt a LLM effectively either. Writing skills become _more_ important in the age of LLMs, not less.
"Writing is nature's way of letting you know how sloppy your thinking is." -- Leslie Lamport
Writing is an essential skill.
Writing well requires both having organized thinking and writing skills to carry the reader's thinking and feelings along where you want them to go. You can put your organized thinking into an LLM, perhaps as bullet points or dense explanations and it can do the writing skills part. You just need to be able to read well enough to evaluate the output, which is a lot easier.
The issue is trust, AI is not the issue.
Culture, not technology.
Basically it comes to this: a sufficiently large proportion of a student's grade must come from work impossible to generate with AI, e.g. in-person testing.
Unfortunately, 18-year-olds generally can't be trusted to go a whole semester without succumbing to the siren call of easy GenAI A's. So even if you tell them that the final will be in-person, some significant chunk of them will still ChatGPT their way through and bomb the final.
Therefore, professors will probably have to have more frequent in-person tests so that students get immediate feedback that they're gonna fail if they don't actually learn it.
Literally this. The education system is lazy and tests people only every 30 days, with a test or midterm. This is the system's fault. Quiz every day. Catch where people are struggling, early. The quiz can be on their phones and let you know when they switch apps. Just have them close their laptops, take out their phones, scan QR codes from the screen in front, or pasted on a wall, and then 5 min quiz on their phones. That's what I did.
>Unfortunately, 18-year-olds generally can't be trusted to go a whole semester without succumbing to the siren call of easy GenAI A's. So even if you tell them that the final will be in-person, some significant chunk of them will still ChatGPT their way through and bomb the final.
I really think we need these policies to be developed by the opposite of misanthropists.
I wonder if culture has gone wrong where children or students simply cannot be failed anymore. Or sometimes even given less than perfect grades...
Maybe we should go back to times where failing a student was seen more as the fault of the student than of the system. At least when the majority of students pass and there is no proven fault on the faculty's part.
So they bomb their test. And? Isn't that the entire point of an exam? If you fail, you fail and presumably have to re-learn the contents.
I've found LLMs to often be a time-suck rather than supercharge my own learning. A huge part of thinking is reconsidering your initial assumptions when you start to struggle in research, mathematical problem solving, programming, whatever it may be. AI makes it really easy to go down a rabbit hole and spend hours filling in details to a question or topic that wasn't quite right to begin with.
Basically analog thinking is still critical, and schools need to teach it. I have no issues with classrooms bringing back the blue exam books and evaluating learning quality that way.
AI definitely makes it easier for students to finish their assignments, but that’s part of the problem. It’s getting harder to tell whether they actually understand anything. What’s more worrying is how fast they’re losing the habit of thinking for themselves.
And it’s not just in school. I see the same thing at work. People rely on AI tools so much, they stop checking if they even understand what they’re doing. It’s subtle, but over time, that effort to think just starts to fade.
> An infinitely patient digital tutor that can tackle any question…..You might feel like you are learning when querying a chatbot, but those intellectual gains are often illusory.
I get the shade here (kind of?) and have seen both sides in my life, but isn’t having a tutor exactly what you need to learn?
IMO, using it as an information butler is leagues different from a digital tutor. That’s the key— don’t bring forklifts to the gym lol
Maybe switching it up could work. What if learning happened at home with the use of AI and "homework" happened in class under supervision?
https://en.m.wikipedia.org/wiki/Flipped_classroom
I wrote my master's thesis about that.
It's an old idea.
if I were teaching english today, i would ask students to write essays taking the positions that an AI is not allowed to. steelman something appalling. stand up in class and debate like your life or grade depends on it and fail anyone who doesn't, and if that excludes people, maybe they don't belong in a university.
in everything young people actually like, they train, spar, practice, compete, jam, scrimmage, solve, build, etc. the pedagogy needs to adapt and reframing it in these terms will help. calling it homework is the source of a flawed mental model that problematizes the work instead of incentivising it, and now that people have a tool to solve the problem, they're applying their intelligence to the problem.
arguably there's no there there for the assignments either, especially for a required english credit. the institution itself is a transaction that gets them a ticket to an administrative job. what's the homework assignment going to get them they value? well roundedness, polish, acculturation, insight, sensitivity, taste? these are not valuable or differentiating to kids in elite institutions who know they are competing globally for jobs that are 95% concrete political maneuvering, and most of them (especially in stem) probably think the class signifiers that english classes yield are essentially corrupt anyway.
maybe it's schadenfreude and an old class chip on my part, but what are they going to do, engage in the discourse and become public intellectuals? argue about rimbaud and voltaire over coffee, cigarettes and jazz? Some of them have higher follower counts than there were readers of the novels or articles being taught in their classes. More people read their tweets every day than have ever read a book by Chiang. AI isn't the problem, it's a forcing function and a solution. Instructors should reflect on what their institutions have really become.
In Roman times, teaching focused on wrestling to prepare young people for life. Now, in the AI age, what to teach and why have once again become major questions, especially when AI can pass the bar exam and a Ph.D. is no longer a significant achievement. Critical thinking and life experience could be the target, but would they do it?
Fight fire with fire.
Use AI to determine potential essay topics that are as close to 'AI-proof' as possible.
Here is an example prompt:
"Describe examples of possible high school essay topics where students cannot use AI engines such as perplexity or ChatGPT to help complete the assignment. In other words - AI-proof topics, assignments or projects"
Perhaps we should reconsider the purpose of teaching. If one does not want to learn, why are we teaching them?
For the vast majority that enroll higher education: Because they want a job. They need a job.
The degree is the key that unlocks the door to a job. Not the knowledge itself, but the actual physical diploma.
And it REALLY, REALLY doesn't help that there are so many jobs out there that could be done just fine with a HS diploma. But because reasons, you now need a college degree for that job.
The problem isn't new. For decades people have bought fake degrees, hired people to do their work, even hired people to impersonate them.
> Perhaps we should reconsider the purpose of teaching. If one does not want to learn, why are we teaching them?
Certainly there's something to be said for reconsidering much of the purpose (and mechanisms) of post-secondary education, but we often 'force' children and young adults to do things they don't want to do for their own good. I think it's better we teach our children the importance of learning - the lack of which is what results in, as another commenter puts it, students viewing homework as "something they have to overcome"
Because it is necessary, think about toilet training a toddler.
That's a silly argument, onus is on the teacher to make the subject interesting for students.
I don't see how AI is making it harder to make the subject more interesting. Homework certainly isn't what gets people interested.
But regardless I don't buy that, especially in college where you pick your own set of classes.
To avoid November 6, 2024?
Those protestors mostly went to government schools, and were likely radicalized because of their time in them. Being in school doesn't make the hate in your heart go away. It forces you to rub shoulders with the exact kind of people you believe are subhuman - and even gives more ammunition for them to use in their mind when arguments against racism are made to them.
There's a reason why conservatives are so obsessed with school choice, LGBT book bans, etc.
Maybe just stop giving homework and instead give the kids some time to live. Fixed it for you.
The author teaches a college-level writing class. Are you suggesting that, if you voluntarily take a writing class, it's unreasonable if the professor expects you to do some writing outside of class?
Caveat, I'm just armchair-commenting and I haven't thought much about this.
After kids learn to read and do arithmetic, shouldn't we go back to apprenticeships? The system of standardized teaching and grading seems to be about to collapse, and what's the point of memorizing things when you can carry all that knowledge in your pocket? And, anyway, it doesn't stick until you have to use it for something. Plus, a teacher seems to be insufficient to control all the students in a classroom (but that's nothing new; it amazes me that I was able to learn anything at all in elementary school, with all the mayhem there always was in the classroom).
Okay, I can already see a lot of downsides to this, starting with the fact that I would be an illiterate farmer if some in my family had had a say in my education. But maybe the aggregate outcome would be better than what is coming?
> I want my students to write unassisted because I don’t want to live in a society where people can’t compose a coherent sentence without a bot in the mix.
Kicking against the pricks.
It is understandable that professional educators are struggling with the AI paradigm shift, because it really is disrupting their profession.
But this new reality is also an opportunity to rethink and improve the practice of education.
Take the author's comment above: you can't disagree with the sentiment, but a more nuanced take is that AI tools can also help people to be better communicators, speakers, and writers. (I don't think we've seen the killer apps for this yet, but I'm sure we will soon.)
If you want students to be good at spelling and grammar then do a quick spelling test at the start of each lesson and practice essay writing during school time with no access to computers. (Also, bring back Dictation?)
Long term: yes I believe we're going to see an effect on people's cognition abilities as AI becomes increasingly integrated into our lives. This is something we as a society should grapple with and develop new enlightened policies and teaching methods.
You can't put the genie back in the bottle, so adapt, use AI tools wisely, think deeply about ways to improve education in this new era.
I'm curious to see how the paper-and-pen pivot goes. There's something radical about going analog again in a world that's hurtling toward frictionless everything.
The idea with calculators was that as a tool there are higher level questions that calculators would help you answer. A simple example is that calculators don't solve word problems, but you can use them to do the intermediate computations.
What are the higher level questions that LLMs will help with, but for which humans are absolutely necessary? The concern I have is that this line doesn't exist -- and at the very best it is very fuzzy.
Ironically, this higher level task for humans might be ensuring that the AIs aren't trying to get us (whatever that means, genocide, slavery, etc...).
Those higher level questions are likely outside the scope of the class. Like write a novel or something like that.
I think as a culture we've fetishized formal schooling way past its value. I mean, how much of what you "learned" in school do you actually use or remember? I'm not against education, education is very important, but I'm not sure that schooling is really the optimal route to being educated. They're related, but they're not the same.
The reality is, if someone wants to learn something then there's very little need to cheat, and if they don't want to learn the thing but they're required to, the cheating sort of doesn't matter in the end because they won't retain or use it.
Or to put it simpler, you can lead a horse to water but..
The fetishizing enabled the massive explosion in what's basically a university industrial complex financed off the backs of student loans. To keep growing, the industry needed more suckers... I mean students, to extract student loans from. This meant watering down the material even in technical degrees like engineering, passing kids who should have failed, and lowering admission standards (masked by grade inflation). Many programs are really, really bad now, covering what should be high-school freshman level material. Criticizing the university system gets you called anti-intellectual and a redneck.
A lot of debate around the idea of student loan forgiveness but nobody is trying to address how the student loan problem got so bad in the first place.
All primary schooling is designed to teach people about everything they can learn. If we don’t, many of them will end up in the coal mines because it’s the only thing they know.
>I think as a culture we've fetishized formal schooling way past its value. I mean, how much of what you "learned" in school do you actually use or remember? I'm not against education, education is very important, but I'm not sure that schooling is really the optimal route to being educated. They're related, but they're not the same.
Yeah its absolutely bonkers. I spent 9 months out of school traveling, and the provided homework actually set me ahead of my peers when I had returned.
No one's stopped and considered "What is a school for?"
For some people it seems to be mandatory state-sponsored childcare. For others it's about food? Some people tell me it sucks, but it's the best way to get kids to socialise?
I feel like if it were an engineering project there would be a formal requirements study, but because it's a social program, what we get instead is just a big bucket of feelings and no defined scope.
During my time I have come to view schooling as an adversary. I am considering whether it might be prudent to instruct my now toddler that school is designed to break him, and that his role is actually to achieve in spite of it, and that some of his education will come in opposition to the institution.
One of the last courses I took during my CS degree we had one on one 10 minute zoom calls with TAs who would ask a series of random detailed questions about any line of code in any file of our term project. It was easy to complete if you wrote the code by hand and I imagine would have been difficult for students who extensively cheated.
In terms of creative writing I think we need to accept that any proper assessment will require a short essay to be written in person. Especially at the high school level there's no reason why a 12th grade student should be passing english class if they can't write something half-decent in 90 minutes. And it doesn't need to be pen and paper - I'm sure there are ways to lock a chromebook into some kind of notepad software that lacks writing assistance.
Education should not be thought of as solely a pathway to employment it's about making sure people are competent enough to interface with most of society and to participate in our broader culture. It's literally an exercise in enlightenment - we want students to have original insights about history, culture, science, and art. It is crucial to produce people who are pleasant to be around and who are interesting to talk to - otherwise what's the point?
It's honestly encouraging to see an educator thinking about solutions instead of wagging a finger at LLMs and technology and this new generation. Homework in its current form cannot exist AND be beneficial for the students -- educators need to evolve with the technology to work alongside it. The Google Docs idea was smart, but the return to pen and paper in the classroom is great. Good typists will hate it at first, but transcribing ideas more slowly and semi-permanently has its benefits.
Between widespread social media/short form video addiction and GPT for all homework starting in middle school, I think ASI is nearly guaranteed by virtue of the human birth/death process, with no further model improvement required.
I am kinda shocked that the thing being shared on HN, unironically, is an essay about the attraction of the idea of the Butlerian Jihad. Interesting times.
No mention of Danny Dunn. Tsk.
https://www.semicolonblog.com/?p=32946
As late as 1984, Danny Dunn shared a place of honor on my bookshelves, along with Encyclopedia Brown.
The long list of titles is interesting and almost leads us to a self-referential thought. These series were often known as "boiler-room novels" because they were basic and formulaic, and it was possible to command a team of entry-level writers to churn them out.
LLMs are here to stay and will change learning for the better (we will be full-scale disrupted in EDU 3-5 years from now). It is a self-guided tutor like never before and 100% Amazing, except for when it hallucinates.
I use it [Copilot / GPT / Khanmingo] all the time to figure out new tools and prototype workflows, check code for errors, and learn new stuff including those classes at universities which cost way too much.
If universities feel threatened by AI cry me a river.
No professor or TA was *EVER* able to explain calculus and differential equations to me, but Khanmingo and ChatGPT can. So the educational establishment can deal with this.
Back in the day, when I first tried college, I simply could not comprehend higher-level math. We had one professor and a couple of TAs, but it was impenetrable for me. They just said to me, "Go to the library and try some different books", or "Try to find some students and discuss the topics". Tried that, but to no avail.
I was a poor math student in HS, but I loved electronics, so that's why I decided to pursue electrical engineering. Seeing that I simply could not handle the math, I dropped out after the first year and started working as an electrician's apprentice.
Some years later YouTube had really taken off, and I decided to check out some of the math tutors there. Found Khan Academy, and over the course of a week, everything just fell into place. I started from the absolute beginning, and binged/worked my way up to HS pre-calc math. His style of short-form teaching just worked, and he's a phenomenal educator on top.
Spent the summer studying math, and enrolled college again in the fall. Got A's and B's in all my engineering math classes. If I ever got stuck, or couldn't grok something, I trawled youtube for math vids / tutors until I found someone that could explain it in a way I could understand.
These days I use LLMs in the way you do, and I sort of view it as an extension of the way I learned things before: an infinite number of tutors.
Of course, one problem is that one doesn't know what one doesn't know. Is the model lying to you? Well, luckily there are many different models, and you can compare to see what they say.
In your situation where LLMs can cover most material better than the university, what benefits does the university still provide you, if any?
Exactly. I can remember 2-3 teachers in my life that were good but most were absolutely terrible.
I even remember taking a Philosophy of AI class in 1999, something that should have been interesting and intellectually stimulating to any thinking student, and the professor managed to clear the lecture hall from 300 to 50, before I stopped going too, with his constant self-aggrandizing bullshit.
I had a history teacher in high school that didn't try to hide he was a teacher so he could travel in the summer and then made a large part of the class about his former and upcoming travels.
Most weren't this bad but they just sucked at explaining concepts and ideas.
The whole education system should obviously be rebuilt from the ground up, but it will be decades before we bother with this. Someone above mentioned the Romans teaching wrestling to students. We are those Romans, and we are just going to keep teaching wrestling. I learned to wrestle, my father learned to wrestle, so my kids are going to learn to wrestle, because that is what defines an educated person!
> I think there is a good case to be made for trying to restrict AI use among young people the way we try to restrict smoking, alcohol, gambling, and sex.
I would go further than that, along two axes: it's not just AI and it's not just young people.
An increasing proportion of our economy is following a drug dealer playbook: give people a free intro, get them hooked, then attach your siphon and begin extracting their money. The subscription-model-ization of everything is an obvious example. Another is the "blitzscaling" model of offering unsustainably low prices to drive out competition and/or get people used to using something that they would never use if they had to pay the true cost. More generally, a lot of companies are more focused on hiding costs (environmental, psychological, privacy, etc.) from their customers than on actually improving their products.
Alcohol, gambling, and sex, are things that we more or less trust adults to do sensibly and in moderation. Many people can handle that, and there are modest guardrails in place even so (e.g., rules that prevent selling alcohol to drunk people, rules that limit gambling to certain places). I would put many social media and other tech offerings more in the category of dangerous chemicals or prescription drugs or opiates (like the laudanum the article mentions). This would restrict their use, yes, but the more important part is to restrict their production and set high standards for the companies that engage in such businesses.
Basically, you shouldn't be able to show someone --- child or adult --- an infinite scrolling video feed, or give them a GPT-style chatbot, or offer free same-day shipping, without getting some kind of permit. Those things are addictive and should be regulated like drugs.
And the penalties for failing to do everything absolutely squeaky clean should be ruinous. The article mentions one of Facebook's AIs showing CSAM to kids. One misstep on something like that should be the end of the company, with multi-year jail terms for the executives and the venture capitalists who funded the operation. Every wealthy person investing in these kinds of things should live in constant fear that something will go wrong and they will wind up penniless in prison.
As others have already mentioned, I believe that it's mainly the curious and engaged students who will benefit greatly from AI. And for those who cheat or use AI to deceive and end up failing a written exam, well, maybe that's not such a bad thing after all...
Stop giving boring ass essay assignments. Forest, trees.
Let me just say that I always like these types of conversation on here. Tech dorks and education are an interesting conversation. I'll throw in my 2 cents as a HS CS teacher.
First off, I respect the author of the article for trying pen and paper, but that’s just not an option at a lot of places. The learning management systems are often tied in through auto grading with google classroom or something similar. Often you’ll need to create digital versions of everything to put in management systems like Atlas. There’s also school policy to consider and that’s a whole nother can of worms. All that aside though.
The main thing that most people don't have in the forefront of their mind in this conversation is the fact that most students (or adults) don't want to learn. Most people don't want to change. Most students will do anything and everything in their power to avoid those two things. I've often thought about why: maybe to truly learn you need to ignore your ego and accept that there's something you don't know; maybe it's a biological thing and humans are averse to spending calories on mental processes that they don't see as a future benefit. Who knows.
This problem runs core to all of modern education (and probably has since the idea of mandatory mass education was called forth from the pits of hell a few hundred years ago). LLMs have really just brought us as a society to a place where it can no longer be ignored, because students no longer have the need to do what they see as busy work. Sadly, they don't inherently understand how writing essays on oppressed children hiding in attics more than half a century ago helps them in their modern tiktok-filled lives.
The other issue is that, for example, in the schools I've worked at, since the advent of LLMs, many teachers and most of the admin take this bright and cheery approach to LLMs. They say things like, "The students need to be shown how to do it right," or "Help the students learn from ChatGPT." The fact that the vast majority of students in high school just don't care escapes them. They feel like it's on the teachers to wield, and to help the students wield, this mighty new weapon in education. But in reality, it's just the same war we've always had between predator and prey (or guard and prisoner), except I fear in this one only one side will win. The students will learn how to use chat better, the teachers will have nothing to defend against it, and so they will all throw up their hands and start using chat to grade things. Before you know it, the entire education system is just chat grading work submitted by chat under the guise of, "oh, but the student turned it in, so it's theirs."
The only thing LLMs have done, and more than likely ever do, in education is to make it blatantly obvious that students are not empty vessels yearning for a drink from the fountain of knowledge that can only be provided to them by the high and mighty educational institution. Those students do exist and they will always find a way to learn. I also assume that many of us here fall into that, but those of us that do are not the majority.
My students already complain about the garbage chat-created assignments their teachers are giving them. Entire chunks of my current school are using chat to create tests, exams, curriculum, emails, and all other forms of "teacher work". Several teachers, who are smart enough, are already using chat to grade things. The CEO of the school is pushing for every grade (1-12) to have two AI classes a week where they are taught how to "properly" use LLMs. It's like watching a train wreck in slow motion.
The only way to maintain mandatory mass education is by accepting no one cares, finding a way to remove LLMs from the mix, or switching to Waldorf, homeschooling, or some other system better than mandatory mass education. The wealthy will be able to; the rest will suffer.
In case you missed it, South Park did an episode about that two years ago :
https://en.wikipedia.org/wiki/Deep_Learning_(South_Park)
well worth the read just for the term "broligarch"
There is a tremendous lack of understandings between the genx and millennial teachers and the way they see and use AI, and how younger people are using it.
Kids use AI like an operating system, seamlessly integrated into their workflows, their thinking, their lives. It's not a tool they pick up and put down; it's the environment they navigate, as natural as air. To them, AI isn't cheating; it's just how you get things done in a world that's always been wired, always been instant. They do not make major life decisions without consulting their systems. They use them like therapists. It is already far more than a Google replacement or a writing tool.
This author’s fixation on “desirable difficulty” feels like a sermon from a bygone era, steeped in romanticized notions of struggle as the only path to growth. It’s yet another “you can’t use a calculator because you won’t always have one” — the same tired dogma that once insisted pen-and-paper arithmetic was the pinnacle of intellectual rigor (even after calculators arrived: they have in fact always been with us every day since).
The Butlerian Jihad metaphor is clever but deeply misguided: casting AI as some profane mimicry of the human mind ignores how it's already reshaping cognition, not replacing it.
The author laments students bypassing the grind of traditional learning, but what if that grind isn’t the sacred rite they think it is? What if “desirable difficulty” is just a fetishized relic of an agrarian education system designed to churn out obedient workers, not creative thinkers?
The reality is, AI’s not going away, and clutching pearls about its “grotesque” nature won’t change that. Full stop.
Students aren’t “cheating” when they use it… they’re adapting to a world where information is abundant and synthesis is king. The author’s horror at AI-generated essays misses the point: the problem isn’t the tech, it’s the assignments (and maybe your entire approach).
If a chatbot can ace your rhetorical analysis, maybe the task itself is outdated, testing rote skills instead of real creativity or critical thinking.
Why are we still grading students on formulaic outputs when AI can do that faster?
The classroom should be a lab for experimentation, not a shrine to 19th century pedagogy, which it most definitely is. I was recently lectured by a teacher about how he tries to make every one of his students a mathematician, and he became enraged when I gently asked how he's dealing with the disruption that AI systems are currently causing to mathematicians as a profession. There is an adversarial response underneath a lot of teachers' thin veneers of "dealing with the problem of AI" that is just wrong, and such a cope.
That obvious projection leads directly to this "adversarial" grading dynamic. The author's chasing a ghost, trying to police AI use with Google Docs surveillance or handwritten assignments. That's not teaching. What it is, is standing in the way of civilizational progress because it doesn't fit your ideas. I know there are a lot of passionate teachers out there, and some even get it, but most definitely do not.
Kids will find workarounds, just like they always have, because they’re not the problem; the system is. If students feel compelled to “cheat” with AI, it’s because the stakes (GPAs, scholarships, future prospects) are so punishingly high that efficiency becomes survival.
Instead of vilifying them, why not redesign assessments to reward originality, process, and collaboration over polished products? AI could be a partner in that, not an enemy.
The author’s call for a return to pen and paper feels like surrender dressed up as principle, and it’s ridiculously out of touch.
It’s not about fostering “humanity” in the classroom; it’s about clinging to a nostalgic ideal of education that never served everyone equally anyway.
Meanwhile, students are already living in the future, where AI is as foundational as electricity.
The real challenge isn’t banning the “likeness bots” but teaching kids how to wield them critically, ethically, and creatively.
Change isn’t coming. It is already here. Resisting it won’t make us more human; it’ll just leave us behind.
Edit: sorry for so many edits. Many typos.
ChatGPT is only 2.5 years old. How are kids using AI like it's always been around? I really hope they aren't making major life decisions by consulting chatbots from big tech companies instead of their relatives, teachers, and friends. I'm old enough to recall when social media was viewed as this incredibly positive tech for humanity. How things have changed. One wonders how we'll view the impact of AIs in a few years.
I teach Enterprise Architecture at the graduate level. I would absolutely not mind people using AI as an OS or an information source or a therapist. I would not mind them looking things up in an encyclopedia, so why mind them using AI?
What I do mind is:

- The incredibly generic slop AI generates: let’s improve communication, make a better strategy, improve culture.

- The unwavering belief in AI. I tell my students why using AI will not give them a good grade. They get a case solved by all the major LLMs, graded, with thorough feedback and a bad grade. I tell them that literally writing anything at all as the answer would not get a much worse grade. And still they go and use AI and get bad grades.

- The incredible intellectual laziness it seems to foster. I criticize TOGAF in my course (let’s not get into that) and explicitly state it to be outside the course material. Repeatedly, in writing and verbally. And what do the students do? They ask an LLM, which inevitably starts referring to TOGAF. And the answer is copied into the case analysis without even an attempt to actually utilize TOGAF or to justify the choice made.
My students actually get worse grades and are worse off in terms of being able to solve actual real-life problems, because they use AI. Getting a degree should increase their intellectual capabilities, but people actively choose not to let it, thus wasting their time. And that’s what I’m not OK with.
How do you test "real creativity" and "critical thinking" in a way that is both scalable and reliably tells apart those who get it and those who don't?
It's interesting to note that your comment and my comment ended up right at the end, having been downvoted, with no downvoters commenting on why they disagree with your points, or mine.
I assume it's because many of the commenters of this post are skewed towards academia, and perhaps view the disruption by AI to the traditional methods of grading student work as a challenge to their profession.
As we have seen many times throughout history, when disruptive technical or demographic changes occur, or a new set of market forces emerges, incumbents often struggle to adapt to the new situation.
Established traditional education is a massive ship to turn around.
Your comments contain much food for thought and deserve to be debated. I agree with you that educators should not be branding students as cheaters. Using AI in an educational context is a rational and natural thing to do, especially for younger students.
> ... AI as some profane mimicry of the human mind ignores how it’s already reshaping cognition, not replacing it.
- Yes, this is such an important point and it's why we need enlightened policy making leading to meaningful education reform.
I do disagree with you about pen and paper, though - I think incorporating more pen-and-paper activities would provide balance and build some important key skills.
No doubt AI is challenging to many areas of society, especially education. I'm not saying it's a wonderful thing that we don't need to worry about, but we do need to think deeply about its impacts and how we can harness its positive strengths and radically improve teaching and learning outcomes. It's not about locking students in exam rooms with high tech surveillance.
It's disappointing that, when it comes to AI, the prevalent opinions of many educators seem stuck, struggling to adapt.
Meanwhile society will move on.
Edit: good to see you got a response!
Decades of research into learning shows that "desirable difficulty" is not, as you put it, "just a fetishized relic of an agrarian education system designed to churn out obedient workers, not creative thinkers." Rather, difficulty means you are encountering things you do not already understand. If you are not facing difficulties then your time is being wasted. The issue is that AI allows people to avoid facing difficulties and thus allows them to waste their time.
You think we will make progress by learning to use AI in certain ways, and that assignments can be crafted to inculcate this. But a moment's acquaintance with people who use AI will show you that there is a huge divide between some uses of AI and others, and that some people use AI in ways which are not creative and so on. Ideally this would prompt you to reflect on what characteristics of people incline them towards using AI in certain ways, and what we can do to promote the characteristics that incline people to use AI in productive and interesting ways, etc. The end result of such an inquiry will be something like what the author of this piece has arrived at, unfortunately. Any assignment you think is immune to lazy AI use is probably not. The only real solution is the adversarial approach the author adopts.