
Comment by jacobedawson

5 days ago

An underrated quality of LLMs as a study partner is that you can ask "stupid" questions without fear of embarrassment. Adding in a mode that doesn't just dump an answer but works to take you through the material step-by-step is magical. A tireless, capable, well-versed assistant on call 24/7 is an autodidact's dream.

I'm puzzled (but not surprised) by the standard HN resistance & skepticism. Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content and attempting to piece together mental models without the chance to receive immediate feedback on intuition or ask follow up questions. This is leaps and bounds ahead of that experience.

Should we trust the information at face value without verifying from other sources? Of course not, that's part of the learning process. Will some (most?) people rely on it lazily without using it effectively? Certainly, and this technology won't help or hinder them any more than a good old fashioned textbook.

Personally I'm over the moon to be living at a time when we have access to incredible tools like this, and I'm impressed with the speed at which they're improving.

> Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content and attempting to piece together mental models without the chance to receive immediate feedback on intuition or ask follow up questions. This is leaps and bounds ahead of that experience.

But now, you're wondering if the answer the AI gave you is correct or something it hallucinated. Every time I find myself putting factual questions to AIs, it doesn't take long for it to give me a wrong answer. And inevitably, when one raises this, one is told that the newest, super-duper, just released model addresses this, for the low-low cost of $EYEWATERINGSUM per month.

But worse than this, if you push back on an AI, it will fold faster than a used tissue in a puddle. It won't defend an answer it gave. This isn't a quality that you want in a teacher.

So, while AIs are useful tools in guiding learning, they're not magical, and a healthy dose of scepticism is essential. Arguably, that applies to traditional learning methods too, but that's another story.

  • > But now, you're wondering if the answer the AI gave you is correct

    > a healthy dose of scepticism is essential. Arguably, that applies to traditional learning methods too, but that's another story.

    I don't think that is another story. This is the story of learning, no matter whether your teacher is a person or an AI.

    My high school science teacher routinely mispoke inadvertently while lecturing. The students who were tracking could spot the issue and, usually, could correct for it. Sometimes asking a clarifying question was necessary. And we learned quickly that that should only be done if you absolutely could not guess the correction yourself, and you had to phrase the question in a very non-accusatory way, because she had a really defensive temper about being corrected that would rear its head in that situation.

    And as a reader of math textbooks, both in college and afterward, I can tell you you should absolutely expect errors. The errata are typically published online later, as the reports come in from readers. And they're not just typos. Sometimes it can be as bad as missing terms in equations, missing premises in theorems, missing cases in proofs.

    A student of an AI teacher should be as engaged in spotting errors as a student of a human teacher. Part of the learning process is reaching the point where you can and do find fault with the teacher. If you can't do that, your trust in the teacher may be unfounded, whether they are human or not.

    • >I don't think that is another story. This is the story of learning, no matter whether your teacher is a person or an AI.

      My issue is the reverse of your story, and one of my biggest pet peeves with AI. AI as a business construct is very bad at correcting the user. You're not going to gaslight your math teacher into believing that 1 + 1 = 3 no matter how much you assert it; an AI will quickly relent. That's not learning, that's coddling, because a business doesn't want to make an obviously wrong customer feel bad.

      >Part of the learning process is reaching the point where you can and do find fault with the teacher.

      And without correction, this will lead to turmoil. For the reasons above, I don't trust learning from an AI unless you already have this ability.

      8 replies →

  • My favourite story of that involved attempting to use an LLM to figure out whether it was true or my hallucination that the tidal waves were higher in the Canary Islands than in the Caribbean, and why; it spewed several paragraphs of plausible-sounding prose, and finished with “because Canary Islands are to the west of the equator”.

    This phrase is now an in-joke used as a reply to someone quoting LLM info as “facts”.

    • This is meaningless without knowing which model, size, version and if they had access to search tools. Results and reliability vary wildly.

      In my case I can't even remember the last time Claude 3.7/4 gave me wrong info, as it seems very intent on always doing a web search to verify.

      8 replies →

  • Please check this excellent LLM-RAG AI-driven course assistant at UIUC for an example of a university course [1]. It provides citations and references, mainly to the course notes, so students can verify the answers and further study the course materials.

    [1] AI-driven chat assistant for ECE 120 course at UIUC (only 1 comment by the website creator):

    https://news.ycombinator.com/item?id=41431164

  • Despite the name of "Generative" AI, when you ask LLMs to generate things, they're dumb as bricks. You can test this by asking them anything you're an expert at - it would dazzle a novice, but you can see the gaps.

    What they are amazing at though is summarisation and rephrasing of content. Give them a long document and ask "where does this document assert X, Y and Z", and it can tell you without hallucinating. Try it.

    Not only does it make for an interesting time if you're in the world of intelligent document processing, it makes them perfect as teaching assistants.

  • I often ask first, "discuss what it is you think I am asking" after formulating my query. Very helpful for getting greater clarity and leads to fewer hallucinations.

  • > you're wondering if the answer the AI gave you is correct or something it hallucinated

    Worse, more insidious, and much more likely is the model is trained on or retrieves an answer that is incorrect, biased, or only conditionally correct for some seemingly relevant but different scenario.

    A nontrivial amount of content online is marketing material, that is designed to appear authoritative and which may read like (a real example) “basswood is renowned for its tonal qualities in guitars”, from a company making cheap guitars.

    If we were worried about a post-truth era before, at least we had human discernment. These new capabilities abstract away our discernment.

    • The sneaky thing is that the things we used to rely on as signals of verification and credibility can easily be imitated.

      This was always possible--an academic paper can already cite anything until someone tries to check it [1]. Now, something looking convincing can be generated more easily than something that was properly verified. The social conventions evaporate and we're left to check every reference individually.

      In academic publishing, this may lead to a revision of how citations are handled. That's changed before and might certainly change again. But for the moment, it is very easy to create something that looks like it has been verified but has not been.

      [1] And you can put anything you like in footnotes.

  • To be honest I now see more hallucinations from humans on online forums than I do from LLMs.

    A really great example of this is Grok on Twitter, constantly debunking human “hallucinations” all day.

    • Ah yes, like when Grok hallucinated Obama and Biden in a picture with two drunk dudes (both white, BTW).

  • Is this a fundamental issue with any LLM, or is it an artifact of how a model is trained, tuned and then configured or constrained?

    A model that I call through e.g. langchain with constraints, system prompts, embeddings and whatnot will react very differently from when I pose the same question through the AI provider's public chat interface.

    Or, putting the question differently: could OpenAI not train, constrain, configure and tune models and combine them into a UI that then acts different from what you describe for another use case?

  • Let's also not forget the ecological impact and energy consumption.

    • Honestly, I think AI will eventually be a good thing for the environment. If AI companies are trying to expand renewables and nuclear to power their datacenters for training, well, that massive amount of renewables and battery storage becomes available when training is done and the main workload is inference. I know they are consistently training new stuff at small scale, but from what I've read the big training runs only happen when they've proven out what works at small scale.

      Also, one has to imagine that all this compute will help us run bigger / more powerful climate models, and Google's AI is already helping them identify changes to be more energy efficient.

      The need for more renewable power generation is also going to help us optimize the deployment process, e.g. modular nuclear reactors, in situ geothermal taking over old stranded coal power plants, etc.

      1 reply →

  • The joke is on you, I was raised in Eastern Europe, where most of what history teachers told us was wrong

    That being said, as someone who worked in a library and a bookstore, 90% of workbooks and technical books are identical. NotebookLM's mindmap feature is such a time saver.

  • I had teachers tell me all kinds of wrong things also. LLMs are amazing at the Socratic method because they never get bored.

  • > you're wondering if the answer the AI gave you is correct or something it hallucinated

    Regular research has the same problem finding bad forum posts and other bad sources by people who don't know what they're talking about, albeit usually to a far lesser degree depending on the subject.

    • Yes but that is generally public, with other people able to weigh in through various means like blog posts or their own paper.

      Results from the LLM are for your eyes only.

    • The difference is that LLMs mess with our heuristics. They certainly aren't infallible, but over time we develop a sense for when someone is full of shit. The mix-and-match nature of LLMs hides that.

      1 reply →

  • I ask: What time is {unix timestamp}

    ChatGPT: a month in the future

    Deepseek: Today at 1:00

    What time is {unix timestamp2}

    ChatGPT: a month in the future +1min

    Deepseek: Today at 1:01, this time is 5min after your previous timestamp

    Sure let me trust these results...
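
    For what it's worth, a Unix timestamp is trivial to check locally instead of trusting either model; a two-line sketch in Python (the timestamp value below is just a placeholder):

        from datetime import datetime, timezone

        ts = 1722340800  # placeholder Unix timestamp (seconds since the epoch)
        print(datetime.fromtimestamp(ts, tz=timezone.utc))  # -> 2024-07-30 12:00:00+00:00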

    • Also, since I was testing a weather API, I was suspicious of ChatGPT's result. I would not expect weather data from a month in the future. That is why I asked Deepseek in the first place.

  • While true, trial and error is a great learning tool as well. I think in time we'll get to an LLM that is definitive in its answers.

  • >But now, you're wondering if ... hallucinated

    A simple solution is just to take <answer> and cut and paste it into Google and see if articles confirm it.

  • > for the low-low cost of $EYEWATERINGSUM per month.

    This part is the 2nd (or maybe 3rd) most annoying one to me. Did we learn absolutely nothing from the last few years of enshittification? Or Netflix? Do we want to run into a crisis in the 2030's where billionaires hold knowledge itself hostage as they jack up costs?

    Regardless of your stance, I'm surprised how little people are bringing this up.

  • Just have a second (cheap) model check if it can find any hallucinations. That should catch nearly all of them in my experience.
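
    Sketched very roughly, the cross-check is just a second API call that grades the first answer (a minimal sketch assuming the OpenAI Python SDK; the model names and prompt wording are illustrative placeholders, not a recommendation):

        from openai import OpenAI

        client = OpenAI()

        def answer_and_check(question: str) -> tuple[str, str]:
            # First (stronger) model answers the question.
            answer = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": question}],
            ).choices[0].message.content

            # Second (cheaper) model reviews that answer for unsupported claims.
            review = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{
                    "role": "user",
                    "content": f"Question: {question}\n\nAnswer: {answer}\n\n"
                               "List any claims in the answer that look unsupported or hallucinated.",
                }],
            ).choices[0].message.content

            return answer, review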

    • What is an efficient process for doing this? For each output from LLM1, you paste it into LLM2 and say "does this sound right?"?

      If it's that simple, is there a third system that can coordinate these two (and let you choose which two/three/n you want to use)?

      2 replies →

    • I realized that this is something that someone with Claude Code could reasonably easily test (at least exploratively).

      Generate 100 prompts of "Famous (random name) did (random act) in the year (random). Research online and elaborate on (random name)'s historical significance in (randomName)historicalSignificance.md. Don't forget to list all your online references".

      Then create another 100 LLMs with some hallucination-checker claude.md that checks their corresponding md for hallucinations and writes a report.md.

  • No you’re not, it’s right the vast, vast majority of the time. More than I would expect the average physics or chemistry teacher to be.

  • > But now, you're wondering if the answer the AI gave you is correct or something it hallucinated. Every time I find myself putting factual questions to AIs, it doesn't take long for it to give me a wrong answer.

    I know you'll probably think I'm being facetious, but have you tried Claude 4 Opus? It really is a game changer.

    • A game changer in which respect?

      Anyway, this makes me wonder if LLMs can be appropriately prompted to indicate whether the information given is speculative, inferred or factual. Whether they have the means to gauge the validity/reliability of their response and filter their response accordingly.

      I've seen prompts that instruct the LLM to make this transparent via annotations to their response, and of course they comply, but I strongly suspect that's just another form of hallucination.

  • What exactly did 2025 AI hallucinate for you? The last time I saw a hallucination from these things was a year ago. For questions that a kid or a student is going to ask, I'm not sure any reasonable person should be worried about this.

    • If the last time you saw a wrong answer was a year ago, then you are definitely regularly getting them and not noticing.

    • Just a couple of days ago, I submitted a few pages from the PDF of a PhD thesis written in French to ChatGPT, asking it to translate them into English. The first 2-3 pages were perfect, then the LLM started hallucinating, putting new sentences and removing parts. The interesting fact is that the added sentences were correct and generally on the spot: the result text sounded plausible, and only a careful comparison of each sentence revealed the truth. Near the end of the chapter, virtually nothing of what ChatGPT produced was directly related to the original text.

      1 reply →

    • I use it every day for work and every day it gets stuff wrong of the "that doesn't even exist" variety. Because I'm working on things that are complex + highly verifiable, I notice.

      Sure, Joe Average who's using it to look smart in Reddit or HN arguments or to find out how to install a mod for their favorite game isn't gonna notice anymore, because it's much more plausible much more often than two years ago, but if you're asking it things that aren't trivially easy for you to verify, you have no way of telling how frequently it hallucinates.

    • I had Google Gemini 2.5 Flash analyse a log file and it quoted content that simply didn't exist.

      It appears to me like a form of decoherence and very hard to predict when things break down.

      People tend to know when they are guessing. LLMs don't.

    • OpenAI's o3/4o models completely spun out when I was trying to write a tiny little TUI with ratatui; they couldn't handle writing a render function. No idea why. I spent like 15 minutes trying to get it to work and ended up pulling up the docs.

      I haven't spent any money with claude on this project and realistically it's not worth it, but I've run into little things like that a fair amount.

    • >Thanks all for the replies, we’re hardcoding fixes now

      -LLM devcos

      Jokes aside, get deep into the domains you know. Or ask to give movie titles based on specific parts of uncommon films. And definitely ask for instructions using specific software tools (“no actually Opus/o3/2.5, that menu isn’t available in this context” etc.).

    • Are you using them daily? I find that for maybe 3 or 4 programming questions I ask per day, they simply cannot provide a correct answer even after hand-holding. They often go through extreme gymnastics to try to gaslight you no matter how much proof you provide.

      For example, today I was asking an LLM how to configure a GH action to install an SDK version that recently went out of support. It kept hallucinating about my config, saying that when you provide multiple SDK versions in the config, it only picks the most recent. This is false. It's also mentioned specifically in the documentation, which I linked for the LLM, that it installs all the versions you list. When I explained this to Copilot, it kept doubling down, ignoring the docs, and even went as far as asking me to have the action output the installed SDKs, seeing all the ones I requested as installed, then gaslighting me by saying that it can print out the wrong SDKs with a `--list-sdks` command.

    • ChatGPT hallucinates things all the time. I will feed it info on something and have a conversation. At first it's mostly fine, but eventually it starts just making stuff up.

      2 replies →

    • For me, most commonly ChatGPT hallucinates configuration options and command line arguments for common tools and frameworks.

    • Two days ago when my boomer mother in law tried to justify her anti-cancer diet that killed Steve Jobs. On the bright side my partner will be inheriting soon by the looks of it.

      11 replies →

    • Last week I was playing with the jj VCS and it couldn't even understand my question (how to swap two commits).

  • If LLMs of today's quality were what was initially introduced, nobody would even know what your rebuttals are about.

    So "risk of hallucination" as a rebuttal to anybody admitting to relying on AI is just not insightful. Like, yeah, okay, we all heard of that and aren't changing our habits at all. Most of our teachers and books said objectively incorrect things too, and we are all carrying factually questionable knowledge we are completely blind to. Which makes LLMs "good enough" by the same standard as anything else.

    Don't let it cite case law? Most things don't need this stringent level of review

    • Agree, "hallucination" as an argument to not use LLMs for curiosity and other non-important situations is starting to seem more and more like tech luddism, similar to the people who told you to not read Wikipedia 5+ years after the rest of us realized it is a really useful resource despite occasional inaccuracies.

      2 replies →

The fear of asking stupid questions is real, especially if one has had a bad experience with humiliating teachers or professors. I recently saw a video of a professor subtly shaming and humiliating his students over their answers to his own online quiz. He teaches at a prestigious institution and has written a book with a very good reputation. I stopped watching his video lectures.

  • So instead of correcting the teachers with better training, we retreat from education and give it to technocrats? Why are we so afraid of punishing bad, unproductive, and even illegal behavior in 2025?

    • Looks like we were unable to correct them over the last 3k years. What has changed in 2025 that makes you think we will succeed in correcting that behavior?

      Not US-based, Central/Eastern Europe: selection into the teaching profession is negative, due to low salaries compared to the private sector; this means that the unproductive behaviors are likely to increase. I'm not saying AI is the solution here for low teacher salaries, but training is definitely not the right answer either, and "just train them better" is a super simplistic argument.

      3 replies →

    • At a system level, this totally makes sense. But as an individual learner, what would be my motivation to do so, when I can "just" actually learn my subject and move on?

      1 reply →

  • You might also be working with very uncooperative coworkers, or impatient ones


> Adding in a mode that doesn't just dump an answer but works to take you through the material step-by-step is magical

Except these systems will still confidently lie to you.

The other day I noticed that DuckDuckGo has an Easter egg where it will change its logo based on what you've searched for. If you search for James Bond or Indiana Jones or Darth Vader or Shrek or Jack Sparrow, the logo will change to a version based on that character.

If I ask Copilot if DuckDuckGo changes its logo based on what you've searched for, Copilot tells me that no it doesn't. If I contradict Copilot and say that DuckDuckGo does indeed change its logo, Copilot tells me I'm absolutely right and that if I search for "cat" the DuckDuckGo logo will change to look like a cat. It doesn't.

Copilot clearly doesn't know the answer to this quite straightforward question. Instead of lying to me, it should simply say it doesn't know.

  • This is endlessly brought up as if the human operating the tool is an idiot.

    I agree that if the user is incompetent, cannot learn, and cannot learn to use a tool, then they're going to make a lot of mistakes from using GPTs.

    Yes, there are limitations to using GPTs. They are pre-trained, so of course they're not going to know about some easter egg in DDG. They are not an oracle. There is indeed skill to using them.

    They are not magic, so if that is the bar we expect them to hit, we will be disappointed.

    But neither are they useless, and it seems we constantly talk past one another because one side insists they're magic silicon gods, while the other says they're worthless because they are far short of that bar.

  • It certainly should be able to tell you it doesn't know. Until it can though, a trick that I have learned is to try to frame the question in different ways that suggest contradictory answers. For example, I'd ask something like these, in a fresh context for each:

    - Why does Duckduckgo change its logo based on what you've searched?

    - Why doesn't Duckduckgo change its logo based on what you've searched?

    - When did Duckduckgo add the current feature that will change the logo based on what you've searched?

    - When did Duckduckgo remove the feature that changes the logo based on what you've searched?

    This is similar to what you did, but it feels more natural when I genuinely don't know the answer myself. By asking loaded questions like this, you can get a sense of how strongly this information is encoded in the model. If the LLM comes up with an answer without contradicting any of the questions, it simply doesn't know. If it comes up with a reason for one of them, and contradicts the other matching loaded question, you know that information is encoded fairly strongly in the model (whether it is correct is a different matter).

    • I see these approaches a lot when I look over the shoulders of LLM users, and find it very funny :D you're spending the time, effort, bandwidth and energy on four carefully worded questions to try and get a sense of the likelihood of the LLM's output resembling facts, when just a single, basic query with simple terms in any traditional search engine would give you a much more reliable, more easily verifiable/falsifiable answer. People seem so transfixed by the conversational interface smokeshow that they forgot we already have much better tools for all of these problems. (And yes, I understand that these were just toy examples.)

      2 replies →

Consider the adoption of conventional technology in the classroom. The US has spent billions on new hardware and software for education, and yet there has been no improvement in learning outcomes.

This is where the skepticism arises. Before we spend another $100 billion on something that ended up being worthless, we should first prove that it’s actually useful. So far, that hasn’t conclusively been demonstrated.

  • You appear to be implying that the $100 billion hardware and software must all be completely useless. I think the opposite conclusion is more likely: the structure of the education system actively hinders learning, so much so that even the hardware and software you talk about couldn't work against it.

  • The article states that Study Mode is free to use. Regardless of b2b costs, this is free for you as an individual.

  • Billions on tech, but not on making sure teachers can pay rent. Even the prestige or mission-oriented structure of teaching has been weathered over the decades as we decided to shame teachers as government-funded babysitters instead of the instructors of our future generations.

    Truly a mystery why America is falling behind.

I agree with all that you say. It's an incredible time indeed. The one thing I can't wrap my mind around is privacy. We all seem to be asking sometimes stupid and sometimes incredibly personal questions of these LLMs. Questions that, out of embarrassment or shame or other such emotions, we may not even speak out loud to our closest people. How are these companies using our data? More importantly, what are you all doing to protect yourself from misuse of your information? Or is it that if you want to use it, you have to give up that privacy and accept the discomfort?

  • People often bring up the incredible efficiency improvements of LLMs over the last few years, but I don't think people do a really good job of putting it into perspective just how much more efficient they have gotten. I have a machine in my home with a single RX 7900 XTX in it. On that machine, I am able to run language models that blow GPT-3.5 Turbo out of the water in terms of quality, knowledge, and even speed! That is crazy to think about when you consider how large and capable that model was.

    I can often get away with just using models locally in contexts that I care about privacy. Sometimes I will use more capable models through APIs to generate richer prompts than I could write myself to be able to better guide local models too.

> Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content and attempting to piece together mental models without the chance to receive immediate feedback on intuition or ask follow up questions.

That trained and sharpened invaluable skills involving critical thinking and grit.

  • > [Trawling around online for information] trained and sharpened invaluable skills involving critical thinking and grit.

    Here's what Socrates had to say about the invention of writing.

    > "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

    https://www.historyofinformation.com/detail.php?id=3439

    I mean, he wasn't wrong! But nonetheless I think most of us communicating on an online forum would probably prefer not to go back to a world without writing. :)

    You could say similar things about the internet (getting your ass to the library taught the importance of learning), calculators (you'll be worse at doing arithmetic in your head), pencil erasers (https://www.theguardian.com/commentisfree/2015/may/28/pencil...), you name it.

    • >I mean, he wasn't wrong! But nonetheless I think most of us communicating on an online forum would probably prefer not to go back to a world without writing. :)

      What social value is an AI chatbot giving to us here, though?

      >You could say similar things about the internet (getting your ass to the library taught the importance of learning)

      Yes, and as we speak countries are determining how to handle the advent of social media as this centralized means of propaganda, abuse vector, and general way to disconnect local communities. It clearly has a different magnitude of impact than etching on a stone tablet. The UK made a particularly controversial decision recently.

      I see AI more in that camp than in the one of pencil erasers.

    • >Here's what Socrates had to say about the invention of writing.

      I think you mean to say, "Here's what Plato wrote down that Socrates said"...

  • And it also taught people how to actually look for information online. The average person still does not know how to google; I still see people writing whole sentences in the search bar.

    • This is the "they're holding it wrong" of search engines. People want to use a search engine by querying with complete sentences. If search engines don't support such querying, it's the search engine that is wrong and should be updated, not the people.

      Search engines have gotten way better at handling complete sentences in recent years, to the point where I often catch myself deleting my keyword query and replacing it with a sentence before I even submit it, because I know I will be able to more accurately capture what it is I am searching for in a sentence.

      1 reply →

  • It didn’t. Only frustrated and slowed down students.

    • Sounds like somebody who disliked implementing QuickSort as a student because what's the point, there is a library for it, you'll never need to do that kind of thing "in the real world".

      Maybe someday an LLM will be able to explain to you the pedagogical value of an exercise.

LLMs, by design, are peak Dunning-Kruger, which means they can at best be a decent study partner for basic, introductory lessons and topics. Yet they still require handholding and thorough verification, because LLMs will spit out factually incorrect information with confidence and will fold on correct answers when prodded. Yet the novice does not possess the skill to handhold the LLM. I think there's a word for that, but chadgbt is down for me today.

Furthermore, the forgetting curve is a thing, and therefore having to piece information together repetitively, preferably in a structured manner, leads to much better information retention. People love to claim how fast they are "learning" (more like consuming TikToks) from podcasts at 2x speed and LLMs, but are unable to recite whatever was presented a few hours later.

Third, there was a paper circulating even here on HN that showed that use of LLMs literally hinders brain activation.

In my experience asking questions to Claude, the amount of incorrect information it gives is on a completely different scale in comparison to traditional sources. And the information often sounds completely plausible too. When using a text book, I would usually not Google every single piece of new information to verify it independently, but with Claude, doing that is absolutely necessary. At this point I only use Claude as a stepping stone to get ideas on what to Google because it is giving me false information so often. That is the only "effective" usage I have found for it, which is obviously much less useful than a good old-fashioned textbook or online course.

Admittedly I have less experience with ChatGPT, but those experiences were equally bad.

>I'm puzzled (but not surprised) by the standard HN resistance & skepticism

The good: it can objectively help you to zoom forward in areas where you don’t have a quick way forward.

The bad: it can objectively give you terrible advice.

It depends on how you sum that up on balance.

Example: I wanted a way forward to program a chrome extension which I had zero knowledge of. It helped in an amazing way.

Example: I keep trying to use it in work situations where I already have lots of context. It sometimes performs better than nothing, but often worse than nothing.

Mixed bag, that’s all. Nothing to argue about.

HN is resistant because at the end of the day, these are LLMs. They cannot and do not think. They generate plausible responses. Try this in your favorite LLM: "Suppose you're on a game show trying to win a car. There are three doors, one with a car and two with goats. You pick a door. The host then gives you the option to switch doors. What is the best strategy in this situation?" The LLM will recognize this as SIMILAR to the Monty Hall problem and tell you to always switch. I just reproduced this with ChatGPT.

But this is completely wrong! In the Monty Hall problem, the host has to reveal a door with a goat behind it for you to gain the benefit of switching. I have to point this out for the LLM to get it right. It did not reason about the problem I gave it, it spat out the most likely response given the "shape" of the problem.
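
For the record, a quick Monte Carlo check of the variant as stated (no door is revealed) confirms there is no benefit to switching. A minimal Python sketch, assuming the switcher picks one of the other two doors at random:

    import random

    def play(switch: bool) -> bool:
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        if switch:
            # No door was revealed, so "switching" just means choosing one of
            # the two remaining doors at random.
            pick = random.choice([d for d in doors if d != pick])
        return pick == car

    trials = 100_000
    for switch in (False, True):
        wins = sum(play(switch) for _ in range(trials))
        print(f"switch={switch}: win rate ~ {wins / trials:.3f}")  # both come out near 1/3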

This is why shrugging and saying "well humans get things wrong too" is off base. The problem is that the LLM is not thinking, period. So it cannot create a mental model of your understanding of a subject, it is taking your text and generating the next message in a conversation. This means that the more niche the topic (or your particular misunderstanding), the less useful it will get.

  • That's because the LLM assumes you mean for the host to open the door.

    As an autistic person I can assure you that people filling in information you neither gave nor implied is the norm for human conversations. In that sense, the LLMs are being human-like.

    If you explicitly tell it you didn't forget to have the host open the door it gives you the right answer:

    > There is a game show with three closed doors. One has a car and two have goats. I pick a door. Then, without opening my door or any other door, the host asks me if I want to switch my choice to one of the other two doors. What is my best strategy and what are the probabilities?

    Gemini: This is a classic probability puzzle known as the Monty Hall problem, though your description presents a slight variation. The optimal strategy depends entirely on whether the host opens a door or not.

    Your Scenario: The Host Doesn't Open a Door. In the situation you've described, where the host asks if you want to switch your choice to one of the other two doors without opening any doors, there is no advantage to switching. Here's a breakdown of the probabilities in your specific scenario:

    Your initial choice: When you first pick a door, you have a 1 in 3 chance of selecting the door with the car. This means you have a 2 in 3 chance of having picked a door with a goat.

    The other two doors: The combined probability that the car is behind one of the other two doors is 2/3.

    Switching your choice: If you decide to switch, you are essentially choosing one of the other two doors. Since there is no new information (a door with a goat has not been revealed), the 2/3 probability is evenly split between those two doors. Therefore, your chance of winning by switching to a specific one of the other doors is 1/3.

    In this case, your odds of winning are 1/3 whether you stay with your original choice or switch to one of the other doors.

    • > That's because the LLM assumes you mean for the host to open the door.

      LLMs cannot "assume". There is no thinking involved. It sees that the prompt looks like the Monty Hall problem and just goes full steam ahead.

      >If you explicitly tell it you didn't forget to have the host open the door it gives you the right answer:

      That should not be necessary. I asked it a very clear question. I did not mention Monty Hall. This is the problem with LLMs: it did not analyze the problem I gave it; it produced content that is the likely response to my prompt. My prompt was Monty Hall-shaped, so it gave me the Monty Hall answer.

      You are saying "ah but then if you prepare for the LLM to get it wrong, then it gets it right!" as if that is supposed to be convincing! Consider the millions of other unique questions you can ask, each with their own nuances, that you don't know the answer to. How can you prevent the LLM from making these mistakes if you don't already know the mistakes it's going to make?

      1 reply →

  • Humans who have heard of Monty Hall might also say you should always switch without noticing that the situation is different. That's not evidence that they can't think, just that they're fallible.

    People on here always assert LLMs don't "really" think or don't "really" know without defining what all that even means, and to me it's getting pretty old. It feels like an escape hatch so we don't feel like our human special sauce is threatened, a bit like how people felt threatened by heliocentrism or evolution.

    • > Humans who have heard of Monty Hall might also say you should always switch without noticing that the situation is different. That's not evidence that they can't think, just that they're fallible.

      At some point we start playing a semantics game over the meaning of "thinking", right? Because if a human makes this mistake because they jumped to an already-known answer without noticing a changed detail, it's because (in the usage of the person you're replying to), the human is pattern matching, instead of thinking. I don't think this is surprising. In fact I think much of what passes for thinking in casual conversation is really just applying heuristics we've trained in our own brains to give us the correct answer without having to think rigorously. We remember mental shortcuts.

      On the other hand, I don't think it's controversial that (some) people are capable of performing the rigorous analysis of the problem needed to give a correct answer in cases like this fake Monty Hall problem. And that's key... if you provide slightly more information and call out the changed nature of the problem to the LLM, it may give you the correct response, but it can't do the sort of reasoning that would reliably give you the correct answer the way a human can. I think that's why the GP doesn't want to call it "thinking" - they want to reserve that for a particular type of reflective process that can rigorously perform logical reasoning in a consistently valid way.

      1 reply →

    • On the other hand, computers are supposed to be both accurate and able to reproduce said accuracy.

      The failure of an LLM to reason this out is an indication that, really, it isn't reasoning at all. It's a subtle but welcome reminder that it's pattern matching.

      7 replies →

    • >People on here always assert LLMs don't "really" think or don't "really" know without defining what all that even means,

      Sure.

      To Think: able to process information in a given context and arrive at an answer or analysis. An LLM only simulates this with pattern matching. It didn't really consider the problem; it did the equivalent of googling a lot of terms and then spat out something that sounded like an answer.

      To Know: to reproduce information based on past thinking, as well as to properly verify and reason with that information. I know 1+1 = 2 because (I'm not a math major, feel free to inject number theory instead) I was taught that arithmetic is a form of counting, and I was taught the mechanics of counting to prove how to add. Most LLMs don't really "know" this to begin with, for the reasons above. Maybe we'll see if this study mode is different.

      Somehow I am skeptical if this will really change minds, though. People making swipes at the community like this often are not really engaging in a conversation with ideas they oppose.

      5 replies →

  • LLMs are vulnerable to your input because they are still computers, but you're setting it up to fail with how you've given it the problems. Humans would fail in similar ways. The only thing you've proven with this reply is that you think you're clever, but really, you are not thinking, period.

    • And if a human failed on this question, that's because they weren't paying attention and made the same pattern matching mistake. But we're not paying the LLM to pattern match, we're paying them to answer correctly. Humans can think.

      1 reply →

  • I use the Monty Hall problem to test people in two steps. The second step is, after we discuss it and come up with a framing that they can understand, can they then explain it to a third person. The third person rarely understands, and the process of the explanation reveals how shallow the understanding of the second person is. The shallowest understanding of any similar process that I've usually experienced is an LLM.

    • I am not sure how good your test really is. Or at least how high your bar is.

      Paul Erdös was told about this problem with multiple explanations and just rejected the answer. He could not believe it until they ran a simulation.

      3 replies →

It's quite boring to listen to people praising AI (worshipping it, putting it on a pedestal, etc). Those who best understand the potential of it aren't doing that. Instead they're talking about various specific things that are good or bad, and they don't go out of their way to lick AI's boots, but when they're asked they acknowledge that they're fans of AI or bullish on it. You're probably misreading a lot of resistance & skepticism on HN.

> I'm puzzled (but not surprised) by the standard HN resistance & skepticism.

It happens with many technological advancements historically. And in this case there are people trying hard to manufacture outrage about LLMs.

  • Regardless of stance, I sure do hate being gaslit on how I'm supposed to think of content on any given topic. A disagreeable point of view is not equivalent to "manufacturing outrage".

Yeah, I've been a game dev forever and had never built a web app in my life (even in college). I recently completed my first web-app contract, and GPT was my teacher. I have no problem asking stupid questions; tbh, asking stupid questions is a sign of intelligence imo. But where is there even to ask these days? Stack Overflow may as well not exist.

  • Right on. A sign of intelligence but more importantly of bravery, and generosity. A person that asks good questions in a class improves the class drastically, and usually learns more effectively than other students in the class.

  • >Stack Overflow may as well not exist.

    That mentality seems to be more about reinforcing your insistence on ChatGPT than an inquiry into communities that could help you out.

  • > But where is there to even ask these days?

    Stack overflow?

    The IRC, Matrix or slack chats for the languages?

    • People like that never wanted to interact with anyone to begin with. And somehow they were too lazy to google the decades of articles until ChatGPT came in to save their lives.

The freedom to ask "dumb" questions without judgment is huge, and it's something even the best classrooms struggle to provide consistently

  • I sometimes intentionally ask naive questions, even if I think I already know the answer. Sometimes the naive question provokes a revealing answer that I had not even considered. Asking naive questions is a learning hack!

20 years ago I used to hang out in IRC channels where I learnt so much. I wasn't afraid of asking stupid questions. These bots are pale imitation of that.

I've learnt a great many things online, but I've also learnt a great many more from books, other people and my own experience. You just have to be selective. Some online tutorials are excellent, for example the Golang and Rust tutorials. But for other things books are better.

What you are missing is the people. We used to have IRC and forums where you could discuss things in great depth. Now that's gone, the web is owned by big tech and governments, and you're happy to accept a bot instead. It's sad, really.

I know some Spanish - close to B1. I find ChatGPT to be a much better way to study than the standard language apps. I can create custom lessons, ask questions about language nuances etc. I can also have it speak the sentences and practice pronunciation.

> Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content

What's funny is that LLMs got trained on datasets that include all that incorrect, outdated or hostile content.

> Should we trust the information at face value without verifying from other sources? Of course not, that's part of the learning process.

It mostly isn't; the point of a good learning process is to invest time into verifying "once" and then add verified facts to the learning material, so that learners can spend that time learning the material instead of verifying everything again.

Learning to verify is also important, but it's a different skill that doesn't need to be practiced literally every time you learn something else.

Otherwise you significantly increase the costs of the learning process.

> An underrated quality of LLMs as a study partner is that you can ask "stupid" questions without fear of embarrassment.

Not underrated at all. Lots of people were happy to abandon Stack Overflow for this exact reason.

> Adding in a mode that doesn't just dump an answer but works to take you through the material step-by-step is magical

I'd be curious to know how significantly this differs from just a custom, academically minded GPT with an appropriately tuned system prompt.

https://chatgpt.com/gpts

>Should we trust the information at face value without verifying from other sources? Of course not, that's part of the learning process. Will some (most?) people rely on it lazily without using it effectively? Certainly, and this technology won't help or hinder them any more than a good old fashioned textbook.

Not true if we make the assumption that most books from publishing houses with good reputations are verified for errors. Good books may be dated, but they don't contain made-up things.

Skepticism is great, it means less competition. I'm forcing everyone around me to use it.

>Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content and attempting to piece together mental models without the chance to receive immediate feedback on intuition or ask follow up questions. This is leaps and bounds ahead of that experience.

Researching online properly requires cross-referencing, seeing different approaches, and understanding the various strengths, weaknesses, and biases among such sources.

And that's for objective information, like math and science. I thought Grok's uhh... "update" showed enough of the dangers of resorting to a billionaire-controlled oracle as an authoritative resource.

>Will some (most?) people rely on it lazily without using it effectively? Certainly, and this technology won't help or hinder them any more than a good old fashioned textbook.

I don't think facilitating bad habits like lazy study is an effective argument. And I don't really subscribe to this inevitability angle either: https://tomrenner.com/posts/llm-inevitabilism/

A lot of the comments have to do with how one uses these things to speed up learning. I've tried a few things. A couple of them are prompts: 1. Make me a tutorial on ... 2. Make probes to quiz me along the way ...

I think the trick is to look at the references that the model shows you. E.g., o3 with web search will give you lots of references. 90% of the time, just reading those tells me if the model and I are aligned.

For example, the other day I was figuring out why, when using SQLAlchemy sessions and async pytest tests, I might get the "Connection was attached to different loop" error. Now, if you just asked o3 to give you a solution, you would take a long time, because it would make small mistakes in the code and you would spend a lot of time trying to fix them. A better way to use o3 was to ask it to give you debugging statements (session listeners attached to SQLAlchemy sessions) and understand, by reading the output, what was going on. Much faster.

Once it (and I) started looking at the debugging statements, the error became clear: the sessions/connections were leaking to a different event loop, and a loop_scope= param needed to be specified for all fixtures. o3 did not provide a correct solution for the code, but I could, and its help was crucial in writing a fuck ton of debugging code and getting clues.
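
For reference, the shape of the eventual fix was roughly the following (a minimal sketch assuming pytest-asyncio >= 0.24, which is where the loop_scope parameter lives; the database URL and names are placeholders, not the actual project code):

    import pytest
    import pytest_asyncio
    from sqlalchemy.ext.asyncio import create_async_engine

    @pytest_asyncio.fixture(loop_scope="session", scope="session")
    async def engine():
        # Run the fixture on the session-scoped event loop so connections are
        # not created on one loop and awaited on another.
        engine = create_async_engine("sqlite+aiosqlite:///:memory:")  # placeholder URL
        yield engine
        await engine.dispose()

    @pytest.mark.asyncio(loop_scope="session")
    async def test_query(engine):
        async with engine.connect() as conn:
            ...  # queries here run on the same loop as the fixture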

I also asked o3 to make a bunch of probe questions to test me. For example, it said something like: try changing the loop_scope from module to function; what do you expect the loop id and transaction id to be for this test?

I learned more than I realized about ORMs, how they can be used to structure transactions, and how to structure async pytest tests.

One thing I'm trying these days is to have it create a memory palace from all the stuff I have in my house and link it to a new concept I'm learning and put it into an anki decks.

Firstly, I think skepticism is a healthy trait. It's OK to be a skeptic. I'm glad there are a lot of skeptics because skepticism is the foundation of inquiry, including scientific inquiry. What if it's not actually Zeus throwing those lightning bolts at us? What if the heliocentric model is correct? What if you actually can't get AIDS by hugging someone who's HIV positive? All great questions, all in opposition to the conventional (and in some cases "expert") wisdom of their time.

Now in regards to LLMs, I use them almost every day, so does my team, and I also do a bit of postmortem and reflection on what was accomplished with them. So, skeptical in some regards, but certainly not behaving like a Luddite.

The main issue I have with all the proselytization about them is that I think people compare getting answers from an LLM to getting answers from Google circa 2022-present. Everyone became so used to just asking Google questions, and then Google started getting worse every year; we have pretty solid evidence that Google's results have deteriorated significantly over time. So I think that when people say the LLM is amazing for getting info, they're comparing it to a low baseline. Yeah, maybe the LLM's periodically incorrect answers are better than Google - but are you sure they're better than just RTFM'ing? (Obviously, it all depends on the inquiry.)

The second, related issue I have is that we are starting to see evidence that the LLM inspires more trust than it deserves due to its humanlike interface. I recently started to track how often Github Copilot gives me a bad or wrong answer, and it's at least 50% of the time. It "feels" great though because I can tell it that it's wrong, give it half the answer, and then it often completes the rest and is very polite and nice in the process. So is this really a productivity win or is it just good feels? There was a study posted on HN recently where they found the LLM actually decreases the productivity of an expert developer.

So I mean I'll continue to use this thing but I'll also continue to be a skeptic, and this also feels like kinda where my head was with Meta's social media products 10 years ago, before I eventually realized the best thing for my mental health was to delete all of them. I don't question the potential of the tech, but I do question the direction that Big Tech may take it, because they're literal repeat offenders at this point.

  • >So is this really a productivity win or is it just good feels?

    Fairly recent study on this: LLMs made developers slightly less productive, but the developers themselves felt more productive with them: https://www.theregister.com/2025/07/11/ai_code_tools_slow_do...

    There is definitely this pain point that some people talk about (even in this thread), along the lines of "well, at least AI doesn't berate me or reject my answer for bureaucratic reasons". And I find that intriguing in a community like this. Even some extremely techy people (or especially?) just want to, at best, feel respected, or at worst want to have their own notions confirmed by someone they deem to be "smart".

    >I don't question the potential of the tech, but I do question the direction that Big Tech may take it, because they're literal repeat offenders at this point.

    And that indeed is my biggest reservation here. Even if AI can do great things, I don't trust the incentive models OpenAI has. Instead of potentially being this bastion of knowledge, it may be yet another vector of trying to sell you ads and steal your data. My BOTD is long gone now.

    • Yeah I mean at this point, the tech industry is not new, nor is its playbook. At least within B2C, sooner or later everything seems to degenerate into an adtech model. I think it's because the marginal cost of software distribution is so low - you may as well give it away for free all the way up to the 8 billion population cap, and then monetize them once they're hooked, which inevitably seems to mean showing them ads, reselling what you know about them, or both.

      What I have seen nobody come even NEAR to talking about is, why would OpenAI not follow this exact same direction? Sooner or later they will.

      Things might pan out differently if you're a business - OpenAI already doesn't train its models on enterprise accounts, I imagine enterprise will take a dim view to being shown ads constantly as well, but who knows.

      But B2C will be a cesspit. Just like it always ends up a cesspit.

> Certainly, and this technology won't help or hinder them any more than a good old fashioned textbook.

Except that the textbook was probably QA’d by a human for accuracy (at least any intro college textbook, more specialized texts may not have).

Matters less when you have background in the subject (which is why it’s often okay to use LLMs as a search replacement) but it’s nice not having a voice in the back of your head saying “yeah, but what if this is all nonsense”.

  • > Except that the textbook was probably QA’d by a human for accuracy

    Maybe it was not when printed in the first edition, but at least it was the same content shown to hundreds of people rather than something uniquely crafted for you.

    The many eyes looking at it will catch errors and course-correct, while the LLM output does not get the benefit of that error-correction process, because someone who knows the answer probably won't ask and check it.

    I feel this way about reading maps vs. following GPS navigation; the fact that Google asked me to take an exit here as a shortcut feels like it might be trying to solve Braess's paradox in real time.

    I wonder if this route was made for me to avoid my car adding to some congestion somewhere and whether if that actually benefits me or just the people already stuck in that road.

There is no skepticism. LLMs are fundamentally lossy and as a result they’ll always give some wrong result/response somewhere. If they are connected to a data source, this can reduce the error rate but not eliminate it.

I use LLMs but only for things that I have a good understanding of.

I think both sides seem to have the same issues with the other. One side is sceptical that the other is getting good use from LLMs, and the other suggests they're just not using it correctly.

Both sides think the other is either exaggerating or just not using the tool correctly.

What both sides should do is show evidence in the form of chat extracts or videos. There are a number from the pro-LLM side, but obviously selection bias applies here. It would be interesting if the anti-LLM side started to post more negative examples (real chat extracts or videos).

> we have access to incredible tools like this

At what cost? Are you considering all the externalities? What do you think will happen when Altman (and their investors) decides to start collecting their paychecks?

It's not just "stupid" questions.

In my experience, most educational resources are either slightly too basic or slightly too advanced, particularly when you're trying to understand some new and unfamiliar concept. Lecturers, YouTubers and textbook authors have to make something that works for everybody, which means they might omit information you don't yet know while teaching you things you already understand. This is where LLMs shine: if there's a particular gap in your knowledge, they can help you fill it and get you unstuck.

>I'm puzzled (but not surprised) by the standard HN resistance & skepticism

Thinking back, I believe the change from enthusiasm to misanthropy (mis[ai]thropy?) happened around the time, and in increasing proportion to, it became a viable replacement for some of the labor performed by software devs.

Before that, the tone was more like "The fact is, if 80% of your job or 80% of its quality can be automated, it shouldn't be a job anymore."

  • I think it's just that there's been enough time and the magic has worn off. People have used it enough now, and everybody has formed their own impressions. They were initially so transfixed that they didn't question the responses. Now people are doing that more often, and realising that likelihood of co-occurrence isn't a good measure of factuality. We've realised that the number of human jobs where it can reach 8%, let alone 80%, of quality is vanishingly small.

I am just surprised they used an example requiring calculation/math; in that area the results are very much mixed. Otherwise it is of course a big help.

Knowing myself, it perhaps wasn't so bad that I didn't have such tools; it depends on the topic. I couldn't imagine writing a thesis without an LLM anymore.

There might not be any stupid questions, but there's plenty of perfectly confident stupid answers.

https://www.reddit.com/r/LibreWolf/s/Wqc8XGKT5h

  • Yeah, this is why wikipedia is not a good resource and nobody should use it. Also why google is not a good resource, anybody can make a website.

    You should only trust going into a library and reading stuff from microfilm. That's the only real way people should be learning.

    /s

    • So, do you want to actually have a conversation comparing ChatGPT to Google and Wikipedia, or do you just want to strawman typical AI astroturfing arguments with no regard to the context above?

      Ironic, as you are answering someone who talked about correcting a human who blindly pasted an answer to their question with no human verification.

    • Ah yes, the thing that told people to administer insulin to someone experiencing hypoglycemia (likely fatal BTW) is nothing like a library or Google search, because people blindly believe the output because of the breathless hype.

      See Dunning-Kruger.

Yeah. I’ll take this over the “you’re doing it wrong” condescension of comp.lang.lisp, or the Debian mailing list. Don’t even get me started on the systemd channels back in the day.

On the flip side, I prefer the human touch of the Kotlin, Python, and Elixir channels.

>Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content

Learning what is like that? MIT OpenCourseWare has been available for more than two decades, with anything you could want to learn in college.

Textbooks are all easily pirated

> Should we trust the information at face value without verifying from other sources? Of course not, that's part of the learning process.

People who are learning a new topic are precisely the people least able to do this.

A friend of mine used chatgpt to try to learn calculus. It gave her an example...with constants changed in such a way that the problem was completely different (in the way that 1/x^2 is a totally different integration problem than 1/(x^2 + 1)). It then proceeded to work the problem incorrectly (ironically enough, in exactly the way that I'd expect a calculus student who doesn't really understand algebra to do it incorrectly), produced a wrong answer, and merrily went on to explain to her how to arrive at that wrong answer.
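
To make concrete why that change of constants matters (a minimal worked comparison of my own, not taken from her chat): the two integrals call for entirely different techniques.

\int \frac{1}{x^2}\,dx = -\frac{1}{x} + C \qquad\text{vs.}\qquad \int \frac{1}{x^2+1}\,dx = \arctan(x) + C

The first is a one-step power-rule integral; the second requires spotting the arctangent form (or doing a trig substitution), so a walkthrough written for one is actively misleading for the other.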

The last time I tried to use an LLM to analyze a question I didn't know the answer to (analyze a list of states to which I couldn't detect an obvious pattern), it gave me an incorrect answer that (a) did not apply to six of the listed states, (b) DID apply to six states that were NOT listed, even though I asked it for an exclusive property, (c) miscounted the elements of the list, and (d) provided no less than eight consecutive completely-false explanations on followup, only four of which it caught itself, before finally giving up.

I'm all for expanding your horizons and having new interfaces to information, but reliability is especially important when you're learning (because otherwise you build on broken foundations). If it fails at problems this simple, I certainly don't trust it to teach me anything in fields where I can't easily dissect bullshit. In principle, I don't think it's impossible for AI to get there; in practice, it doesn't seem to be.

Another quality is that everything is written down. To me, having a text to discuss against, with the discussion itself recorded in text form, is one of the strongest supports someone can get when learning.

> Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content

Also using OpenAI as a tutor means trawling incorrect content.

I'd like to share a little bit of experience about learning from human teachers.

Here in my country, English is not something you'll hear in everyday conversation. Native English speakers account for a tiny percentage of the population. Our language doesn't resemble English at all. However, English is a required subject in our mandatory education system. I believe this situation is quite typical across many Asian countries.

As you might imagine, most English teachers in public schools are not native speakers. And they, just like other language learners, make mistakes that native speakers wouldn't make, without even realizing what's wrong. This creates a cycle reinforcing non-standard English pragmatics in the classroom.

Teachers are not to blame. Becoming fluent and proficient enough in a second language to handle the questions students spontaneously throw at you takes years, if not decades, of immersion. It's an unrealistic expectation for an average public school teacher.

The result is that rich parents either send their kids to private schools or have them take extra classes taught by native speakers after school. Poorer but smart kids realize the education system is broken and learn their second language from YouTube.

-

What's my point?

When it comes to math/science, in my experience, the current LLMs act similarly to the teachers in public school mentioned above. And they're worse in history/economics. If you're familiar with the subject already, it's easy to spot LLM's errors and gather the useful bits from their blather. But if you're just a student, it can easily become a case of blind-leading-the-blind.

It doesn't make LLMs completely useless in learning (just like I won't call public school teachers 'completely useless', that's rude!). But I believe in the current form they should only play a rather minor role in the student's learning journey.

HN’s fear is the same job-security fear we’ve been seeing since the beginning of all this. You’ll see this on programming subs on Reddit as well.

  • Can we not criticize tech without being considered Luddites anymore? I don't fear for my job over AI replacement; it is just fundamentally wrong on many answers.

    In my field there are also the moral and legal implications of generative AI.

On HN I find most people to be high IQ, low EQ:

high IQ enough that they really do find the holes in LLM capabilities in their own industries,

low EQ enough that they interpret it only through their own experiences instead of seeing how other people's quality of life has improved.

> A tireless, capable, well-versed assistant

Correction: a tireless, capable, well-versed, sycophantic assistant that is often prone to inventing absolute bullshit.

> ...is an autodidact's dream

Not so sure about that, see above.

It does go both ways. You can ask stupid questions without fear of embarrassment or ruined reputation, and it can respond with stupid answers without fear of embarrassment or ruined reputation.

It can confidently spew completely wrong information and there's no way to tell when it's doing that. There's a real risk that it will teach you a complete lie based on how it "thinks" something should work, and unlearning that lie will be much harder than just learning the truth initially.

> Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content

... your "AI" is also trained on the above incorrect, outdated or hostile content ...

Agreed, it would have been a godsend for those of us who were not as fast as the others and were eventually left behind by the usual schooling system.

Besides, there isn’t any of the usual privacy drawback, because no one cares if OpenAI learns about some bullshit you were told to learn.

  • >Besides, there isn’t any of the usual privacy drawback, because no one cares if OpenAI learns about some bullshit you were told to learn

    you didn't see the Hacker News thread about the ChatGPT subpoena, did you? I was a bit shocked that 1) a tech community didn't think a company would store data you submit to their servers, and 2) that they felt like some lawyers and judges reading their chat logs was some intimate invasion of privacy.

    Let's just say I certainly cannot be arsed to read anyone else's stream of consciousness without being paid like a lawyer. I deal with kids, and it's a bit cute when they babble about semi-coherent topics. An adult clearly loses that cute appeal and just sounds like a madman.

    That's not even a dig; I sure suck at explaining my mindspace too. It's a genuinely hard skill to convert thoughts into interesting, or even sensible, communication.

> An underrated quality of LLMs as study partner is that you can ask "stupid" questions without fear of embarrassment.

Even more important for me, as someone who did ask questions but less and less over time, is this: with GPTs I no longer have to see the passive-aggressive banner saying

> This question exists for historical reasons, not because it’s a good question.

all the time on other people's questions, and typically on the best questions with the most useful answers there were.

As much as I have mixed feelings about where AI is heading, I’ll say this: I’m genuinely relieved I don’t need to rely on Stack Overflow anymore.

It is also deeply ironic how Stack Overflow alienated a lot of users in the name of inclusion (the Monica case) while, all along, they themselves were the ones who really made people like me uncomfortable.