
Comment by jimbokun

3 days ago

Best quote from the article:

> That’s also why I see no point in using AI to, say, write an essay, just like I see no point in bringing a forklift to the gym. Sure, it can lift the weights, but I’m not trying to suspend a barbell above the floor for the hell of it. I lift it because I want to become the kind of person who can lift it. Similarly, I write because I want to become the kind of person who can think.

I don't really like the assumption that anyone who uses AI to, say, write an essay, is not the "kind of person who can think."

And using AI to replace things you find recreational is not the point. If you got paid $100 each time you lifted a weight, would you see a point in bringing a forklift to the gym if it were allowed? Or would that make you a person so dumb they cannot think, as the author is implying?

  • As capable as these things get, I still don't see a lot of uses for them myself. Sometimes, if I'm fundamentally uninspired, I'll have a model roll the dice and decide what I do or don't like about where it went, just to create a sense of momentum, but that's the limit. There's never any of its output in my output, even in spirit, unless it managed to go somewhere genuinely inspiring; it's just a way to warm up my generation and discrimination muscles. "Someone is wrong on the internet"-as-a-service, basically.

    Generally, if I come across an opportunity to produce ideas or output, I want to capitalize on it to grow my skills and produce an individual, authentic artistic expression, with the kind of fine control over the output that prompt-tweak-verify simply cannot provide.

    I don't value the parts it fills in that weren't intentional on the part of the prompter; just send me your prompt instead. I'd rather have a crude sketch and a description than a high-fidelity image that obscures them.

    But I'm also the kind of person who never enjoyed manufactured pop music or blockbusters unless there's a high concept or technical novelty in addition to the high budget, and who generally prefers experimental indie stuff, so maybe there's something I just can't see.

    • Yeah, that makes sense. If people don't see uses for AI, they shouldn't use it. But going out of your way to imply that people who use AI cannot think is pretty stupid in itself, imo. I'm not sure how to put this, but to continue with your example: I like a lot of indie stuff as well, but I don't think anyone who watches, say, Fast and Furious, cannot think or is stupid, unless they say something that explicitly makes it the case.

      So my issue is that you shouldn't dismiss something as trash just because AI was used in it. You should dismiss it as trash because it is trash. But the post says you should dismiss it as trash because AI was involved somewhere, and I feel that's a very shitty/wrong attitude to have.


  • The same person could use a forklift at work, and lift weights manually at the gym.

    Just pick the right tool for the job: don't take the forklift into the gym, and don't try to overhead press thousands of pounds that would fracture your spine.

  • I notice you make no concrete defense of the value of having an AI write an essay for you.

    • I’m not trying to claim AI-written essays are inherently “valuable” in some grand philosophical sense... just that using a tool doesn’t automatically mean someone can’t think.

      People use calculators without that meaning they can't do maths, and use spellcheck without that meaning they can't spell.

      AI can help some people get past the blank-page phase or organize thoughts they already have. For others, it's just a way to offload the routine parts so they can focus on the substance.

      If someone just outsources everything to an AI, sure, there's not much growth there. But the existence of bad use cases doesn't invalidate the reasonable ones.

If you're writing an essay to prove you can, or to speak in your own words, then you should do it yourself. But sometimes you just need an essay that summarizes a complex topic as a deliverable.

Though most people either don't get it, or are lay people who do not want to become the kind of people who can think. I go with the second one.

  • Russ Hanneman's thigh implants are a key example. Appearances are everything to some people. Actual growth is meaningless to them.

    The problem with AI is that it wastes the time of dedicated, thinking humans who care to improve themselves. If I write a three-paragraph email on a technical topic, and some yahoo responds with AI, I'm now responding to gibberish.

    The other side may not have read it, may not understand it, and is just interacting to save time. Now my generous nature, which is to help others and interact positively, is being wasted on replying to someone who seems to have put thought and care into a response but was actually just copying and pasting what something else output.

    We have issues with crackers on the net. We have social media. We have political interference. Now we have humans pretending to interact, rendering online interactions even more silly and harmful.

    If this trend continues, we'll move back to live interaction just to reduce this time waste.

  • If the motivation structure is there, I don't see an inherent reason for people to refuse to cultivate themselves. Going with the gym analogy: lay people did not need gyms when physical work was the norm; cultivation was readily accomplished.

    If anything, there is a competing motivational structure in which people are incentivized not to think but to consume, react, emote, etc. The deliberate erosion/hijacking/bypassing of individuals' information-processing skills is not an AI thing. The most obvious example is ads. Thinkers are simply not good for business.

    • The gym is a great analogy here, since only a small fraction of the population goes to gyms. Most people just got fat after work was no longer physical and mobility was achieved with cars.

Below is the worst quote... It is plain wrong to see an LLM as a bag of words. LLMs pre-trained on large datasets of text are world models. LLMs post-trained with RL are RL agents that use these modeling capabilities.

> We are in dire need of a better metaphor. Here’s my suggestion: instead of seeing AI as a sort of silicon homunculus, we should see it as a bag of words.
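
To make the disagreement concrete, here's a toy sketch of my own (not from the article) of why the "bag of words" metaphor understates what a language model does: a literal bag of words throws away token order entirely, while even the crudest language model conditions on it.

```python
from collections import Counter

docs = ["the dog bit the man", "the man bit the dog"]

# Bag-of-words view: both sentences collapse to the same multiset of tokens,
# so a literal "bag of words" cannot tell who bit whom.
print([Counter(d.split()) for d in docs])  # two identical counters

# Language-model view: even a toy bigram model keeps enough order to
# distinguish them, because it conditions each token on the previous one.
for d in docs:
    tokens = d.split()
    print(list(zip(tokens, tokens[1:])))  # ('dog', 'bit') vs. ('man', 'bit')
```

Whether that order-sensitivity scales up to a genuine "world model" is exactly what the replies below dispute.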

  • LLMs aren't world models, they are language models. It will be interesting to see which of the LLM implementation techniques will be useful in building world models, but that's not what we have now.

    • Can you give an example of some part of the physical world or infosphere that an LLM can't model, at least approximately?

  • When you see a dog, or describe the entity, do you discuss the genetic makeup or the bone structure?

    No, you describe the bark.

    The end result is what counts. Training or not, it's just spewing predictive, relational text.