Comment by dual_dingo

2 years ago

The "I" in AI is just complete bullshit and I can't understand why so many people are in a awe of a bit of software that chains words to another based on some statistical model.

The sad truth is that ChatGPT is about as good an AI as ELIZA was in 1966, it's just better (granted: much better) at hiding its total lack of actual human understanding. It's nothing more than an expensive parlor trick, IMHO.

Github CoPilot? Great, now I have to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...) while writing new code. I'm beyond thrilled ...

So, no, I don't have an AI fatigue, because we absolutely have no AI anywhere. But I have a massive bullshit and hype fatigue that is getting worse all the time.

I'm more fatigued by people denying the obvious: that ChatGPT and similar models are revolutionary. People have been fantasizing about the dawn of AI for almost a century, and no one managed to predict the rampant denialism of the past few months.

I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

  • There's a fellow that kinda predicted it in 1950 [0]:

    > These arguments take the form, "I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X."

    > [...]

    > The criticisms that we are considering here are often disguised forms of the argument from consciousness. Usually if one maintains that a machine can do one of these things, and describes the kind of method that the machine could use, one will not make much of an impression.

    Every time "learning machines" are able to do a new thing, there's a "wait, it is just mechanical; _real_ intelligence is the goalpost".

    [0] https://www.espace-turing.fr/IMG/pdf/Computing_Machinery_and...

    • >Every time "learning machines" are able to do a new thing, there's a "wait, it is just mechanical; _real_ intelligence is the goalpost".

      Just because people shift the goalposts doesn't mean that the new position of the goalposts isn't closer to being correct than the old position. You can criticise the people for being inconsistent or failing to anticipate certain developments, but that doesn't tell you anything about where the goalposts should be.

  • > I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

    It's important to note that this is your assumption which I believe to be wrong (for most people here).

  • > I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

    Respectfully, that reads as needlessly combative within the context. It sounds like the blockchain proponents who say that the only people who are against cryptocurrencies are the ones who are “bitter for having missed the boat”.¹

    It is possible and perfectly reasonable to identify problems in ChatGPT and similar technologies without feeling threatened. Simple example: someone who is retired and monetarily well off, whose way of living and sense of self worth are in no way affected by developments in AI, can still be critical and express valid concerns when these models tell you that it’s safe to boil a baby² or give other confident but absurdly wrong answers to important questions.

    ¹ I’m not saying that’s your intention, but consider that this type of rhetoric may be counterproductive if you’re trying to make others understand your point of view.

    ² I passed by that specific example on Mastodon but I’m not finding it now.

  • > ChatGPT and similar models are revolutionary

    For _what purpose_, tho? It's a good party trick, but its tendency to be confidently wrong makes using it for anything important a bit fraught.

    • If you're the type of person that struggles to ramp up production of a knowledge product, but has great success in improving a knowledge product through an iterative review process, then these generative pre-trained transformers are fantastic tools in your toolbox.

      That's about the only purpose I've found so far, but it seems a big one?

    • It seems to me that the tendency to be confidently wrong is entirely baked into intelligence of all kinds. In terms of actual philosophical rationality, human reasoning is also much closer to cargo cults than to cogito ergo sum, and I think we're better for it.

      I cannot but think that this approach of "Strong Opinions, Weakly Held" is a much stronger path forward towards AGI than what we had before.

    • If you work at a computer, it will increase your productivity. Revolutionary is not the word I'd use, but finding use cases isn't hard.

      8 replies →

  • So, to you, ChatGPT is approaching AGI?

    • I do believe that if we're going to get AGI without some random revolutionary breakthrough, i.e. achieve it iteratively, it's going to come through language models.

      Think about it.

      What's the most expressive medium we have which is also absolutely inundated with data?

      To broadly be able to predict human speech, you need to broadly be able to predict the human mind. To broadly predict a human mind requires that you build a model of it, and to have a model of a human mind? Welcome to general intelligence.

      We won't realize we've created an AGI until someone makes a text model, starts throwing random problems at it, and discovers that it's able to solve them.

      8 replies →

    • Perhaps a more interesting question is "how much better do we understand what characteristics AGI will have due to ChatGPT?"

      We don't really understand what intelligence means -- in humans or our creations -- but ChatGPT gives us a little more insight (just like ELIZA, and the psychological research behind it, did).

      At the very least, ChatGPT helps us build increasingly better Turing tests.

    • Yes. It is obviously already weak AGI (it would be obvious to anyone who saw it 20 years ago).

      It is also obvious that we are in the middle of a shift of some kind. It is very hard to see from within, but clearly we will look back at 2022 as the beginning of something.

  • The problem is that ChatGPT is about as useful as all the other dilettantes claiming to be polymaths. Shallow, unreliable knowledge on lots of things only gets you so far. Might be impressive at parties, but once there's real, hard work to do, these things fall apart.

    • Even if ChatGPT could only make us 10% better at solving the "easy" things but on a global scale, that is already a colossal benefit to society.

As much as I’m sick of AI products, I’m even more sick of the “ChatGPT is bullshit” argument.

  • It can be both bullshit and utterly astounding.

    In terms of closing the gap between AI hype and useful general purpose AI tools, no one can reasonably deny that it's an absolute quantum leap.

    It's just not a daily driver for technical experts yet.

  • The biggest thing I’ve learned from chatGPT is that real people struggle with the difference between intelligence, understanding, and consciousness / sentience.

    • Because they are all ill-defined in the manner they are used in common language. Hell, we have trouble describing what they are, especially in a scientific, fact-based setting.

      Before this point in history we accepted 'I am that I am' because there wasn't any challenger to the title. Now that we're calling this into question, we realize our definitions may not work well.

    • >The biggest thing I’ve learned from chatGPT is that real people struggle with the difference between intelligence, understanding, and consciousness / sentience.

      Well, I'm no fan of chatGPT. But it appears most people are worse than chatGPT, because they just regurgitate what they hear with no thought or contemplation. So you can't really blame average folks who struggle with the concepts of intelligence/understanding that you mention.

    • Which should be no surprise, as people have been grappling with these ideas for centuries, and we still don't have any definitive idea of what consciousness/sentience truly is. What I find interesting is that at one point the Turing test seemed to be the gold standard for intelligence, but chatGPT could pass that with flying colors. So how exactly will we know if/when true intelligence does emerge?

      1 reply →

    • The most annoying thing to me is people thinking AI wants things and gets happy and sad. It doesn't have a mammalian or reptilian brain. It just holds a mirror up to humanity, generally via matrix math and probability.
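
      A minimal sketch of the "matrix math and probability" in question (the vocabulary and logits below are made up for illustration; a real model computes its logits with learned weights over vocabularies of tens of thousands of tokens):

        // Toy next-token step: logits -> softmax -> sampled word.
        // Nothing in here wants anything or feels anything.
        function softmax(logits: number[]): number[] {
          const max = Math.max(...logits); // subtract max for numerical stability
          const exps = logits.map((x) => Math.exp(x - max));
          const sum = exps.reduce((a, b) => a + b, 0);
          return exps.map((e) => e / sum);
        }

        function sample(probs: number[]): number {
          let r = Math.random();
          for (let i = 0; i < probs.length; i++) {
            r -= probs[i];
            if (r <= 0) return i;
          }
          return probs.length - 1;
        }

        const vocab = ["happy", "sad", "table"]; // hypothetical tiny vocabulary
        const logits = [2.1, 1.3, -0.5];         // stand-in for a model's raw scores
        console.log(vocab[sample(softmax(logits))]); // most often prints "happy"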

      1 reply →

  • The only problem with the “ChatGPT is bullshit” argument is that it is only half true.

    ChatGPT, when provided with a synthetic prompt, is reliably a synthesizer, or to use the loaded term, a bullshitter.

    When provided with an analytic prompt, it is reliably a translator.

    Terms, etc: https://www.williamcotton.com/articles/chatgpt-and-the-analy...

    • > ChatGPT, when provided with a synthetic prompt, is reliably a synthesizer, or to use the loaded term, a bullshitter.

      sounds like most people tbf

      2 replies →

  • I like this take. It has many clear applications already, and LLMs are still only in their infancy. I both criticize and use ChatGPT at work. It has flaws and it has advantages. That it's bullshit or "ELIZA" is a short-sighted view that overvalues the importance of AGI and misses what we're already getting.

    But yes indeed, there are many, many AI products launched during this era of rapid progress. Even kind of shoddy products can be monetized if they provide value over what we had before. I think the crowded market and all the bullshit and all the awesome, all at once, is a sign of very rapid progress in this space. It will probably not always be like this and who knows what we are approaching.

I can say with a certain degree of confidence that you haven't actually used CoPilot daily.

  • I've worked with teams that used Copilot. They claim it's great ("Hey, now I don't have to actually spend any time writing all this boilerplate!"), while for me, the person who has to review their code before releasing stuff, easier ways of writing boilerplate are not a positive; they're a negative.

    If writing boilerplate becomes effortless, then you'll write more of it, instead of feeling the pain of writing it and being pushed to reduce it.

    And since Copilot was accepted as a way to help the developers on the teams, the increase in boilerplate has been massive.

    I'm borderline pissed, but mostly at our own development processes, not at Copilot per se. But damn if I don't wish it didn't exist somehow, although it was inevitable that it would at some point.

    • I feel ya. If your job is to kick back bad code, and now there is a tool that generates bad code, how does this not make your job more important?

      Why not get some of the freed-up, Copilot-augmented developer labor budget moved to testing, and do more there or build more tools to make your personal, boilerplate, repetitive tasks more efficient?

      If the coders are truly just dumping bad code your way, that's an externality and the cost should be called out.

    • I use GitHub Copilot on a daily basis and it shortens my time from thinking to code.

      Often I'm thinking about a specific piece of code that I need, I have it partially in my head, and GitHub Copilot "just completes" it. I press tab and that's it.
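
      A hypothetical illustration of the kind of completion I mean (the clamp function is invented for this example; the suggested body shows up inline as ghost text, and tab accepts it):

        /**
         * Clamp a value to the inclusive range [min, max].
         */
        function clamp(value: number, min: number, max: number): number {
          // After typing the JSDoc and the signature above, the line below
          // is the kind of one-shot suggestion Copilot tends to offer:
          return Math.min(Math.max(value, min), max);
        }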

      I'm not talking about writing entire functions where you have to mentally strain yourself to understand what it wrote.

      But I've never seen any autocompleter do it as well as GitHub Copilot. Even for documentation purposes like JSDoc and related commenting systems, it's amazing.

      It's a tool I pay for now since it's proven to be a tool that increases my productivity.

      Is it gonna replace us? I hope not, but it does look promising as one of those tools people will talk about in the future.

    • It would be helpful if people could include in their assessment roughly how much time they've personally spent using these tools.

      Helping write boilerplate is to Copilot what cropping is to Photoshop.

      Some of the ways I've found Copilot a powerful tool in my toolbox: Writing missing comments (especially unfamiliar code bases), "translating" parts of unfamiliar code to a more familiar language, suggesting ideas for how to implement a feature (!) in comments.

    • >the increase of boilerplate have been immersive

      Has it really? Or are you worried that this is something that will happen?

      Of course I don't know how other people use it but I find that it's very much like having a fairly skilled pair programmer on board. I still need to do a lot of work but I get genuine help. I don't find that I personally write more boilerplate code than before, every programming principle applies as it always has.

      1 reply →

  • I haven't. Now you know for a fact :)

    What I have seen of it ranges from things that could nearly as well be handled by your $EDITOR's snippet functionality to things where my argument kicks in: I have to verify this generated code does what I want, ergo I have to read and understand something not written by me. Paired with the at least somewhat legally and ethically questionable source of the training data, this is not for me.

    • So stop evangelizing about stuff you haven’t used. Understanding code is easier than writing it from scratch. That’s why code review doesn’t take as much time as writing code, and you still need to prove your code works even if you wrote it yourself.

      6 replies →

  • I've used it quite a lot and I agree with the original post. It seemed really useful at first, but then it started introducing several bugs in large blocks of code. In the end I stopped using it, since the small one-line snippets are trivial enough to write myself (with just vim proficiency) and the larger blocks, on the order of a whole function, are too bug-prone (and fixing them burns too much of my willpower budget).

  • Yep. I’m personally skeptical of so many other use cases for LLMs but CoPilot is fantastic and basically just autocomplete on rocket fuel. If you can use autocomplete, you can use CoPilot super effectively.

    • I almost always turn autocomplete off except in circumstances where the API has bad documentation. I also found that Copilot was more of an aggravation than a help after using it for a couple of weeks.

      2 replies →

> The "I" in AI is just complete bullshit and I can't understand why so many people are in a awe

I agree.

And the worst thing is that the bullshit hype comes round every decade or so, and people run around like headless chickens insisting that "this time it's different", and "this time it's the REAL THING".

As you say, first(ish) there was ELIZA. Then this, that, and everything else. Then Autonomy and all that dot-com era jazz. Now, with compute becoming more powerful and more compact, any man and his dog can stuff some AI bullshit where it doesn't belong.

I have seen comments below on this thread where people talk about "well, it's closing the gap". The thing you have to understand is that the gap will always exist. Ultimately you will always be asking a computer to do something. And computers are dumb. They are and will always be beholden to the humans that program them and the information that you feed them. The human will always have the upper hand at any task that requires actual intelligence (e.g. thoughtful reasoning, adapting to rapidly changing events, etc.).

  • > And the worst thing is that the bullshit hype comes round every decade or so, and people run around like headless chickens insisting that "this time its different", and "this time its the REAL THING".

    This. To answer the OP's question, this is what I'm fatigued about.

    I'm glad we're making progress. It's a hell of a parlor trick. But the hype around it is astounding considering how often its answers are completely wrong. People think computers are magic boxes, and so we must be just a few lever pulls away from making it correct all the time.

    Or maybe my problem is that I've overestimated the average human's intelligence. If you can't tell ChatGPT apart from a good con man, can we consider the Turing test passed? It's likely time for a redefinition of the Turing test.

    Instead of AI making machines smarter, it seems that computers are making humans dumber. Perhaps the AI revolution is about dropping the level of average human intelligence to match the level of a computer. A mental race to the bottom?

    I'm reminded of the old Rod Serling quote: We're developing a new citizenry. One that will be very selective about cereals and automobiles, but won't be able to think.

    • I'm having a really hard time following your argument. But absolutely agree we need to redefine the Turing test. Only problem is that I can no longer come up with any reasonable time-limited cognitive task that next year's AI would fail at, but a "typical human" would pass.

      1 reply →

    • >>Instead of AI making machines smarter, it seems that computers are making humans dumber. Perhaps the AI revolution is about dropping the level of average human intelligence to match the level of a computer. A mental race to the bottom?

      I came here to make this comment. Thank you for doing it for me.

      I remember feeling shocked when this article appeared in the Atlantic in 2008, "Is Google Making Us Stupid?": https://www.theatlantic.com/magazine/archive/2008/07/is-goog...

      The existence of the article broke Betteridge's law for me. The fact that this phenomenon is not more widely discussed illustrates the limits of human intelligence. Which brings me back around to the other side... perhaps we were never as intelligent as we suspected?

      2 replies →

  • Man, if this were 1800 you'd be stating that man would never fly and the horse would never be supplanted by the engine. I honestly don't believe you have any scientific or rational basis for the point you're attempting to make in your post, because the only way to sustain it would be to claim that animal intelligence is magical.

    • > Man, if this were 1800 you'd be stating that man would never fly and the horse would never be supplanted by the engine.

      I'm sorry, what sort of bullshit argument is that?

      Flight and engines are both natural evolution using natural physics and mechanics.

      Artificial Intelligence is nothing but a square-peg-round-hole, "when you have a sledgehammer everything looks like a nut" scenario.

      2 replies →

“AI” isn’t bullshit, it’s correctly labeled. It’s intelligence which is artificial: i.e. fake, ersatz, specious, not genuine… It’s our fault for not just reading the label. (I absolutely agree with your post and your viewpoint, just to be clear!)

  • Artificial means "not human" in this context for me, but I understand "Intelligence" as the ability to actually reason about something based on things you learned and/or experienced, and these "AI" tools don't do this at all.

    But defining "intelligence" is a philosophical question that doesn't necessarily have one answer for everything and everyone.

    • Personally, I try to take a more inductive approach. We don’t know what intelligence is, but we assume it’s something we exhibit. We also clearly recognize other animals as possessing the same trait to varying degrees. Since we don’t know what it is, and since (I would argue) we can only convincingly claim that it exists in other biological organisms without meeting a high burden of proof, to claim that it exists in an inorganic substrate requires a VERY large burden of proof to be met, similar to what would be needed if you were claiming that magic existed. In my view, calling computers “intelligent” is in the same league as claiming that crystals are magic. Of course, this depends on my own philosophical interpretation of what intelligence is, as you say.

      3 replies →

  • The intention of the "artificial" in "AI" is not that particular meaning of "artificial", but the one for "constructed, man-made"—see meaning #1 in the Wiktionary definition[0]; the one you are using is #2.

    It is often frustrating that English has words with such different (but clearly related) definitions, as it can make it far too easy to end up talking past each other.

    [0] https://en.wiktionary.org/wiki/artificial

  • "Artificial" is not synonymous with "fake". "Fake" implies a level of deception.

    • Not necessarily true. People talk about “fake meat” all the time but it’s clear there’s no level of fraudulence implied by this usage. It’s meant in the sense of “artificial meat”. There are multiple ways the word “fake” is used, and one is as a synonym for “artificial”.

      However, in this case, it does seem that there is a level of fraudulence and deception. Given that “fake” often is used exactly the way you say, maybe “fake intelligence” would indeed be a more appropriate term.

      9 replies →

I agree with you completely. I work in the field and I think your sentiment is way more common amongst people who know about the technology than amongst the fair-weather fans who have all jumped on the hype bandwagon recently. I actually posted the same thing (that it's no different than ELIZA) a month or so ago, and got at least one hilarious dismissal, like the "I bet you make widgets" person that replied to you.

  • If you believe that ChatGPT is similar to ELIZA, then I can guarantee that you have no rigorous, no-wriggle-room definition of what intelligence is. Maybe you think you understand it, or have defined it, but I'm 100% certain any such definition is not 100% reductive and instead relies on other ill-defined words like "reasoning", etc.

“It’s just statistics” is an evergreen way to dismiss AI. The problem is you’re also just statistics.

  • Source for consciousness/intelligence being "statistics"?

    I don't think there is one, because there is no functional model for what organic intelligence is or how it operates. There is a plethora of fascinating attempts/models, but only a subset imply that it is solely "statistical". And even if it were statistical, the implementation of the wet system is absolutely not like a gigantic list of vectorized (stripped of their essence) tokens.

    • That's like saying that airplanes aren't flying since they're not flapping their wings. Intelligence is a capability - not a specific mechanism.

      Consciousness is a subjective experience (regardless of what you believe/understand to be responsible for that experience), so discussing "consciousness/intelligence" is rather like discussing "cabbages/automobiles".

    • Sources for intelligence being magic? I mean, we know it's complicated, but intelligence spans from the smallest creatures on the planet to humans. This points at intelligence being a reducible problem that is layered. On top of that, it's unlikely we need to model nerve behavior to get intelligence-like output.

    • There’s a man who claims to have solved consciousness as a multilayered Bayesian prediction system.

      See Scott Alexander for attempts to explain what are apparently impenetrable papers on it.

> Github CoPilot? Great, now I have to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...) while writing new code. I'm beyond thrilled ...

I think there's an argument to be made that AI is being used here to help you tackle the more trivial tasks so you have more time to focus on the more important and challenging ones. Though I recognise GitHub Copilot is legally questionable.

But yes, I agree with your overall point that AI has still not been able to 'think' like a human but rather can only still pretend to think like a human, and history has shown that users are often fooled by this.

  • I think the parent’s comment is probably referring to the fact that if you use Copilot to write code, then you have to go through and try to understand what it wrote and possibly debug it. And you don’t have the opportunity to ask it why it wrote it the way it did when reviewing its code.

    • I think you’re right, but that just means parent doesn’t understand copilot and is off tilting at windmills.

      Copilot is amazing for reducing the tedium of typing obvious but lengthy code (and strings!). And it’s inline and passive; it’s not like you go edit -> insert -> copilot function and it dumps in 100 lines of code you have to debug. Which is what it sounds like parent is mistaking it for.

      I’m reminded of 1995, when an elderly relative told me everything wrong with the internet based on TV news and not having ever actually seen the internet.

      4 replies →

    • But it comes in small chunks at a time, unless you are just smashing tab repeatedly and don't look at what it did until the very end. You can also reject what it offers and just continue writing the code yourself. If a dev submits a bunch of Copilot code they don't understand and can't answer questions about, you reject the PR outright, and they eventually realize it didn't save them any time or effort. Copilot isn't the employee.

As soon as I open a fresh IDE these days I immediately miss CoPilot and it's the first thing I install.

Hype or not, it's incredibly useful and has increased my productivity by at least 20%. Worth every penny.

I agree. I didn't understand the big deal that it passed a Google interview either. IMO, that said more about the uselessness of the interview than about the 'AI'.

Copilot has been semi-useful. It's faster than searching SO, but like you said, I still have to review all the code, and it's often wrong in subtle ways.

    This is the meat of the issue - ChatGPT is exposing that certain things are susceptible to bullshit attacks; humans have just been relatively bad at mounting those.

    It will turn out to be a useful tool for those who know what they’re asking about so they can check the answer quickly; but it will be USED by tons of people who don’t have a way of verifying the answers given.

ChatGPT is of actual help for me in various daily tasks, which was never the case with ELIZA or earlier chatbots which were only good as a curiosity or to have some fun.

Lack of actual human understanding? Of course, by definition a machine will always lack human understanding. Why does that matter so much if it's a helpful tool?

For what it's worth, I do agree that there is a lot of hype. But contrary to blockchain, NFTs, web3, etc., this is actually useful for many people in many everyday use cases.

I see it as more similar to the dot com hype - buying a domain and creating a silly generic website didn't really multiply the value of your company as some people thought in that era, but that doesn't mean that websites weren't a useful technology with staying power, as time has shown.

I'm sorry, but I don't want it to get much smarter.

If you ask it to go through and comment code, it does a pretty good job of that.

Some things it does better than others (it's not that great at CSS).

Need a basic definition of something? Got it.

Tell it to write a function? It's not bad.

As a BA, just tell it what you're trying to do and what questions it should ask users. It will get some good ideas for you.

Want it to be a PM? Have it create a loop asking every 10 minutes if you're done yet.

Is it a senior engineer? No. Can it pass a senior engineering interview? Quite possibly.

Debugging code? Hit or miss.

I think the big thing is that it's not that great at front-end code. It can't see, so that probably makes sense. A fine-tuned version of CLIP that interacted with a browser would probably be pretty scary.

I wonder if we will look back at this comment (and others like it) as similar to the infamous “takedown” of Dropbox when it was first posted on HN.

Time will tell, I certainly can’t predict.

The "I" in AI is just complete bullshit

We're about six minutes away from "AI bros" becoming a thing.

The same kind of grifters who always latch onto the latest thing and hype it up in order to make a quick buck are already knocking on AI's door.

See also: Cryptocurrency, and Beanie Babies.