Ask HN: Is anyone else getting AI fatigue?
AI is great. ChatGPT is incredible. But I feel tired when I see so many new products being built that incorporate AI in some way, like "AI for this..." "AI for that..." I think it misapplies AI. But more than that, it's just too much. Right? Right? Anyone else feel like this? Everything is about ChatGPT, AI, prompts or startups we can build with that. It's like the crypto craze all over again, and I'm a little in dread of the shysters again, the waste, the opportunity cost of folks pursuing this like a mad crowd rather than being a little more thoughtful about where to go next. Not a great look for the "scene" methinks. Am I alone in this view?
Engineers are always building things that are incredible, then turning their back on it as ordinary once the problem is solved, “oh that’s so normal, it was just a little math, a little tweak, no big deal”.
AI has gone through a lot of stages of “only a human can do X” -> “X is done by AI” -> “oh, that’s just some engineering, that’s not really human”, or “X is no longer in the category of mystical things we can’t explain that a human can do”.
LLMs are just the latest iteration of “wow, it can do this amazing human-only thing X (write a paper indistinguishable from a human’s)” -> “d’oh, it’s just some engineering (it’s just a fancy autocomplete)”.
Just because AI is a bunch of linear algebra and statistics does not mean the brain isn’t doing something similar. You don’t like the terminology, but how is reinforcement “learning” not exactly the same as reading books to a toddler, pointing at a picture, and having them repeat what it is?
Start digging into the human with the same engineering view, and suddenly it also just becomes a bunch of parts. Where is the human in the human once all the human parts are explained the way an engineer would? What would be left? The human is computation also, unless you believe in souls or otherworldly mysticism. So why not think that AI, as computation, can eventually equal a human?
That GitHub Copilot can write bad code isn't a knock on AI; it's realistic: a lot of humans write bad code.
The problem with LLMs, in my view, is that they're capped at what already exists.
The trouble with using them for "creative" things is that they can only parrot things back in the statistically average way, or maybe attempt to echo an existing style.
Copilot cannot use something because it prefers it, or because it thinks it's better than what's common. It can only repeat what is currently popular (and that will likely become self-reinforcing over time).
When you write prose or code you develop preferences and opinions. "Everyone does it this way, but I think X is important."
You can take your learning and create a new language or framework based on your experiences and opinions working in another.
You develop your own writing style.
LLMs cut out this chance to develop.
---
Images, prose, (maybe) code are not the result of computation.
When two different people compute the same thing, they get the same answer. When I ask different people to write the same thing, I get wildly different answers.
Sure, ChatGPT may give different answers, but they will always be in the ChatGPT style (or parrot the style of an existing someone).
"ChatGPT will get started and I'll edit my voice into what it generated" is not how writing works.
It's difficult for me to see how a world where people communicate back and forth in the most statistically likely manner is a good one.
All artists of every stripe have studied other art, have practiced what came before, and have influences. What do you think they do in art school? They copy what came before. The old masters had understudies who learned a style. Isn't it an old saying in art that ‘there is nothing original’? Everything was based on something.
Humans are also regurgitating what they ‘inputted’ to their brains. For programming, isn’t it an old joke that everyone just copy/pastes from Stack Overflow?
Why, if an AI does it (copy/paste), is it somehow a lesser accomplishment than when a human does it?
The style can be influenced, however. It isn't unreasonable to imagine an AI that fine-tunes the style of the LLM's output to meet whatever metric you're after.
As far as creativity goes, human creativity is also a product of life experiences. Artistic styles are always influenced by others, etc.
I generally agree that we quickly adjust to new tech and forget how impactful it is.
But I can’t fully get on board with this:
> but how is reinforcement “learning” not exactly the same as reading books to a toddler, pointing at a picture, and having them repeat what it is? Start digging into the human with the same engineering view, and suddenly it also just becomes a bunch of parts. Where is the human in the human once all the human parts are explained the way an engineer would?
The parent teaching a toddler bears some vague resemblance to machine learning, but the underlying results of that learning (and the process of learning itself) could not be any more different.
More problematic than this, while you may be correct that we will eventually be able to explain human biology with the precision of an engineer, these recent AI advances have not made meaningful progress towards that goal, and such an achievement is arguably many decades away.
It seems you are concluding that because we might eventually explain human biology, we can draw conclusions now about AI as if such an explanation had already happened.
This seems deeply problematic.
AI is “real” in the sense that we are making good progress on advancing the capabilities of AI software. This does not imply we’ve meaningfully closed the gap with human intelligence.
I think the point is that we have been “meaningfully closing” the gap rapidly, and at this point it is only a matter of time; the end can be seen, even if it is not yet completely written out in equations.
It does seem like on HN the audience is heavily weighted towards software developers who are not biologists, and they often cannot see the forest for the trees. They know enough about AI programming to dismiss the hype, but not enough about biology, and so they miss that this is pretty amazing.
Our understanding of the human ‘parts’ is being chipped away at just as quickly as we have had breakthroughs in AI. These fields are starting to converge and inform each other. I’m saying this is happening fast enough that the end game is in sight: humans are just made of parts, an engineering problem that will be solved.
Free will and consciousness are overrated; we think of ourselves as having some mystically exceptional consciousness, which clouds the credit we give advancements in AI. ‘AI will never be able to equal a human’, when humans just want lunch and our ‘free will’ is based on how much sleep we got. DNA is a program; it builds the brain, which is just responding to inputs. Read some Robert Sapolsky: human reactions are just hormones, chemicals, responding to inputs. We will eventually have an AI that mimics a human, because humans aren’t that special. Even before the function of every single molecule in the body, or every equation in AI, is fully mapped out, enough has been to stop claiming 'specialness'.
> the underlying results of that learning (and the process of learning itself) could not be any more different
To drill down a bit, I think the difference is that the child is trying to build a model, their own model, of the world and how symbols describe or relate to it. Eventually they start to plan their own way through life using that model. Even though we use the term "model", that's not at all what a neural-net/LLM type "AI" is doing. It's just adjusting weights to maximize correlation between outputs and scores. Any internal model is vague at best, and planning (the also-incomplete core of "classical" AI before the winter) is totally absent. That's a huge difference.
ChatGPT is really not much more than ELIZA (1966) on fancy hardware, and it's worth noting that ELIZA was specifically written to illustrate the superficiality of (some) conversation. Its best-known DOCTOR script was intentionally a parody of Rogerian therapy. Plus ça change, plus c'est la même chose (the more things change, the more they stay the same).
First off, I'm not sure why this is the most upvoted comment. The OP explicitly praises AI; he just smells the same grifters gathering around like they did with crypto, and he's absolutely right, it is the exact same folks. He isn't claiming the mind is metaphysical or whatever.
As for your claim that the mind is either metaphysical OR a NN: you have to understand that this false dichotomy is quite the stretch itself, as if there were no other possibilities, as if it couldn't be a range or something else entirely. One of the critiques of NNs from the "old guard" is the lack of symbolic intelligence. Claiming you don't need it, and that fitting alone is enough, is suspect, because even with OpenAI-tier training, only the grammar is there; some of the semantic understanding is lacking. Appealing to the god of the gaps is a fallacy for a reason, although it may in fact turn out to be true that just more training is all that is needed. EDIT: Anyway, the point is that assuming symbolic reasoning is a part of intelligence (hell, it's how we discuss things) doesn't require mysticism; it just is an aspect that NNs currently don't have, or, very charitably, do not appear to have quite yet.
Regardless, there isn't really evidence that "what brains do is what NNs do", or vice versa. The argument, as many times as it has been pushed, has been driven primarily by analogy. But just because a painting looks like an apple doesn't mean you can eat the canvas. Similarities might betray some underlying relationship (the artist who made the painting took reference from an actual apple you can eat), but assuming an equivalence without evidence is just strange behavior, and I'm not sure for what purpose.
The main post was about burnout and hype. And I was just trying to point out that things really are advancing fast and we are producing amazing things, despite the hype.
Like maybe the hype is not misplaced. There are grifters, and there are companies with products that are basically an "IF" statement, and the hype is pretty nuts.
On the other hand, some of this stuff is amazing. Don't let the hype and smarmy salespeople take away from the amazing advancements that are happening. Just a few years ago some of this would have been considered impossible, the exclusive province of the 'mystery of the human mind'. And yet here we are, and what it is to be human is being chipped away at more every month, and yeah, a lot of people want to profit.
Or, more to my main point, a lot of heads-down engineers who are cranking out solutions do lose sight of how fast they are moving. So don't get discouraged by the hype; marketing is in every industry, so why not stay in this cool one that is doing all the amazing things?
> The human is computation also, unless you believe in souls or otherworldly mysticism.
I think it is incredibly sad that a person can be reduced to believing humans don't have souls. Do something different with yourself so you can discover the miracle of life. If you don't believe there is anything more to people and to the world than mechanical processes, I would challenge you to do a powerful spiritual activity.
By spiritual practice, do you mean something like studying the Skandhas, or Five Aggregates? Or do you mean opening myself to the love of our lord and savior? It makes a difference in how you approach the world whether your spiritual practice encourages insight, or whether you are blinded by faith in a spiritual entity that is directing you.
What is a powerful spiritual activity you’d recommend?
What's sad about it?
I'm working on a project that uses GPT-3 and similar stuff, even before the hype. I think the overhype is really tiring.
Just like with most of these hype cycles there is an actual useful interesting technology, but the hype beasts take it way overboard and present it as if it's the holy grail or whatever. It's not.
That's tiring, and really annoying.
It's incredibly cool technology, it is great at certain use cases, but those use cases are somewhat limited. In case of GPT-3 it's good at generative writing, summarization, information search and extraction, and similar things.
It also has plenty of issues and limitations. Let's just be realistic about it, apply it where it works, and let everything else be. Now it's becoming a joke.
Also, a lot of products I've seen in the space are really really bad and I'm kinda worried AI will get a scam/shitty product connotation.
Finally, a take on chatgpt and similar LLMs I agree with!
I've criticized it whenever it gets brought up as an alternative for academic research, coding, math, other more sophisticated knowledge based stuff. In my experience at least, it falls apart at reliably dealing with these and I haven't gone back.
But man, is it ever revolutionary at actually dealing with language and text.
As an example, I have a bunch of boring drama going on right now with my family, endless fucking emails while I'm trying to work.
I just paste them into chat gpt and get it to summarize them, and then I get it to write a response. The onerous safeguards make it so I don't have to worry about it being a dick.
Family has texted me about how kind and diplomatic I'm being and I honestly don't even really know what they're squabbling about, it's so nice!
Haha that's amazing. This is exactly what I mean, for the right use cases it's absolutely amazing.
Good luck with the drama! Make sure to read a summary for the next family meeting haha.
> I'm kinda worried AI will get a scam/shitty product connotation.
Which has happened before. The original symbolic/heuristic AI, most notably expert systems, over-promised and ultimately under-delivered. This led directly to the so-called "AI winter", which lasted more than two decades and didn't end until quite recently. It's a very real concern, especially among people who want to push the technology forward and not just profit from it.
> I'm kinda worried AI will get a scam/shitty product connotation
I think we're already there. A legion of AI-based startups seems to be coming out daily (https://www.futuretools.io/), offering little more than gimmicks.
You are probably right, kinda sad.
My last resort is to remove all AI references from my marketing and just deliver the product.
> Just like with most of these hype cycles there is an actual useful interesting technology, but the hype beasts take it way overboard and present it as if it's the holy grail or whatever. It's not.
See also: Gartner hype cycle
Thanks! Very interesting. I guess our challenge will be surviving the inevitable dip.
> Also, a lot of products I've seen in the space are really really bad and I'm kinda worried AI will get a scam/shitty product connotation.
I agree with this, I feel like I've seen a lot of really cool technology get swept up in a hype storm and get carried away into oblivion.
I wonder what ways there are for the people who put out these innovations to shield themselves and their products from it?
Luckily I have a lot of faith in the OpenAI people. I hope they're shielding themselves from the technological form of audience capture.
I think the hope is that, unlike with crypto, as others have said, AI clearly will have actual good applications. So the hope is that the hype beasts, after they've burned through all the grifting they can, will go off back to selling penis pills or whatever they were into last year (or maybe pre-2020).
> GPT-3 is good at generative writing
made up bullshit
> summarization
except you can't possibly know the output has any relation whatsoever to the text being summarized
> information search and extraction
except you can't possibly know the output has any relation whatsoever to the information being extracted
people still fall for this crap?
Agreed. I've been testing out its responses when parsing complex genomics papers (say, a methodology section describing the parameters of some algorithm), and it's mostly rephrasing rather than digesting and responding with useful information / interpretation. And it will use so many words and add so little to the conversation, yet appear like it's helping because ... words.
I think this is sort of the other side of the hype, totally dismissing it is also incorrect imo.
Yes, it's overhyped, but it's not useless, it actually does work quite well if you apply it to the right use cases in a correct way.
In terms of accuracy, in ChatGPT the hallucination issue is quite bad; for GPT-3 it's a lot less, and you can reduce it even further with good prompt writing, fine-tuning, and settings.
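For what it's worth, here's a minimal sketch of the "settings" part, assuming the GPT-3-era OpenAI completions API (the model name, key, and prompt are placeholders, not a definitive recipe): pin the temperature to zero and instruct the model to answer only from supplied context.

    # Hedged sketch: conservative settings to curb hallucination.
    # Assumes the GPT-3-era openai Python client; names may have changed.
    import openai

    openai.api_key = "YOUR_KEY"  # placeholder
    context = "..."              # source text the answer must be grounded in

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=("Answer using ONLY the context below. If the answer is not "
                "in the context, reply 'I don't know.'\n\n"
                "Context:\n" + context + "\n\nQuestion: ..."),
        temperature=0.0,  # deterministic output, less inclined to invent
        max_tokens=200,
    )
    print(response["choices"][0]["text"].strip())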
Can we just recognize it for what it is?
I've had that since I was doing my master's in data science (5 years ago?). I love the models, the statistics, and just the cleverness of everything, but I just can't stand the "scene" anymore and moved almost entirely away from it. It's not as exciting as it was anymore.
When I started with the topic I watched a documentary featuring Joseph Weizenbaum ([1]) and felt weirded out that someone would step away from such an interesting and future-shaping topic. But the older I get, the more I feel that technology is not the solution to everything, and AI might actually create more problems than it solves. I still think Bostrom's paperclip maximizer ([2]) lacks a fundamental understanding of the status quo and just generated unnecessary commotion.
[1] http://www.plugandpray-film.de/en/ [2] https://www.lesswrong.com/tag/paperclip-maximizer
Yes, PoW crypto is now a much more concrete example of the potential damage from poorly aligned utility functions, as well as the challenges in containing a system once it is released.
I'm finding the current hype cycle very frustrating from both sides. On one side there is frequent overplaying of current capabilities, with cherry-picked examples given as if they're representative. On the other side there is an oversimplistic "AI is evil" reaction. It's hard to deny that progress in the past few years greatly exceeds expectations and could significantly improve individual creativity and learning, as well as how we cooperate, but so much of the discussion is fear-based.
Same here, didn’t do a masters, but worked as a data scientist for a good while.
> I love the models, the statistics, and just the cleverness of everything, but I just can't stand the "scene" anymore
This really sums up my feelings too.
You mentioned you moved almost entirely away from the AI / data science "scene." Where did you move to?
Being a CTO (doing manager stuff), regular coding. By moving away I also meant I don't follow along anymore and don't contribute to the projects I did so in the past. I just lost interest.
It's the next hype train after the blockchain/crypto/NFT hype train. The crypto train has arrived at its overheated, decentralized set of train stations, all littered with the remnants of fraudsters: high-end GPU boxes scattered about, torn-up flyers promising untold riches, people scampering about in the shadows muttering "defi" to themselves, people getting carted off in handcuffs.
Where will the AI hype train go? The internet as we know it already has so much SEO engineered content and content producers chasing that sweet, sweet advertising money that they could all be replaced by mediocre, half-true, outdated content created by bots. So do we have to wait until our refrigerators are "AI powered, predicts your groceries for you!" in order to see the usefulness?
>It's the next hype train after the blockchain/crypto/NFT hype train.
It really isn't. The business use cases even with current tech are pretty obvious. The problem with crypto/blockchain stuff was that it was useless. An emperor with no clothes.
Is there a more legitimate argument for why they're similar other than "hype" or am I missing something?
> Is there a more legitimate argument for why they're similar other than "hype" or am I missing something?
The tech industry runs on hype, so much so that analysts are told to evaluate them separately. Growth now, profit later, here's $2bn from Softbank, yada yada yada.
Companies like Theranos specifically positioned themselves as 'tech' so as to escape press scrutiny, particularly in sensitive industries like healthcare.
Emperors with no clothes can get very far; see Brian Armstrong and SBF (pre-collapse, but still not in jail). Can you imagine how far a well-funded AI hustler could get?
> The business cases are pretty obvious
??? What are they?
- Bad code, with non-obvious bugs? I would prefer the original Slashdot/GitHub/blog post. Google used to do that.
- chat bots? The customer service will still be shit. Your problem will still not be solved. But I guess some call center staff can be fired. Customers will be very happy to never be able to speak to a human.
- Writing mediocre, overlong content for Google to place ads in? Just what the internet needs. It’s already daytime TV.
Any more?
It's the hype cycle. Other hype cycles were things like the dot-com boom (and bust). I don't mean it as a comparison of technology merits, just that we sometimes have to live through a hype cycle to get to a real understanding of where the technology might actually be useful. My snarky comment implies that we will have to wait until someone is advertising their AI-powered refrigerator (i.e., get out of the hype cycle) to understand what real use cases are out there.
It is not about whether or not there's viable use cases or not. It is the hype added on top. Hype cycles are as old as IT. XML, Semantic Web, SOAP, Service Oriented Architecture, Enterprise Service Buses, Big Data, Serverless .. they all got their hype phase where you are bombarded with them to death, and then finally when that dies down some good applications remain.
A symptom of our bubble-powered economy in general.
There is a lot of negativity towards the idea of AI in this thread, and I feel like someone has to say it: it is quite likely that in the near future computers will be better than almost all humans at almost all cognitive tasks.
If you have a task or are trying to accomplish something, and the way you do it is by moving a mouse around or typing on a keyboard, then it is very likely that an AI will be able to do that task. Doing so is a more or less straightforward extension of existing techniques in AI. All that is necessary is to record you performing the task, and then an AI will be able to imitate your behavior. GPT-3 can already do this for text, and doing it instead with trajectories of screen, mouse, and keyboard is not fundamentally different.
So yes, it is true that there is a lot of hype right now, but I suspect it is a small fraction of what we will see in the near future. I also expect there will be an enormous backlash at some point.
I think this sentiment that this will happen in the "near future" is the cause for exactly the sort of fatigue the author is talking about.
If you mean in the next year or two, I hate to disappoint you, but barring some massive leap forward, you are going to be wrong.
If you mean in the next hundred years, or maybe sometime in our lifetimes, sure. The chances it looks anything like chatGPT or GPT3 now though is laughable.
This isn't the future. This is a small glimpse into a potential future down the line, but everyone is talking like developers/designers/creatives/humans are already obsolete.
> If you have a task or are trying to accomplish something, and the way you do it is by moving a mouse around or typing on a keyboard then it is very likely that an AI will be able to do that task.
You don't need AI to move a mouse around or type on a keyboard. A simple automation is enough.
The value is not in moving a mouse or typing on a keyboard. The value is in knowing when and where to move the mouse and when and what to write on the keyboard.
> GPT-3 can already do this for text
Kind of; it isn't foolproof. I use GPT-3 and ChatGPT (not the same thing) almost daily, and there is quite a bit of error correction that I am doing. Still, it is really helpful.
AI leaves a bad taste in my mouth but I think it is because we have moved away from ML/Vision problems with a strong background in academic research, and high impact and purposeful development of these into products.
We are now exposed to companies hyping huge general purpose models with whatever tech is the latest fad, which resonates with the average person who wants to generate memes, etc.
This is impressive only at the surface level. Take a specific application: prompting it to write you an algorithm, outside of any copying-and-pasting from a textbook these models will generate bad/incorrect code and then explain why it works.
It's like having an incompetent junior on your team who has the bravado of a senior 10x-er.
That's not to say "AI" doesn't have a purpose, but currently it seems hyped up mostly by salespeople looking for Series A funding or an IPO cash-out. I want to see models developed for specific tasks that will have a big impact, rather than the sleight-of-hand or circus tricks we currently get.
Maybe that time has passed, and general models are the future, and we will just have to wait until they're as good as any specific model built for any task you can ask of them.
It will be interesting what happens when these "general" models are used without much thought and their unchecked results lead to harm. Will we still find companies culpable?
I think you hit on some good points. It seems like in common language, AI has taken the meaning “general purpose”, rather than satisfying some criterion of the futurist definition.
Personally, I care very little about whether the machine is intelligent or not. If it actually happens in my lifetime, I believe it will be unmistakable.
I am interested in how people solve problems. If you built and trained a model that solves a challenging task, THAT is something I find noteworthy and what I want to read about.
Apparently utility is boring, and “just ML” now. There’s tons of academic papers I see fly under the radar probably because they solve specific problems that the average person doesn’t know exists. Much of ML doesn’t foray into “popular science” enough to hold general public interest.
I dread the coming "age of bugginess", when imprecise LLMs pervade UIs and make everything always a little broken.
I don't deny that LLMs represent a coming revolution in computer interaction. But as someone who's already mastered the command line, programming, etc., I already know how to use computers. LLMs will actually be slower for me for a huge variety of tasks, like finding information. English is so clumsy compared to programming languages.
I feel like for nerds like me, "user-friendliness" is often just a hindrance. For me this has been the case with GUIs in general, touch GUIs especially, and it probably will be for most LLM applications that don't fundamentally do something I cannot (like Stable Diffusion).
> AI is great. ChatGPT is incredible.
Imagine how the HN users who disagree with that feel. It is beyond fatiguing. I’m frequently reminded of the companies who added “blockchain” to their name and saw massive jumps in their stock price, despite having nothing to do with blockchains¹.
¹ https://www.theverge.com/2017/12/21/16805598/companies-block...
I don't mind the companies that got a temporary bump from adding blockchain to the name.
I'm more concerned about the Twitter hype-men and women adding '.eth' to their name and singing DeFI praises all day long....and then quietly removing it without so much as a word, once the hype is dead and keeping the '.eth' makes you look like a sucker.
BTW, a lot of influential people were on that train, the current CEO of YC being one of them.
I think this time is the good one. ChatGPT has reached a level where we can finally think about building actually useful products on top of "AI".
Note that nobody is pretending that ChatGPT is "true" intelligence (whatever that means), but I believe the excitement comes from seeing something that could have real applications (and so, yes, everybody is going to pretend to have incorporated "AI" into their product for the next two years, probably). After 50 years of unfulfilled hopes from the AI field, I don't think it's totally unfair to see a bit of (over)hype.
I really don't understand how engineers are having good experiences with it; a lot of the stuff I've seen it output w.r.t. swe is only correct if you're very generous with your interpretation of it (re: dangerous if you use it as anything more than a casual glance at the tech). W.r.t. anything else it outputs, it's either so generic that I could do it better, outright wrong (e.g. cannot handle something as simple as tic tac toe), or functions as an unreliable source (in cases where I simply don't have the background).
I wish I could derive as much utility as everyone else that's praising it. I mean, it's great fun but it doesn't wow me in the slightest when it comes to augmenting anything beyond my pleasure.
I'm a Civil Engineer with a modest background including some work in AI. I'm pretty impressed with it. It's about as good or better than an average new intern and it's nearly instant.
I think a big part of my success with it is that I'm used to providing good specifications for tasks. This is, apparently, non-trivial for people to the point where it drives the existence of many middle-management or high-level engineering roles whose primary job is translating between business people / clients / and the technical staff.
I thought of a basic chess position with a mate in 1 and described it to chatGPT, and it correctly found the mate. I don't expect much in chess skill from it, but by god it has learned a LOT about chess for an AI that was never explicitly trained in chess itself with positions as input and moves as output.
I asked it to write a brief summary of the area, climate, geology, and geography of a location I'm doing a project in for an engineering report. These are trivial, but fairly tedious to write, and new interns are very marginal at this task without a template to go off of. I have to look up at least 2 or 3 different maps, annual rainfall averages over the last 30 years, general effects of the geography on the climate, average & range of elevations, names of all the jurisdictions & other things, population estimates, zoning and land-use stats, etc, etc. And it instantly produced 3 or 4 paragraphs with well-worded and correct descriptions. I had already done this task and it was eerily similar to what I'd already written a few months earlier. The downside is, it can't (or rather won't) give me a confidence value for each figure or phrase it produces. ...So given it's prone to hallucinations, I'd presumably still have to go pull all the same information anyway to double check. But nevertheless, I was pretty impressed. It's also frankly probably better than I am at bringing in all that information and figuring out how to phrase it all. (And certainly MUCH more time efficient)
I think it's evident that the intelligence of these systems is evolving very rapidly. The difference between GPT-2 and GPT-3 is substantial. With the current level of interest and investment, I think we're going to see continued rapid development here for at least the near future.
The fact that I can use this tool as a source of inspiration, or for a first opinion on any kind of problem on earth, is totally incredible. Now whenever I'm stuck on a problem, ChatGPT is an option.
And this is happening in the artistic world as well with the other branch of NNs: "mood boards" can now be generated from prompts, infinitely.
I don't understand how some engineers still fail to see that a threshold was passed.
I agree. Even understanding its limitations as essentially a really good bullshit generator, I have yet to find a good use for it in my life. I've tried using it for brainstorming on creative activities and it consistently disappoints, it frequently spouts utter nonsense if asked to explain something, code it produces is questionable at best, and it is even a very boring conversation partner.
I think it’s less like the crypto craze than the PC, web, or smartphone “crazes”, where businesses started incorporating each of the above into everything.
In other words, if you’re fatigued already, I have some bad news regarding the rest of your life.
If you're tired of AI now you're gonna hate where we are going. Strap in!
(…or take a good step back from the news cycle, check in once or twice a week instead of several times daily. News consumption reduction is good for mental health.)
This is something any crypto-bro would have told you in 2017.
Really don't understand the constant crypto comparisons. We have one technology that hasn't provided any benefits whatsoever in 10 years and one that has provided real utility from day one. One deserves the hype, the other doesn't.
Haha! Yeah good idea.
It's part of an overall "tech hype fatigue". Think of all the waves upon waves: big data, social apps, crypto/web3, self-driving cars, virtual reality, AI, etc.
At the same time, people's actual quality of life and economic standing are going nowhere, there is fragility that bursts into the open with every stress, politics has become toxic, and the environment gets degraded irreversibly.
Yet people simply refuse to see and they keep chasing unicorns.
Everyone is a temporarily inconvenienced multi-millionaire. They only need to get in on the next big thing and ride their way to a comfortable life at the top.
It's called being transfinancial. You feel like a rich person but are born in the body of a poor one.
Yes, many people are feeling AI fatigue. AI can be overwhelming and many people feel like they are being bombarded with information that they don't understand. People are also concerned about how AI is being used and its potential implications for privacy and security.
Sorry, I couldn't help myself; that is the ChatGPT response to your question. More informatively: AI is clearly at the height of inflated expectations. It will provide a helpful tool. However, it will not push people out of jobs. Furthermore, right now it gives a much better search experience than Google, as it is not yet filled with ads nor gamed extensively by SEO. It is doubtful this will stay like this in the future.
I could tell by the start of the second sentence.
ChatGPT's overuse of the word "overwhelming" and a couple of other similar words is very characteristic. I think it comes from the "political correctness" / "provide kind answers" prompts it is bombarded with during training.
That first paragraph. It is a big thing that a machine can generate something like that, but in reality it feels like it just brings noise. Not sure why anyone expects this to improve the SEO noise.
https://en.wikipedia.org/wiki/Gartner_hype_cycle
I think machine learning already went through the trough of disillusionment around 2016-2018 in computer vision and around 2018-2020 for voice assistants.
I think we're now past that and people can see that tools like ChatGPT are powerful enough to be applied in many pre existing contexts and industries in unpredictable and inventive ways without huge amounts of manual configuration, which makes it more exciting.
ML/AI is a repeat offender (for that matter, so is The Almighty Blockchain; it managed a few hype cycles under slightly different identities; blockchain, ICO, NFTs, and so on). Remember in the late 90s when Microsoft and Apple both appeared fully convinced that voice would be the primary interface with computers imminently? There was also a large brief chat agent bubble a few years back.
Machine learning is way too generic of a term. Everything from linear regressions to neural models is technically "machine learning".
Language models are right now at the very top of the peak of inflated expectations. It's still too early to tell what the real impact will be, but it won't be even remotely close to what you read on the headlines.
Far more impressive technology (like Wolfram Alpha) has existed for almost a decade now, and it's directly comparable to language models for many applications.
My guess is they will end up being something like Rust. Very cool to look at, little impact on your day-to-day.
If you can jump around without predicting which point is next, the hype cycle is useless. There are terms people use for things that are in vogue. There is no hype cycle.
ChatGPT is, of course, a great piece of software, but the huge hype is probably what it will be best remembered for. Also, since currently, AI is the exclusive playground of big corporations, to me it's a bit puzzling how some people can get so excited (and maintain that excitement) over something that they cannot control and have little hope of building it by themselves. I guess some are just more in love with technology, than with other things in life. Because, as everyone is probably well aware by now, more technology is the solution to every problem that ever faced mankind and will finally fix everything. :)
I assume that there will be an open and freely available model as large as ChatGPT within a year or so. Training costs are prohibitive but what about NSF grants?
I don't know about the NSF, or when governments will get in on AI, but you're probably right, the technology will become open source in a while, as has happened in the past.
For the time being, though, the cost of developing and training an AI system looks much less likely to come down, keeping it out of reach of most individuals.
When the PC revolution was happening, everyone interested had a good chance of getting in, they just needed some money to buy/rent a computer and learn to use it or program it.
Compared to that, the AI revolution doesn't seem to have the same quality.
The barrier to entry seems much much higher this time.
I do think that governments will have an interest in keeping around models that they're in control of. Just as there is publicly funded broadcasting, you may want to be able to control all the biases of a widely-used model and not just import it from somewhere.
The HN crowd would do well to take a step back and look at this from a little different perspective. We are able to see "AI" and machine learning as the very young and imperfect technologies that they are. That said, ChatGPT is the first time a technology like this has been even REMOTELY available to the vast majority of the public, and democratizing this capability even in the small box of a chat window is wildly disruptive. Even after explaining that it probably isn't a great idea to use it for generating 100% factually reliable content, everyone I've shown it to has come up with ways they would use it to make small-but-meaningful improvements in some area of their lives. Consider an immigrant owner of a landscaping company who isn't super confident in their English but needed to respond to a customer to clarify exactly what they need to do on a job site. Did they close the deal solely because of ChatGPT? Hard to say for sure, but it saved an hour of productive time and likely made it easier for the customer to be a reference/word-of-mouth fan.
ChatGPT is great, but it's being hyped up so much right now. We've got AI bros coming in the scene trying to sell everybody a new product. Before the crypto craze, it was big data. I probably missed something in between.
ChatGPT has certainly made a splash, but it's part of a larger trend. I started following developments in modern AI when Kevin Kelly tweeted[1] this in 2016:
> The business plans of the next 10,000 startups are easy to forecast: Take X and add AI.
I think the AI hype cycle isn't done building. A few days ago, Paul Graham tweeted[2] this:
> One of the differences between the AI boom and previous tech booms is that AI is technically more difficult. That combined with VC funds' shift toward earlier stage investing with less analysis will mean that, for a while, money will be thrown at any AI startup.
[1]: https://twitter.com/kevin2kelly/status/718166465216512001
[2]: https://twitter.com/paulg/status/1623060319403905026
It was actually briefly ML again; there was a chat agent VC funding bubble before the main crypto VC funding bubble.
I thought you were going to use "AI Fatigue" in the same sense as "JS Fatigue", and I was going to agree a lot.
I've got "AI Fatigue" not in the sense that it is overhyped, but just like "JS Fatigue": It is all very exciting, and new genuinely useful and impressive things are coming up all the time, but it's too much to deal with. I feel like it's difficult to start a product based on AI these days due to the feeling that it will become obsolete next week when something 10x better will come out.
Just like with JS Fatigue back in the days, the reasonable solution for me is something like "Let the dust settle a bit before going in the latest cool thing"
I’m not. I’ve been using OpenAI’s API a lot for work and it’s made tasks easy that would previously have been very challenging. Some examples:
- Embedding free text data on safety observations, clustering them together, using text completion to automatically label the clusters, and identifying trends (a rough sketch of this one is below, after the list)
- Embedding free text data on equipment failures. Some of our equipment failures have been classified manually by humans into various categories. I use the embeddings to train a model to predict those categories for uncategorized failures.
- Analyzing employee development goals and locating common themes. Then using this to identify where there are gaps we can fill in training offerings.
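To make the first item concrete, here's a rough sketch of how such a pipeline might look, assuming the GPT-3-era OpenAI Python client plus scikit-learn; the model names and data are placeholders, not the poster's actual setup:

    # Hedged sketch of the embed -> cluster -> auto-label workflow.
    import numpy as np
    import openai
    from sklearn.cluster import KMeans

    openai.api_key = "YOUR_KEY"  # placeholder

    observations = [
        "Worker not wearing gloves near solvent tank",
        "Spill at loading dock not cordoned off",
        "Ladder used on uneven ground",
        # ... more free-text safety observations
    ]

    # Embed each observation into a vector
    resp = openai.Embedding.create(model="text-embedding-ada-002",
                                   input=observations)
    vectors = np.array([row["embedding"] for row in resp["data"]])

    # Cluster the embeddings (choose k via silhouette score in practice)
    k = 2
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(vectors)

    # Use text completion to name each cluster from a sample of its members
    for cluster in range(k):
        sample = [o for o, l in zip(observations, labels) if l == cluster][:10]
        completion = openai.Completion.create(
            model="text-davinci-003",
            prompt="Give a short category label for these safety observations:\n"
                   + "\n".join("- " + s for s in sample) + "\nLabel:",
            temperature=0.0,
            max_tokens=10,
        )
        print(cluster, completion["choices"][0]["text"].strip())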
I'm not at all tired of AI, but I am tired of all the sales/marketing/business people taking our AI, misunderstanding it, pretending it does all sorts of things that it absolutely does not do and then also not being willing to being educated about how things _really_ work under the hood.
It's the same kind of people that were hyping cryptocurrencies in the past. People who understand nothing about the technology, but shout the loudest about how amazing it is (probably to make money off of it). Those are also the kind of people that will be the cause of the next AI winter.
With all respect for good salespeople who can understand a customer and recommend the right solution: those other types, who just verbally hype something until they get their lead, may be replaceable by said AI some day...
We're in the middle of an AI hype. Much like with previous hypes (crypto etc), time will tell whether it was worth it. Unless you're chasing gold or selling shovels the only thing to do is just to wait it out.
I'm tired of people saying they're tired. I use ChatGPT every day and it provides results of a quality that has little to do with Google, even though I'm aware that it sometimes fibs to me, makes up names for functions that don't exist, or makes mistakes. There's hype, but I think it's much more deserved than the hype around cryptocurrencies
There are plenty of people who live their entire life with bitcoin - they get paid in bitcoin, pay their bills in bitcoin, and send each-other money in bitcoin.
I think it's safe to say your experience is an outlier, just like theirs are.
I'm happy it's working for you, but if you really do use it every day, you surely can understand the points where it doesn't live up to the hype -- or at the very least, how it is not for everyone.
I would say, at least it’s not as predatory and unethical as crypto, where the people involved are knowingly harming others.
But it seems like the current trendline for “AI” is going to be worse. Why be excited about building tools that will undermine democracy and cast doubt on the authenticity of every single photo, video, and audio clip? Now it can be done cheaply, by anyone. It will become good enough that we cannot believe any form of media. And it will also make it impossible to determine whether the written word is coming from an actual person. This is going to be weaponized against us.
And at the very least, if you think blogspam sucks now, wait until this becomes 99.9999% of all indexed content. It’s going to jam all of our comms with noise.
But hey it looks great on your resume, right?
Maybe I’m too cynical, would love for someone to change my mind. But you are not alone in your unease.
To be honest, I see some positivity in that.
> Now it can be done cheaply, by anyone. It will become good enough that we cannot believe any form of media. And also make it impossible to determine if the written word is coming from an actual person. This is going to be weaponized against us.
We shouldn't believe any form of media straight away. We only do so because we think faking it is hard and wonder why anyone would. Being able to produce fakes cheaply could make people more attentive and skeptical of the things around them. Blogspam sucks mostly because of the consumer's belief that it was written by a person who deeply cares about them. The average internet consumer consumes the shitty internet not because they are ignorant, but because they don't know enough to care.
But maybe I'm too optimistic; I just think people are not aware of the stuff around them.
Let's imagine a scenario:
There is a state of emergency presidential address. In Video A, the politician says X Y Z. In Video B, the politician says A B C. Both videos have equal credibility. The videos show no artifacts from tampering. The alteration is undetectable by experts. The broadcast has dire consequences in a divided country.
50% of channels are pushing Video A, 50% of channels are pushing Video B.
We are now in a position where the public actually cannot determine which video is authentic. The politician could broadcast a new statement, to clarify the validity of the first video. But, you could just as easily fake that too, to publish a statement that declares the opposite.
So, then you load up Hacker News or wherever, to determine for yourself what the hell is going on. But someone spins up 1,000 bots to flood the comments in favor of Video A, and someone else spins up 1,000 bots to flood the comments in favor of Video B. These comments are all in natural language, all with their own individual idiosyncrasies. It's impossible to determine if the comment is a bot. And because the cost is essentially free, these bots can be primed for years, making mundane comments on trivial topics to build social credibility. Actual humans only account for maybe 1% of this discourse.
Now imagine: our entire world operates like this, on a daily basis, ranging from the mundane to the dramatic. How about broadcasting a deepfake press statement from a CEO to make a shorted meme stock crash. If there are financial/political incentives to do so, and the technological hurdle is insignificant, these tools will be weaponized.
So how do we "not believe the media"? Do we all have to be standing in the same room together whenever something notable happens?
I understand that there could be upsides, the world isn't all doom and gloom. But, I think engineers get myopic, and do not heed the warnings.
If some person A decides to pay person B in crypto money instead of dollars or pesos, because it's more convenient/cheaper/faster, how is that harming you? That's their business. Nobody is forcing you to use crypto.
The world has some cool new toys and I'm glad people are playing with them. I hope it only becomes increasingly accessible with time. Yes, there's going to be a ton of snake oil and disappointments if you listen to the people looking to get rich quick, but I'm excited for what might come from it in the end.
In the meantime, all the attention and media is easing people into thinking about some difficult questions that we may end up having to deal with sooner than we'd like.
The hype can be annoying, and I'm sure there'll be suckers who lose a lot of money chasing it, but I'm also sure AI will get better, and be better understood too, as a result of all of the attention and attempts to shoehorn it into new roles and environments.
Not really AI itself but I am already sick of people asking ChatGPT then posting it in the comments of HN/Reddit.
It just feels like a waste of time having read the comment. Even if the information is there, I don't trust the user to be able to distinguish between true and confidently false. If it's not my skillset or knowledge base, I assume it's wrong because I can't tell and can't ask follow-up questions.
Me using it as an assistant? Love it. Others using it as an assistant? I don't trust them to be doing it right.
In any case, I want to read your opinion, copy-paster, not that of a robot I could just ask in my own time! Just don't post if you've got no thoughts lol
In the early 2000s people were also annoyed by all the internet hype. "DotCom this and DotCom that", and it was stupid that a company could announce they were adding a DotCom to their name and the stock would go up a bunch. So yes, it is annoying that all these crypto scammers and entrepreneurs have put a wrapper around a GPT API call and hype it up.
BUT the rate of change in AI is enormous and it will be a much bigger deal than the internet over the next 10 years. Not because of API wrappers, but because the cost of many types of labor will effectively go to zero.
>In the early 2000s people were also annoyed about all the internet hype.
Well, they were right...
They were right? Even if we only consider the ability of the internet to enable remote work, the internet's impact to human life is astronomic.
I never even fully recovered from the "Facebook, but for X" fad.
At least all the previous crazes didn't threaten to replace humans, so I suppose this tech hype bubble is arguably even more irritating.
The hype will die down fairly quickly. But this technology is obviously a huge deal. We've found a practical algorithm to turn more powerful hardware into better results. And hardware was still ramping up at an astonishing rate last time I checked.
It seems more likely that we'll surpass the hype than not in the next few decades. I think people have forgotten how quickly technology can move after the last 20 years of relative stability where more powerful hardware didn't really change what a computer can do.
It's been happening for a decade and I've learned to ignore it.
Cloud for this cloud for that! Blockchain for this blockchain for that! Big Data for this, big data for that! Web scale all the things!
The marketing-driven development is exhausting and has done nothing to improve technology. This happened because of 0% interest rates and free money. People have been vying for all the VC money by creating solutions in search of problems, which end up being useless solutions for which no problems exist.
100% this. This is marketing at the highest level. Heck, look at MS now. They knew this was the opportunity to associate Bing with the hype, and with Google making a mistake with its "infamous" video, they won round 1.
Let's wait until the end of the year and see how much of this wave holds up.
It's just the regular wantrepreneur wave when a new shiny thing is released. We had the same with crypto; give it a few months and they'll crawl back to where they came from.
I have a bit of AI fatigue around this wave of tools, but also understand why they are garnering so much attention. Many of the innovation hype categories of the past decade have appeared stuck in the 'early days but just wait...' phase. Self-driving cars, drone delivery, crypto as a currency, crypto as a(n) _____, plant-based meats, virtual reality, etc. While there has been great progress in each of these areas, not one has yet matched market demand with current capabilities in a way that enables it to become a 'game changer.'
To the general public, ChatGPT and the Image Generators 'just appeared,' and appeared in a very impressive and usable form. Of course there were many waves of ML advances leading up to these models, but for many people these tools are their first opportunity to play with ML models in a meaningful way that is easy to incorporate into daily life and with very little barrier to entry.
While impressive and there are many applications, my questions surrounding the new AI tools relate to the volume of information they are capable of producing and our capacity to consume it. Tools can be used to synthesize the information, tools can act on it, but there is already too much 'noise.' There is a market for entertainment tailored to exact preferences, but it won't provide the shared cultural connection mass media provides. In the workplace, e-mails and documents can be quickly drafted. This is a valuable use case, but it augments and increases productivity. It will lower the bar necessary for certain jobs, and it will increase productivity expectations, but it will become a tool like Excel rather than a replacement like a factory robot (for now).
The Art of Worldly Wisdom #231 - Never show half-finished things to others. <- ChatGPT managed its release perfectly in this regard.
I think with any technology, there will always be individuals looking to make a quick buck. Whether that's a fledgling startup trying to woo investors, big tech cos looking to pump their share price, or your average Twitter/LinkedIn influencer peddling engagement bait.
IMO AI has reached this stage of its lifecycle. There have always been, and still are, valid use cases for AI, but I think the GPT-3 inspired applications we've been seeing as of late are no more than impressive tech demos. It's the first time the general public has seen a glimmer of where AI can go, but it really is just a glimmer at this point.
My advice is to keep your head down and try to be selective with the content you engage with on AI. It seems like every feed refresh I have some unknown Twitter Verified account telling me why swaths of the population will be out of a job soon. The best heuristic I have so far is to ignore AI-related posts/reshares from names I haven't heard of before, but of course that has obvious drawbacks.
What winds me up is the mis-branding, sometimes deliberate, sometimes not (which one is worse?!), of basic computer processing as "AI".
It's not AI it's an IF statement for crying out loud :-(
But this is the industry we're in, and buzzword-driven headlines and investment are how it goes.
Actual proper AI getting some attention makes a pleasant change tbh :-)
I disagree; consider the use of the term "video game AI", which historically at least has just been a bunch of _if_ statements chained together. This is totally valid, it's an example of AI without machine learning.
The thing is that AI is just about the most general term for the type of computing that gives the illusion of intelligence. Machine learning is a more specific region of the space of AI, and generally is made of statistical models that lead to algorithms that can train and modify their behavior based on data. But this includes "mundane" algorithms like k-means clustering or line-fitting. Deep learning (aka neural networks) is yet a more specific subfield of ML.
I think the term AI just has more "sex appeal" because people confuse it with the concept of AGI, which is the holy grail of machine intelligence. But we don't even know if this is achievable, or what technology it will use.
So in terms of conceptual spaces, we can say that AI > ML > DL, and we can say (by definition) that AI > AGI. And it seems very likely that AGI > ML. But it's not known, for instance, whether AGI > DL, ie, we don't know for sure that deep learning/neural networks are sufficient to obtain AGI.
In any case, people should put less weight on the term AI, as it's a pretty low bar. But also yes, the term is way over hyped.
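To make that "low bar" concrete, a toy example (mine, not the parent's): by the textbook definition, even least-squares line fitting counts as machine learning, with a fit step and a predict step.

    # Toy illustration: least-squares line fitting already counts as "ML"
    # in the technical sense (parameters learned from data, then used to
    # predict unseen inputs).
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

    slope, intercept = np.polyfit(x, y, deg=1)         # "training"
    predict = lambda x_new: slope * x_new + intercept  # "inference"

    print(predict(5.0))  # extrapolate from the learned parameters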
I'm thinking of cases such as colleagues selling as "ML" something they were then forced to admit was "we use SQL to pick out instances of this specific behaviour we knew was happening". Embarrassing all round.
As folks that work in tech we can tell the difference between stuff that's got some form of depth to it in "proper" AI: ML, DL, AGI as you suggest, vs the over-hyped basic computation stuff. And the selling of the latter as the former can rankle.
I feel like AI scientists themselves are partially to blame for this. For starters, AI does not 'learn' the way a human learns. But still, many of the field's main terms are based on learning: terms like 'learning rate', 'neural networks', or 'deep learning' imply that there's some kind of being that learns, not just a very complicated decision tree. It's not all the fault of hype marketing people!
> AI scientists themselves are partially to blame for this
They are not the ones addressing the public or swaying opinion.
> It's not AI, it's an IF statement, for crying out loud :-(
But all NNs _are_ if statements! https://arxiv.org/abs/2210.05189
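A toy version of that paper's point, with weights invented for the example: a one-hidden-layer ReLU network computes exactly the same function as a pair of if statements, one per hidden unit.

    import numpy as np

    # A tiny 1-input, 2-hidden-unit ReLU network; all weights invented.
    W1, b1 = np.array([1.0, -1.0]), np.array([-0.5, 0.5])  # hidden layer
    W2 = np.array([2.0, 3.0])                              # output layer

    def net(x: float) -> float:
        return W2 @ np.maximum(0.0, W1 * x + b1)

    def as_ifs(x: float) -> float:
        out = 0.0
        if x - 0.5 > 0:     # first ReLU unit active
            out += 2.0 * (x - 0.5)
        if -x + 0.5 > 0:    # second ReLU unit active
            out += 3.0 * (-x + 0.5)
        return out

    for x in (-1.0, 0.0, 0.25, 2.0):
        assert abs(net(x) - as_ifs(x)) < 1e-9  # identical functions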
From my side, three feelings fight within me: 1. it's incredible how well this AI draws and writes; 2. will it take my work as a programmer?; 3. what if this AI makes a mistake?
I think putting AI inside everything will give us the opportunity to experience first-hand what a local extremum of a multidimensional function is, and how it differs from the global extremum. Our CV gets eliminated because some AI-based résumé screener glitches. Our car loses a wheel because computer vision failed (or we lose our heads, like that one Tesla owner)... Scariest for me is that we are starting to build more and more things whose inner workings we cannot understand. Hence an intelligence crisis might be creeping slowly into our civilisation, and then bam... like in Andrzej Zajdel's Van Troff's Cylinder.
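For anyone who hasn't had that first-hand experience yet, a minimal sketch (the function is chosen purely for illustration): plain gradient descent lands in a different minimum depending on where it starts.

    # f(x) = x**4 - 3*x**2 + x has a global minimum near x = -1.30
    # and a merely local one near x = 1.13.
    def grad(x: float) -> float:
        return 4 * x**3 - 6 * x + 1  # derivative of f

    def descend(x: float, lr: float = 0.01, steps: int = 2000) -> float:
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    print(descend(-2.0))  # ~ -1.30: finds the global minimum
    print(descend(+2.0))  # ~ +1.13: stuck in the local minimum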
You are hardly alone, but you are also likely in the midst of an actual paradigm shift, so the "ecosystem" does what ecosystems do. ie: herds stampede, flocks flock, scavengers scavenge, parasites uhh parasite.
It will be increasingly tiresome until it becomes commonplace, then the disastrous consequences will become the next tedium.
I'm very excited about the methods themselves, but I'm so tired of the manic "vibes" around them, and of how they're starting to affect related fields too. I work at the border between neuroscience and AI, and there are some undeniably cool developments, but there's also so much hype and overkill.
In a better world, it'd be possible to occasionally pause, take a breath, and think about what the models are actually doing, how they're doing it, and whether that's what we want something to do. However, it's hard to find space to do so without getting run over by people "moving fast" and breaking things, and the hard corrective work feels so much less rewarded.
I feel like there will be some lucrative opportunities somewhere here to exploit the fallout from this hype. I get the fatigue, but our incomes are basically critically dependent on investor FOMO.
I'd rather we have bitcoin crazes, scaling crazes, nosql crazes and GPT crazes than this industry commoditizes itself to hell and I have to spend the rest of my career gluing AWS cognito to AWS lambdas for $55k / year.
At the same time I'm pretty sure that it will wildly change any industry where creativity is critically important and quality control either isn't that important or can be done by amateurs. There is substance at the core of the hype.
Absolutely not.
It seems too exciting to me and I am eager to see more AI. It's fascinating stuff.
Happy to find a fellow soul. I'm fatigued by the complaints of AI fatigue, especially where the complaints aren't based on recent (last year or so) first-hand use.
It's bold (to put it kindly) how lengthy some of these critical comments are from folks who later in the thread admit to not having personally used Copilot (for example) much themselves.
The quality of LLM output can vary wildly based on what prompt (or series of prompts) is used.
There's probably a koan for it, but: separate the commercial from the noncommercial.
I'm excited for these emerging technologies, but I don't care about any of the products people want to sell based on them. I've spent the past 27 years developing zero-effort self-filtering against spam and hucksters, so I'm not even aware of any AI startups, just as I can't tell you the names of any Bitcoin exchanges. That's just not in my sphere, and I'm not missing out.
Hunker down and have fun. It's incredibly accessible, and you likely have more than you need to get started making it work for you.
I too feel overwhelmed by this sudden rush towards *GPT. The content generated by AI is slowly erasing the line between creative content and computer-generated content. I remember last year, when so many people were earning, or trying to earn, by creating art to sell as NFTs. Once Dall-E landed, the originality quotient of any creative content was lost. Likewise, ChatGPT is going to erase the originality in text content. Once the internet is mixed with AI-generated content, there is no going back. We can't tell what's real work and what's AI-generated.
Yes but there's nothing new in what you say. Whenever something new comes out, people try to capitalize on the buzzword, even if what they're doing has zero relevance in practice. The whole "X but in Y" thing reinvents itself all the time. "X but in Rust". "X but on the blockchain". "X but with Neural Networks". "X but with nanobots". "X but quantum".
The best you can hope for, if you're a "Y" person, is for the marketers to get bored of the current Y and jump to the next one, leaving yours alone.
If you are getting AI fatigue, you never really scratched the surface; you limited yourself to the hype train of AI, not actual AI.
AI is wide and deep, and its proper uses are far removed from mainstream media and the hype train.
AI still has so many undiscovered areas of usefulness that it will do nothing short of transforming those areas.
But most of the time you hear about Stable Diffusion, see melted faces and weird fingers, and look at screenshots of ChatGPT.
These are nothing, in breadth or depth, compared to what is possible.
So, no, I am not AI-fatigued, as I don't pay much attention to these hypes at all.
Although I understand your sentiment, I think it’s an inevitable phase in the development of any new groundbreaking technology.
People are still trying to figure out what the new AIs can and can’t be used for.
Some people will try to build ridiculous products that don’t work, but that’s just part of the learning process and those things will be weeded out over time.
There’s no ‘clean’ path to finding all the useful applications of these new models, so be prepared to be bombarded with AI powered tools for a few more years until the most useful ones have been figured out.
This happens during all hype cycles, with one big difference:
While crypto or VR tech still hasn't arrived in our daily lives, most of my friends are already using tools like ChatGPT on a regular basis.
It's very much the new blockchain. For the next year or so everything will have "AI" in the description because it produces a pavlovian response in VCs, then it'll move onto something else. So goes the tech industry.
None of this is new; there's a special magic phrase to attract VCs that changes every few years, and for now AI is it (we've actually been here before; there was a year or so a while back when everything was an "intelligent agent"/chatbot).
AI for actual applications, where mistakes are costly, is becoming an idiot marker, just as "crypto" has already become for projects other than decentralized finance or "smart" contracts. FOMO is great on this, though, so every major investor/consultant is now starting to tell you to get shitcoins.
AGI could be ML-driven; most likely it will not be. Neural nets are still AI tech. Even Bayesian inference is weakly AI tech.
The public always misuses words. Words change to match that meaning.
In the last ~15 years AI has been on the hype train, but it has also had moments when it started to die down (remember chatbots?).
ChatGPT is the "new" booster shot, it's a hell of a boost and this one might stick. What will not stick is the copious amount of wishful thinking and bullshit the usual suspects are bringing in. ChatGPT is a godsend after crypto went bust and the locusts had to go somewhere else.
I suspect we will have to endure a crypto-craze-like environment for a couple of years at least...
From an engineering perspective, ChatGPT makes some ridiculous errors and at times seems to fabricate random guesses. When these are pointed out, it simply apologises and says "oh, you're right", even if it is being fed a lie.
When asked for references, it cannot provide any. Scientifically useless?
Until AI can filter fact from fiction, it will continue to frustrate the technical people who rely on absolute truths to keep important systems running smoothly.
Me too. Especially since ChatGPT, the overhype has gone wild. I think it's because it's the first chatbot that repeatedly passes the Turing test for the masses. What annoys me the most is that people fall for the illusion of talking to an intelligent being, and the media (that I've read) does not seem to bother to debunk it.
That uncritical handling, along with a growing supply of products, can lead to the next big bullshit bubble.
I think with any research project, when it gets successful enough and complex enough, we stop seeing the future potential and become annoyed that we're not getting what we want from it.
Machine Learning research isn't "for us." Let the researchers do what they do, and toil away in boring rooms, and eventually, like the internet slowly did, it will be all around us and will be useful.
While I agree the name AI is pretentious, I personally and professionally embrace the technology.
Personally I enjoy creating language models and agent networks; at work I make predictive models, so... :)
Even if I didn't find the tech fascinating, especially the new emergent features of the big LMs, I would be left in the dust professionally if I ignored it. The tech really works for a lot of stuff.
I understand the sentiment, but I'm trying actively to get away from all the hype and focus on the capabilities it has today. It's being useful for a bunch of people, there are some threads on it such as: https://news.ycombinator.com/item?id=34589001
Every time I read people dismissing AI for the same tired old reasons, I am reminded of this line from the Psalms: “the Lord knows the thoughts of men, that they are vanity”.
The only thing we can definitely do better than machines is sad, proud sophistry. “Not real understanding, not real intelligence, just a stochastic parrot”. Sure, keep telling yourself that.
You may want to try to mentally reframe it so it doesn't bother you as much because it's not going away any time soon.
People are loud; that is the nature of public discourse. Ignore the noise. Focus on the things you find personally interesting; if they are Lindy-proof, it does not hurt (this includes maths, science, and any fundamental computational problem, like ray tracing). Use new stuff you personally find interesting or useful.
AI fatigue? Nope, far from it.
I asked ChatGPT some general questions, and it gave me pretty coherent answers. But when I asked a really specific question like "How to rewrite the Linux kernel in Lisp", it gave me a seemingly gibberish answer.
This was about 2 months ago, BTW. Maybe ChatGPT has already learned more stuff and gotten smarter. Let's see...
> Maybe ChatGPT has already learned more stuff and gotten smarter. Let's see...
LLMs don't have a mechanism to learn from interaction; their models are simply fed more data. With luck you'll get better results, but you might just as well get worse results if that data isn't well curated.
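A minimal sketch of what that means in practice, with a made-up generate() standing in for any frozen model: the only "memory" between turns is the transcript the caller chooses to resend.

    # Hypothetical stand-in for a real model call. Nothing here updates
    # any weights; the model is frozen at inference time.
    def generate(prompt: str) -> str:
        return f"(reply conditioned on {len(prompt)} chars of context)"

    transcript: list[str] = []

    def chat(user_message: str) -> str:
        # The illusion of learning: we just resend the whole transcript
        # as context on every call.
        transcript.append(f"User: {user_message}")
        reply = generate("\n".join(transcript) + "\nAssistant:")
        transcript.append(f"Assistant: {reply}")
        return reply

    print(chat("How do I rewrite the Linux kernel in Lisp?"))
    print(chat("Are you sure?"))  # more context, same frozen weights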
Maybe this will help https://chrome.google.com/webstore/detail/ai-just-some-if-st...
And you're not alone; I've felt the same since ~2015.
I feel the same way about the rapid proliferation of this latest generation of AI tech as I did when crypto began to take off: "this is interesting, but it will not deliver on the promises or live up to the hype, and there will be a lot of grifters and bad actors who will give the tech a bad name".
It seems to be part of a trend to prompt us, and in some ways mold us, into giving responses, under the probably sincere guise of convenience, like the famous Word paper clip. I think some find that useful, but it does tend to take away agency. This may be the slide to the AIs taking over :-)
Well, most products today need to be adaptive and handle complex states; even if it's a few if statements in there handling something in an intelligent way, that's AI. It's a broad term, after all. The problem is that it's also become a marketing buzzword.
There's much more than LLMs: RL, robotics, and graphs. AI hasn't reached mainstream gamedev yet, and general adoption among devs is low. I find it hard to convince my fellow devs from the old days (web, startups) to invest their time in learning AI.
All the things built on top of ChatGPT seem to me like bullshit, created simply to generate clickbait, with no future whatsoever. The next big AI thing will be ChatGPT 5 or a competing model with lower memory requirements.
Before AI it was Crypto. Before Crypto it was Quantum computing. Fusion pops up from time to time with "huge breakthroughs" that mean working products could be just 20 years away.
People love the optimism and the paranoia and uncertainty.
It’s the usual AI boom and bust cycles. The term is just too good to let go for marketers. It’s instantly evocative for the general public.
Just wait for it to underdeliver. Investors will get scared and we will be back to calling it machine learning.
AI is hyped, but I am still jumping on this horse. It is the game everyone has agreed on playing, and I think I can build and sell a compelling product. Yes, it is herd behaviour, but don't think you'll be safer on your own.
AI is the new blockchain. Fortunately for AI, it will have its use, it will just be more mundane than everyone thinks it will be, which is better than the crypto stuff.
Or who knows, maybe there will be an application for blockchains too.
Yes, and it is sad, because I see people trying to find problems for ChatGPT and saying this is the future... but the industries they are targeting have such a wide variety of concrete problems for a startup to solve.
It's important for the marketing machine to keep things going until Microsoft is able to transfer enough money from GOOG to MSFT, and, if possible, for Bing market share to be responsible for that move.
Yeah, this is the new gold rush, all the sharks and all the kids are joining the chase.
We've seen this pattern many times. And there is money to be made, for sure, but the value might not be there yet.
AI/ML in general seems good for situations when being wrong, indeed very wrong, occasionally is ok. So yes to AI driven recommendation engines, no to AI driven cars.
I haven't actually tried ChatGPT yet so it's just like the hundreds of other things I vaguely know of but don't really engage with. Not too bothersome.
Yes, so much this. This seems to be the same type of hype as blockchain a couple of years ago, when everyone said it would solve all our problems.
The "I" in AI is just complete bullshit, and I can't understand why so many people are in awe of a bit of software that chains words together based on some statistical model.
The sad truth is that ChatGPT is about as good an AI as ELIZA was in 1966, it's just better (granted: much better) at hiding its total lack of actual human understanding. It's nothing more than an expensive parlor trick, IMHO.
Github CoPilot? Great, now I have to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...) while writing new code. I'm beyond thrilled ...
So, no, I don't have an AI fatigue, because we absolutely have no AI anywhere. But I have a massive bullshit and hype fatigue that is getting worse all the time.
I'm more fatigued by people denying the obvious that ChatGPT and similar models are revolutionary. People have been fantasizing about the dawn of AI for almost a century and none managed to predict the rampant denialism of the past few months.
I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.
There's a fellow that kinda predicted it in 1950 [0]:
> These arguments take the form, "I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X."
> [...]
> The criticisms that we are considering here are often disguised forms of the argument from consciousness. Usually if one maintains that a machine can do one of these things, and describes the kind of method that the machine could use, one will not make much of an impression.
Every time "learning machines" are able to do a new thing, there's a "wait, it is just mechanical, _real_ intelligence is the goalpost".
[0] https://www.espace-turing.fr/IMG/pdf/Computing_Machinery_and...
> I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.
It's important to note that this is your assumption which I believe to be wrong (for most people here).
> I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.
Respectfully, that reads as needlessly combative within the context. It sounds like the blockchain proponents who say that the only people who are against cryptocurrencies are the ones who are “bitter for having missed the boat”.¹
It is possible and perfectly reasonable to identify problems in ChatGPT and similar technologies without feeling threatened. Simple example: someone who is retired and monetarily well off, whose way of living and sense of self worth are in no way affected by developments in AI, can still be critical and express valid concerns when these models tell you that it’s safe to boil a baby² or give other confident but absurdly wrong answers to important questions.
¹ I’m not saying that’s your intention, but consider that type of rhetoric may be counterproductive if you’re trying to make another understand your point of view.
² I passed by that specific example on Mastodon but I’m not finding it now.
> ChatGPT and similar models are revolutionary
For _what purpose_, tho? It's a good party trick, but its tendency to be confidently wrong makes using it for anything important a bit fraught.
So, to you, ChatGPT is approaching AGI?
The problem is that ChatGPT is about as useful as all the other dilettantes claiming to be polymaths. Shallow, unreliable knowledge on lots of things only gets you so far. Might be impressive at parties, but once there's real, hard work to do, these things fall apart.
As much as I’m sick of AI products, I’m even more sick of the “ChatGPT is bullshit” argument.
It can be both bullshit and utterly astounding.
In terms of closing the gap between AI hype and useful general purpose AI tools, no one can reasonably deny that it's an absolute quantum leap.
It's just not a daily driver for technical experts yet.
The biggest thing I’ve learned from chatGPT is that real people struggle with the difference between intelligence, understanding, and consciousness / sentience.
The only problem with the “ChatGPT is bullshit” argument is that it is only half true.
ChatGPT, when provided with a synthetic prompt, is reliably a synthesizer, or to use the loaded term, a bullshitter.
When provided with an analytic prompt, it is reliably a translator.
Terms, etc: https://www.williamcotton.com/articles/chatgpt-and-the-analy...
I like this take. It has many clear applications already, and LLMs are still only in their infancy. I both criticize and use ChatGPT at work. It has flaws and it has advantages. That it's "bullshit" or "ELIZA" is a short-sighted view that overvalues the importance of AGI and misses what we're already getting.
But yes indeed, there are many, many AI products launched during this era of rapid progress. Even kind of shoddy products can be monetized if they provide value over what we had before. I think the crowded market and all the bullshit and all the awesome, all at once, is a sign of very rapid progress in this space. It will probably not always be like this and who knows what we are approaching.
I can say with a certain degree of confidence that you haven't actually used CoPilot daily.
I've worked with teams that used Copilot. They claim it's great ("Hey, now I don't have to actually spend any time writing all this boilerplate!"), while for me, the person who has to review their code before releasing stuff, an easier way of writing boilerplate is not a positive, it's a negative.
If writing boilerplate becomes effortless, then you'll write more of it, instead of feeling the pain of writing it and then trying to reduce it, because you don't want to spend time writing it.
And since Copilot was accepted as a way to help the developers on the teams, the increase in boilerplate has been immense.
I'm borderline pissed, but mostly at our own development processes, not at Copilot per se. But damn if I don't wish it didn't exist, although it was inevitable that it would at some point.
I haven't. Now you know for a fact :)
What I have seen of it ranged from things that can be handled nearly as well by your $EDITOR's snippet functionality to things where my argument kicked in: I have to verify this generated code does what I want, ergo I have to read and understand something not written by me. Paired with the at least somewhat legally and ethically questionable source of the training data, this is not for me.
I've used it quite a lot and I agree with the original post. It seemed really useful at first, but then it started introducing several bugs in large blocks of code. I've stopped using it in the end, since the small one-line snippets are trivial enough to write myself (with just vim proficiency), and the larger blocks, on the order of a function autocomplete, are too bug-prone (and kill too much willpower budget to fix).
Yep. I’m personally skeptical of so many other use cases for LLMs but CoPilot is fantastic and basically just autocomplete on rocket fuel. If you can use autocomplete, you can use CoPilot super effectively.
This is such a bullshit answer. No, I don't use it daily, because I tried it for a couple of hours and it suggested nothing useful and several harmful things. Why would I keep using it?
I can say with a higher degree of confidence that you haven't actually used CoPilot daily for any respectably sized project.
> The "I" in AI is just complete bullshit and I can't understand why so many people are in a awe
I agree.
And the worst thing is that the bullshit hype comes round every decade or so, and people run around like headless chickens insisting that "this time it's different" and "this time it's the REAL THING".
As you say, first(ish) there was ELIZA. Then this, that, and everything else. Then Autonomy and all that dot-com-era jazz. Now, with compute becoming more powerful and more compact, any man and his dog can stuff some AI bullshit where it doesn't belong.
I have seen comments below on this thread where people talk about "well, it's closing the gap". The thing you have to understand is that the gap will always exist. Ultimately you will always be asking a computer to do something. And computers are dumb. They are and will always be beholden to the humans that program them and the information that you feed them. The human will always have the upper hand at any tasks that require actual intelligence (i.e. thoughtful reasoning, adapting to rapidly changing events etc.).
> And the worst thing is that the bullshit hype comes round every decade or so, and people run around like headless chickens insisting that "this time it's different" and "this time it's the REAL THING".
This. To answer the OP's question, this is what I'm fatigued about.
I'm glad we're making progress. It's a hell of a parlor trick. But the hype around it is astounding considering how often its answers are completely wrong. People think computers are magic boxes, and so we must be just a few lever pulls away from making it correct all the time.
Or maybe my problem is that I've overestimated the average human's intelligence. If you can't tell ChatGPT apart from a good con-man, can we consider the Turing test passed? It's likely time for a redefinition of the Turing test.
Instead of AI making machines smarter, it seems that computers are making humans dumber. Perhaps the AI revolution is about dropping the level of average human intelligence to match the level of a computer. A mental race to the bottom?
I'm reminded of the old Rod Serling quote: We're developing a new citizenry. One that will be very selective about cereals and automobiles, but won't be able to think.
This is not always true, see Chess.
Man, if this were 1800 you'd be stating that man would never fly and the horse would never be supplanted by the engine. I honestly don't believe you have any scientific or rational reasoning behind the point you are attempting to make in your post, because if you did, you'd be stating that animal intelligence is magical.
“AI” isn’t bullshit, it’s correctly labeled. It’s intelligence which is artificial: i.e. fake, ersatz, specious, not genuine… It’s our fault for not just reading the label. (I absolutely agree with your post and your viewpoint, just to be clear!)
"Artificial" means "not human" in this context for me, but I understand "intelligence" as the ability to actually reason about something based on things you have learned and/or experienced, and these "AI" tools don't do this at all.
But defining "intelligence" is a philosophical question that doesn't necessarily have one answer for everything and everyone.
The intention of the "artificial" in "AI" is not that particular meaning of "artificial", but the one for "constructed, man-made"—see meaning #1 in the Wiktionary definition[0]; the one you are using is #2.
It is often frustrating that English has words with such different (but clearly related) definitions, as it can make it far too easy to end up talking past each other.
[0] https://en.wiktionary.org/wiki/artificial
"Artificial" is not synonymous with "fake". "Fake" implies a level of deception.
I agree with you completely. I work in the field, and I think your sentiment is way more common amongst people who know about the technology, versus the fair-weather fans who have all jumped on the hype bandwagon recently. I actually posted the same thing (that it's no different from ELIZA) a month or so ago, and got at least one hilarious dismissal, like the "I bet you make widgets" person that replied to you.
If you believe that ChatGPT is similar to ELIZA, then I can guarantee that you have no rigorous, no-wriggle-room definition of what intelligence is. Maybe you think you understand it, or have defined it, but I'm 100% certain any such definition is not 100% reductive and instead relies on other ill-defined words like "reasoning", etc.
Thank you!
“It’s just statistics” is an evergreen way to dismiss AI. The problem is you’re also just statistics.
Source for consciousness/intelligence being "statistics"?
I don't think there is any, because there is no functional model for what organic intelligence is or how it operates. There are a plethora of fascinating attempts/models, but only a subset posit that it is solely "statistical". And even if it were statistical, the implementation of the wet system is absolutely nothing like a gigantic list of vectorized (stripped of their essence) tokens.
Shh. The models don’t like hearing that.
> Github CoPilot? Great, now I have to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...) while writing new code. I'm beyond thrilled ...
I think there's an argument to be made that AI is being used here to help you tackle the more trivial tasks so you have more time to focus on the more important and challenging ones. Albeit I recognise GitHub CoPilot is legally questionable.
But yes, I agree with your overall point: AI is still not able to 'think' like a human, but can only pretend to think like a human, and history has shown that users are often fooled by this.
I think the parent’s comment is probably referring to the fact if you use Copilot to write code then you have to go through and try to understand what it wrote and possibly debug it. And you don’t have the opportunity to ask it why it wrote it the way it did when reviewing its code.
As soon as I open a fresh IDE these days I immediately miss CoPilot and it's the first thing I install.
Hype or not, it's incredibly useful and has increased my productivity by at least 20%. Worth every penny.
I agree. I didn't understand the big deal about it passing a Google interview either. IMO, that said more about the uselessness of the interview than about the 'AI'.
Copilot has been semi-useful. It's faster than searching SO, but like you said, I still have to review all the code, and it's often wrong in subtle ways.
This is the meat of the issue: ChatGPT is exposing how susceptible certain things are to bullshit attacks; humans have just been relatively bad at mounting those.
It will turn out to be a useful tool for those who know what they’re asking about so they can check the answer quickly; but it will be USED by tons of people who don’t have a way of verifying the answers given.
ChatGPT is of actual help for me in various daily tasks, which was never the case with ELIZA or earlier chatbots which were only good as a curiosity or to have some fun.
Lack of actual human understanding? Of course, by definition a machine will always lack human understanding. Why does that matter so much if it's a helpful tool?
For what it's worth, I do agree that there is a lot of hype. But contrary to blockchain, NFTs, web3, etc., this is actually useful for many people in many everyday use cases.
I see it as more similar to the dot com hype - buying a domain and creating a silly generic website didn't really multiply the value of your company as some people thought in that era, but that doesn't mean that websites weren't a useful technology with staying power, as time has shown.
I'm sorry, I don't want it to get much smarter.
If you ask it to go through and comment code, it does a pretty good job of that.
It does some things better than others (it's not that great at CSS).
Need a basic definition of something? Got it.
Tell it to write a function: it's not bad.
As a BA, just tell it what you're trying to do and what questions it should ask users. It will come up with some good ideas for you.
Want it to be a PM? Have it create a loop asking every 10 minutes if you're done yet.
Is it a senior engineer? No. Can it pass a senior engineering interview? Quite possibly.
Debugging code: hit or miss.
I think the big thing is that it's not that great at front-end code. It can't see, so that probably makes sense. A fine-tuned version of CLIP that interacted with a browser would probably be pretty scary.
What's the point of letting it comment code? The programmer who reads the code can run it as well.
I don't really think of ChatGPT as AI at this point, just an incredibly useful tool.
I wonder if we will look back at this comment (and others like it) as similar to the infamous “takedown” of Dropbox when it was first posted on HN.
Time will tell, I certainly can’t predict.
The "I" in AI is just complete bullshit
We're about six minutes away from "AI bros" becoming a thing.
The same kind of grifters who always latch onto the latest thing and hype it up in order to make a quick buck are already knocking on AI's door.
See also: Cryptocurrency, and Beanie Babies.
No. I’ve ignored the whole thing so far.
I just use the features in the iPhone where some photos get enhanced, or I can detect and copy text from images.
So far it’s going very well.
No, I think it is just that you are browsing news sources that only follow the trends. Follow more independent thinkers and things will get better.
I kinda like how the tech bubble feels when "everyone" is excited about the same thing... (be it Lisp, a text editor or AI)
Not really. Deep learning has continued to deliver since 2012. You can't say that about crypto, or any other new tech.
No, I absolutely love it and can't get enough of it. I'm really happy that AI tools are going mainstream.
I told ChatGPT to implement itself. It was a WOW moment for me. It's like I was reborn into the next century.
As tired as I am of cryptocoins/NFTs/whatever... but it keeps popping to the top of HN anyway.
I've been using the hide button here for weeks because there are so many ai/gpt posts... so, yes.
There is no AI. At best it's diffusive sequence generation, and at worst it's just noise.
The analogue version was academic institutions that paid for gifted, intelligent people to have research assistants who went to the library. But we defunded that model of natural intelligence because it wasn't equitable for the ungifted.
As an operation, it's a big Mechanical Turk sped up with huge amounts of server spend. The utility of any output is a lossy derivative of curated knowledge and IP, trained and censored by some of the poorest people on the planet.
https://en.wikipedia.org/wiki/AI_effect
This is a sign that you should spend less time on your computer and go out a bit more.
No, just AI fatigue fatigue.
Still way below JS/Python fatigue or internet fatigue; actually, I find the recent GPTs a bit different in their impact (even though I'm not feeling the fatigue).
bring me npmGPT
Sriracha is great, but we don't need Sriracha Cheerios.
Sick of hype? Yes. Excited about the future of AI? Also yes.
Singularity, my man. You’re tired. But the world isn’t.
I work in the field, so: yes, since about 2015, heh.
There’s some very good stuff going on but no question the hype cycle is currently shifting. Crypto is dead, AI is the new crypto.
With that, all the hype-sters and shady folks rush in and it can quickly become hard to differentiate between what’s good, what’s misplaced enthusiasm, and what’s just a scam.
These scenarios are also a big case study in the Dunning-Kruger effect. I’ve already got folks that haven’t written a line of code in their life trying to “explain” to me why I’m not understanding why some random junk AI thing that’s not really AI is the next big thing. Sometimes you gotta just sit there and be like “thanks for your perspective”.
That’s Capitalism. AI is the new growth frontier, so it’s all you are hearing about. LLMs and generators seem like genuine innovations. But don’t lose sight of the fact that these innovations are driven by the Capitalist need to concentrate more surplus value into fewer hands. This is no different from the programmable looms, etc., of the past, except now they will try to automate immaterial/“intellectual” work. It remains to be seen whether these technologies will succeed at that, but the Capitalists are compelled to try, and we will be forced to live with the wreckage.
after 10+ years of stagnation or increments in general tech, this feels really novel
ofc HN over-analysing is killing the fun
You're absolutely right. And if this is only your second ride on the hype cycle, they come around often. Gartner publishes a list of them.
Try to focus on the bright side - now that you've seen behind the curtain, you can more easily avoid the hacks and shysters. They will try to cast the "ML/AI" spell on you and it won't take.
AI is the NFT of 2023.
Like I said some days ago, I really wished that the hype would die or dwindle a bit. I'm working on my own AI side-projects, but the amount of BS and misinformation being put out everyday by new "AI experts" is fatiguing, yes.
Speaking about opportunity cost of folks pursuing AI like a mad crowd... I started a ChatGPT competitor https://text-generator.io let me know what you think .. or if it's too much...
We are not there yet. And crypto is not dead, either.
How is crypto not dead?
Well, I am old enough to remember the crypto winters of 2015, 2018, and early 2020. Not the first, not the last. BTC/USD is still around 22,000, which looks very much not zero to me.
The total cryptocurrency market cap is over a trillion dollars.
The same way stock markets aren't dead.
If crypto is dead, then I'm sure you wouldn't mind gifting me several bitcoin :D
* not yet
Well, eventually either governments will survive, or crypto will. Governments (as nation states) have existed longer than crypto, but they have had their share of problems lately.