Comment by Joel_Mckay

5 days ago

I asked for Bob's proof, and was given marketing.

Then I provided instructions on how to present facts, and I still await the data.

Then immature folks showed up to try to cow people with troll content.

I don't have to prove anything, as the evidence was already collected and reported in peer-reviewed journals. People just prefer to ignore the cited evidence that proves they are full of steaming piles of fictional hype. =3

Proof of what exactly are you asking for?

Certainly there is zero question that today's "AI" systems are making progress on a wide array of benchmarks. You can see that here:

https://aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_...

Look in the section starting on page 73. Or just examine this image:

https://aiindex.stanford.edu/wp-content/uploads/2024/04/fig_...

There's your "speculative fiction" you seem so fond of.

Now, if your argument is no more than "who cares about benchmarks, 'AI' still isn't 'Real AI'", then all you're doing is repeating the 'AI effect'[1] over and over again and refusing to acknowledge the obvious reality around you.

The AIs we have today, whether "real AI" or not, are highly capable in many important areas (and yes, far from perfect in others). But there is a starting point to talk about, and yes, there is reason to think in terms of "rate of change", unless you have some evidence to support a belief that AI progress has reached a maximum and will progress no further.

> I don't have to prove anything, as the evidence was already collected and reported in peer-reviewed journals.

Again, evidence for what exactly? What are you even claiming? All I see Bob claiming, and what I support him(?) in, is the idea that there is a legitimate reason to worry about the economic impact of AI in the near(ish) future.

[1]: https://en.wikipedia.org/wiki/AI_effect

  • Indeed, I gather you did not comprehend the thread's topics, and instead introduced a straw man, arguing that at some point in the future LLM proponents will be less full of steaming piles of fictional hype.

    Assertions:

    1.) LLMs mislabeled as “AI” are dangerous to students, both by biasing them with subtle nonsense, and as when a VR girlfriend convinced a kid to kill himself. Again, the self-referential nature of arguing that the trends will continue toward AGI is nonscientific reasoning (a.k.a. moving the goalposts, because the “AI” buzzword lost its meaning to marketing hype), but it is economically reasonable nonsense rhetoric.

    2.) Software developers can be improved or replaced with LLMs. This has been demonstrated to be false for experienced people.

    3.) LLMs are intelligent, or will become intelligent. This set of arguments shows a fundamental misunderstanding of what LLMs are and how they function.

    4.) Joel may be a USB-connected turnip. While I can’t disprove this self-presented insight, it may be true at some point in the future.

    I still like you, even if at some point you were reprogrammed to fear your imagination. =3