Comment by threethirtytwo

1 month ago

>I'm confused. So you're agreeing with me, up until the very last part of the last sentence...? If the "noise overwhelms the signal", why are "trendlines the best approximation we have"? We have reliable data of past outcomes in similar scenarios, yet the most recent noisy data is the most valuable? Huh?

Let me help you untangle the confusion. Historical data on other phenomena is not a trendline for AI taking over your job. That's a typical logical mistake people make: reasoning by analogy. Because this trend happened for A, and A resembles B, therefore what happened to A must happen to B.

Why is that stupid logic? Because there are thousands of things that fit B as an analogy, and out of those thousands, some failed and some succeeded. What you're doing, without realizing it, is SELECTIVELY picking the analogy you like and using it as evidence.

When I speak of a trendline, it's dead simple: literally look at AI as it is now and as it was in the past, and use that to project into the future. Look at exact data on the very thing you are measuring, rather than trying to graft some analogous thing onto it and making a claim from that.
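The projection I'm describing can be sketched in a few lines. To be clear, everything below is a toy illustration: the years and capability scores are invented numbers, not real benchmark data. The point is only the method, that you fit the trend to measurements of the thing itself and extrapolate:

```python
import math
from statistics import mean

# Hypothetical capability scores, for illustration only -- NOT real benchmark data.
years  = [2015, 2017, 2019, 2021, 2023]
scores = [5.0, 12.0, 24.0, 48.0, 95.0]

# Fit a straight line to log(score) by least squares,
# i.e. assume an exponential trend in the raw scores.
log_s = [math.log(s) for s in scores]
mx, my = mean(years), mean(log_s)
slope = sum((x - mx) * (y - my) for x, y in zip(years, log_s)) / \
        sum((x - mx) ** 2 for x in years)
intercept = my - slope * mx

def project(year):
    """Extrapolate the fitted trend to a given year."""
    return math.exp(intercept + slope * year)

print(project(2025))  # projects the fitted trend past the last observed point
```

Whether an exponential fit is the right model is itself an empirical question, but at least the input is data about AI, not an analogy to some other technology.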

>What you're doing is similar to speculative takes during the early days of the internet and WWW. How it would transform politics, end authoritarianism and disinformation, and bring the world together. When the dust settled after the dot-com crash, actual value of the technology became evident, and it turns out that none of the promises of social media became true. Quite the opposite, in fact. That early optimism vanished along the way.

Again, same thing. The early days of the internet are not what's happening with AI currently. You need to look at what has happened to AI and software from the beginning until now. Observe the trendline of the topic actually being examined.

>I think neither of these viewpoints are worth paying attention to. As usual, the truth is somewhere in the middle. I'm leaning towards the skeptic side simply because the believers are far louder, more obnoxious, and have more to gain from pushing their agenda. The only sane position at this point is to evaluate the technology based on personal use, discuss your experience with other rational individuals, and wait for the hype to die down.

Well, if you look at the pace and progress of AI, the quantitative evidence points against your middle-ground opinion here. It's fashionable to take the middle ground because moderates and grey areas seem more level-headed and reasonable than extremism, but is that really applicable to reality? Extreme events that overload systems happen in nature all the time; taking the middle ground without evidence pointing to the middle ground is pure stupidity.

So all you need to look at is this: the progress we've made over the past decade. A decade ago, AI via ML was effectively non-existent. Now AI generates movies, music, and code, and unlike AI-generated movies and music, that code is actually being used by engineers.

That's ZERO to coding in a decade. What do you think the next decade will bring? Coding to what? That is reality, and the most logical analysis. Sure, it's OK to be a skeptic, but to ignore the trendline is ignorance.