Comment by stego-tech
6 days ago
I don’t consider that “new” research, personally - and neither do the AI boosters. The future they hype is one where these LLMs can magic up entirely new fields of research and study without human input, which isn’t how these models are trained in the first place.
That said, yes, it could be highly beneficial for identifying patterns in existing research that allow for new discoveries - provided we don’t trust it blindly and actually validate it with science. Though I question its value to society in burning up fossil fuels, polluting the atmosphere, and draining freshwater supplies compared to doing the same work with Grad Students and Scientists with the associated societal feedback involved in said employment activities.
> Though I question its value to society in burning up fossil fuels, polluting the atmosphere, and draining freshwater supplies compared to doing the same work with Grad Students and Scientists with the associated societal feedback involved in said employment activities.
I'd imagine AI is much cheaper on that front than grad students, whether you count marginal contribution or total costs of building and utilization. Humans are damn expensive and environmentally intensive to rear and keep around.
You really should read the papers and reporting coming out about the sheer cost of these AI models and their operation. It might seem significantly cheaper in terms of immediate impact, but those humans provide knock-on benefits that can decrease their environmental impact (especially if done in concert), while the current crop of AI is content burning NatGas turbines and guzzling up groundwater just so a human isn’t tasked with reading a full paragraph of information or a white paper of important content - and that’s the most optimistic view, at present.
Evaluating a technology in a vacuum does not work when trying to assess its impact, and in that wider context I don’t see the value-add of these models deployed at scale, especially when their marketing continues focusing on synthetic benchmarks and lofty future-hype instead of immediately practicable applications (like this one was).
I'm not evaluating it in a vacuum.
> You really should read the papers and reporting coming out about the sheer cost of these AI models and their operation.
Unless I've missed something big, they're still showing what I said.
Obviously, AI has its costs, and they're going to be big, because the whole world is using it and trying to develop better models.
> those humans provide knock-on benefits that can decrease their environmental impact (especially if done in concert)
Can you name three? As far as I know, humans are energy-intensive and major net carbon emitters in general - and there's only so much they can do to reduce that; otherwise we wouldn't be facing a climate crisis.
> the current crop of AI is content burning NatGas turbines
That's a misleading statement, not an argument. AI is powered by electricity, not natural gas. Electricity is fungible, and how it's generated is not relevant to how it's used. Even if you can point at a data center that gets power directly and exclusively from a fossil fuel generator, the problem has nothing to do with AI, and the solution is not "less AI", but "power the data center from renewables or nuclear instead".
> I don’t see the value-add of these models deployed at scale, especially when their marketing continues focusing on synthetic benchmarks and lofty future-hype instead of immediately practicable applications (like this one was)
That's the crux of the issue. You don't see the value-add. I respectfully suggest that you stop looking at benchmarks, stop reading marketing materials and taking them seriously (always a good idea, regardless of the topic), and stop listening to LinkedIn "thought leaders". Instead, just look at it. Try using it, see how others are using it.
The value-add is real, substantial, and blindingly obvious. To me, it's one of the best uses of electricity today, in terms of value-add per kilowatt hour.