Comment by TeMPOraL

7 days ago

> where I remain a skeptic is this constant banging-on that somehow this will translate into entirely new things - research, materials science, economies, inventions, etc - because that requires learning “in real time” from information sources you’re literally generating in that moment, not decades of Stack Overflow responses without context.

Personally, I hope this will materialize, at the very least because there are plenty of discoveries to be made by cross-correlating discoveries already made; the necessary information should be there, but reasoning capability (both that of the model and that added by orchestration) seems to be lacking. I'm not sure if pure chat is the best way to access it, either. We need better, more hands-on tools to explore the latent spaces of LLMs.

I don’t consider that “new” research, personally - because AI boosters don’t consider that “new”. The future they hype is one where these LLMs can magic up entirely new fields of research and study without human input, which isn’t how these models are trained in the first place.

That said, yes, it could be highly beneficial for identifying patterns in existing research that allow for new discoveries - provided we don't trust it blindly and actually validate it with science. Though I question its value to society in burning up fossil fuels, polluting the atmosphere, and draining freshwater supplies compared to doing the same work with grad students and scientists, with the associated societal feedback of that employment.

  • > Though I question its value to society in burning up fossil fuels, polluting the atmosphere, and draining freshwater supplies compared to doing the same work with grad students and scientists, with the associated societal feedback of that employment.

    I'd imagine AI is much cheaper on that front than grad students, whether you count marginal contribution or the total costs of building and utilization. Humans are damn expensive and environmentally intensive to rear and keep around.

    • You really should read the papers and reporting coming out about the sheer cost of these AI models and their operation. It might seem significantly cheaper in terms of immediate impact, but those humans provide knock-on benefits that can decrease their environmental impact (especially when they work in concert), while the current crop of AI is content to run NatGas turbines and guzzle up groundwater just so a human isn't tasked with reading a full paragraph of information or a white paper's worth of important content - and that's the most optimistic view, at present.

      Evaluating a technology in a vacuum does not work when trying to assess its impact, and in that wider context I don’t see the value-add of these models deployed at scale, especially when their marketing continues focusing on synthetic benchmarks and lofty future-hype instead of immediately practicable applications (like this one was).
