Comment by intoXbox
12 hours ago
They used a custom neural net with autoencoders, which contain convolutional layers. They trained it on previous experiment data.
https://arxiv.org/html/2411.19506v1
Why is it so hard to spell out which AI algorithm or technique they integrated? That would have made this article much better.
I'm half expecting to see "AI model" appearing as stand-in for "linear regression" at this point in the cycle.
> I'm half expecting to see "AI model" appearing as stand-in for "linear regression" at this point in the cycle.
Already the case with consulting companies, have seen it myself
Some career do-nothing-but-make-noise in my organization hired a firm to 'Do AI' on some shitty data, and the outcome was basically linear regression. It turns out that you can impress executives with linear regression if you deliver it enthusiastically enough.
3 replies →
I'm half expecting to see "AI model" appearing as stand-in for "if > 0" at this point in the cycle.
This is why I'm programming in OCaml now: the files themselves are AI (.ml).
1 reply →
This is essentially what any ReLU-based neural network looks like (smoother variants have replaced the original ramp function). AI models, even LLMs, essentially reduce to a bunch of code like that.
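A minimal sketch of that reduction, with made-up weights (no real model is implied), showing that ReLU(x) is literally an "if x > 0" branch:

```python
# Hypothetical sketch: a one-hidden-layer ReLU network written as
# plain branches. Weights are arbitrary, chosen for illustration only.

def relu(x):
    # The original ramp function: max(0, x), i.e. "if x > 0".
    return x if x > 0 else 0.0

def tiny_mlp(x):
    # Two hidden units and one linear output; example weights.
    h1 = relu(0.5 * x + 1.0)
    h2 = relu(-2.0 * x + 0.5)
    return 1.0 * h1 - 3.0 * h2 + 0.1

print(tiny_mlp(2.0))
```

Everything else in a real network is just many more of these branches, plus matrix multiplies to compute the pre-activation sums.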
4 replies →
I'm sure I've seen basic hill climbing (and other optimisation algorithms) described as AI, and then used as evidence of AI solving real-world science/engineering problems.
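For reference, basic hill climbing really is this small. A hedged sketch (the objective, step size, and iteration count are all made up for illustration):

```python
# Random-restart-free hill climbing: propose a nearby point, keep it
# only if it improves the objective. Classical search, often relabelled "AI".
import random

def hill_climb(f, x0, step=0.1, iters=1000, seed=0):
    rng = random.Random(seed)
    x, best = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        val = f(cand)
        if val > best:  # accept only improving moves
            x, best = cand, val
    return x, best

# Maximise a simple concave function with its optimum at x = 3.
x, best = hill_climb(lambda v: -(v - 3.0) ** 2, x0=0.0)
```

On any smooth unimodal objective like this one it walks straight to the optimum; the well-known failure mode is getting stuck on multimodal ones.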
Historically this was very much in the field of AI, which is such a massive field that saying something uses AI is about as useful as saying it uses mathematics. Since the term was first coined it's been constantly misused to refer to much more specific things.
From around when the term was first coined: "artificial intelligence research is concerned with constructing machines (usually programs for general-purpose computers) which exhibit behavior such that, if it were observed in human activity, we would deign to label the behavior 'intelligent.'" [1]
[1]: https://doi.org/10.1109/TIT.1963.1057864
2 replies →
I am somewhat cynically waiting for the AI community to rediscover the last half-century of linear algebra and optimisation techniques.
At some point someone will realise that backpropagation and adjoint solves are the same thing.
2 replies →
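The backprop/adjoint equivalence above can be shown in a few lines: run the primal computation forward, then accumulate adjoints in reverse. A toy example (not any framework's actual implementation):

```python
# Reverse-mode (adjoint) differentiation of z = sin(x)**2 by hand.
import math

def f_and_grad(x):
    # Forward (primal) pass: y = sin(x), z = y**2
    y = math.sin(x)
    z = y * y
    # Reverse (adjoint) pass: seed dz/dz = 1, propagate adjoints backward
    z_bar = 1.0
    y_bar = z_bar * 2.0 * y      # dz/dy = 2y
    x_bar = y_bar * math.cos(x)  # dy/dx = cos(x)
    return z, x_bar

z, g = f_and_grad(0.5)
# Analytic check: d/dx sin(x)^2 = 2 sin(x) cos(x) = sin(2x)
```

The reverse pass is exactly the adjoint solve of the linearised forward computation, which is the point being made.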
There is a HIGGS dataset [1]. As the name suggests, it is designed for applying machine learning to recognising the Higgs boson.
[1] https://archive.ics.uci.edu/ml/datasets/HIGGS
In my experiments, linear regression with extended attributes (adding squared values) is very much competitive with the reported MLP accuracy.
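The "extended attributes" trick is just ordinary least squares on features augmented with their squares. A minimal sketch on synthetic data (this stands in for HIGGS and proves nothing about it):

```python
# Least-squares fit with and without squared features, on made-up data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# A target that is quadratic in the inputs, so a plain linear fit underfits.
y = 1.5 * X[:, 0] ** 2 - 2.0 * X[:, 1] + 0.5

def fit_ols(features, y):
    A = np.hstack([features, np.ones((len(features), 1))])  # add bias column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ w  # in-sample predictions

plain = fit_ols(X, y)
extended = fit_ols(np.hstack([X, X ** 2]), y)  # append squared values

print(np.mean((plain - y) ** 2) > np.mean((extended - y) ** 2))  # squared terms help
```

The model is still linear in its parameters, so the fit stays a cheap closed-form solve; only the feature map is nonlinear.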
The LHC has moved on a bit since then. Here's an open dataset that one collaboration used to train a transformer:
https://opendata-qa.cern.ch/record/93940
If you can beat it with linear regression, we'd be happy to know.
And why not? When linear regression works, it works so well it's basically magic: better than intelligence, artificial or otherwise.
Having worked with people who do that, I can guarantee that’s not the case. See https://ssummers.web.cern.ch/conifer/ and hls4ml; these run BDTs and CNNs.
That works well to get around patents btw :)
It seems like most of the implementation is FPGA, which I wouldn’t call “physically burned into silicon.” That’s quite a stretch of language
Because if it’s not an LLM it’s not good for the current hype cycle. Calling everything AI makes the line go up.
LLMs also make the cynicism go up among the HN crowd.
Hm. Is HN starting to become more skeptical of LLMs? For the past couple of years, HN has seemed worryingly enthusiastic about LLMs.
How so? Half the people here have LLM delusion in every thread posted here; more than half of the things going to the front page are AI. Just look at the hours when Americans are awake.
1 reply →
Thanks for tracking this down. I too am annoyed when so-called technical articles omit the actual techniques.
Because it does not align with LLM Uber Alles.