Comment by intoXbox

14 hours ago

They used a custom neural net with autoencoders, which contain convolutional layers. They trained it on previous experiment data.

https://arxiv.org/html/2411.19506v1

Why is it so hard to elaborate on which AI algorithm or technique they integrated? It would have made this article much better.

I'm half expecting to see "AI model" appearing as stand-in for "linear regression" at this point in the cycle.

  • > I'm half expecting to see "AI model" appearing as stand-in for "linear regression" at this point in the cycle.

    Already the case with consulting companies, have seen it myself

    • Some career do-nothing-but-make-noise type in my organization hired a firm to 'do AI' on some shitty data, and the outcome was basically linear regression. It turns out that you can impress executives with linear regression if you deliver it enthusiastically enough.

      3 replies →

  • I'm half expecting to see "AI model" appearing as stand-in for "if > 0" at this point in the cycle.

    • This is essentially what any ReLU-based neural network approximately looks like (smoother variants have since replaced the original ramp function). AI models, even LLMs, essentially reduce to a bunch of code like

          let v0 = 0
          let v1 = 0.40978399*(0.616*u + 0.291*v)
          let v2 = if 0 > v1 then 0 else v1
      
          let v3 = 0
          let v4 = 0.377928*(0.261*u + 0.468*v)
          let v5 = if 0 > v4 then 0 else v4...
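
      The same unrolled pattern can be sketched as a tiny layer in Python. This is a hypothetical illustration, not anything from the article; the inputs and weights are arbitrary values echoing the pseudocode above:

      ```python
      import numpy as np

      def relu_layer(x, W, b):
          # Each output is max(0, w.x + b): a pile of "if > 0" checks,
          # which is all the "if 0 > v1 then 0 else v1" lines express.
          return np.maximum(0.0, W @ x + b)

      x = np.array([0.5, -0.2])          # inputs u, v (arbitrary)
      W = np.array([[0.616, 0.291],
                    [0.261, 0.468]])     # weights lifted from the pseudocode
      b = np.zeros(2)                    # bias, zero as in the v0/v3 lines

      print(relu_layer(x, W, b))
      ```
      
      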

      4 replies →

  • I'm sure I've seen basic hill climbing (and other optimisation algorithms) described as AI, and then cited as evidence of AI solving real-world science/engineering problems.

    • Historically this was very much in the field of AI, which is such a massive field that saying something uses AI is about as useful as saying it uses mathematics. Since the term was first coined it's been constantly misused to refer to much more specific things.

      From around when the term was first coined: "artificial intelligence research is concerned with constructing machines (usually programs for general-purpose computers) which exhibit behavior such that, if it were observed in human activity, we would deign to label the behavior 'intelligent.'" [1]

      [1]: https://doi.org/10.1109/TIT.1963.1057864

      2 replies →

    • I am somewhat cynically waiting for the AI community to rediscover the last half-century of linear algebra and optimisation techniques.

      At some point someone will realise that backpropagation and adjoint solves are the same thing.
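
      A toy sketch of that equivalence (my own illustration, assuming a simple scalar chain y = f2(f1(x))): reverse-mode backprop multiplies local derivatives right-to-left through cached intermediates, which is exactly an adjoint sweep.

      ```python
      import math

      def f1(x): return math.sin(x)
      def f2(h): return h * h

      x = 0.7
      h = f1(x); y = f2(h)          # forward sweep, cache intermediates

      ybar = 1.0                    # seed adjoint dy/dy
      hbar = ybar * 2 * h           # adjoint of f2
      xbar = hbar * math.cos(x)     # adjoint of f1: this is dy/dx

      # Matches the analytic derivative of sin(x)^2
      analytic = 2 * math.sin(x) * math.cos(x)
      print(xbar, analytic)
      ```
      
      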

      2 replies →

  • There is a HIGGS dataset [1]. As the name suggests, it is designed for applying machine learning to recognizing the Higgs boson.

    [1] https://archive.ics.uci.edu/ml/datasets/HIGGS

    In my experiments, linear regression with extended attributes (adding squared values of the features) is very much competitive, in accuracy terms, with the reported MLP accuracy.
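
    A minimal sketch of that idea on synthetic data (not the HIGGS set; my own toy labels, where the true boundary is quadratic in the raw features): extend the design matrix with squared attributes, fit by least squares, and threshold at 0.5.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    # Nonlinear ground truth: label depends on the sum of squares
    y = (np.sum(X**2, axis=1) > 3.0).astype(float)

    # Extend attributes with their squares, plus a bias column
    X_ext = np.hstack([X, X**2, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(X_ext, y, rcond=None)
    acc = np.mean((X_ext @ w > 0.5) == y)
    print(acc)
    ```

    Because the true boundary is linear in the squared features, plain least squares recovers it well; on raw features alone it would be near chance.
    
    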

  • And why not? When linear regression works, it works so well it's basically magic: better than intelligence, artificial or otherwise.

It seems like most of the implementation is on an FPGA, which I wouldn't call "physically burned into silicon." That's quite a stretch of language.

Because if it's not an LLM, it's not good for the current hype cycle. Calling everything AI makes the line go up.

  • LLMs also make the cynicism go up among the HN crowd.

    • Hm. Is HN starting to become more skeptical of LLMs? For the past couple of years, HN has seemed worryingly enthusiastic about LLMs.

    • How so? Half the people here show LLM delusion in every thread; more than half of what reaches the front page is AI. Just look during the hours when Americans are awake.

      1 reply →

Thanks for tracking this down. I too am annoyed when so-called technical articles omit the actual techniques.