
Comment by AtlasBarfed

3 days ago

I think a 1 GHz CPU from around 2001 should be the performance benchmark: every piece of basic, non-high-performance software should execute acceptably on it.

This has kind of been a disappointment to me with AI when I've tried it. An LLM should be able to port things. It should be able to rewrite things with the same interface. It should be able to translate code from inefficient languages to more efficient ones.

It should even be able to optimize existing code bases automatically, or at least diagnose and point out poor algorithms, bad cache behavior, etc.

Heck, I remember PowerBuilder in the mid-90s running pretty well on 200 MHz CPUs, and that was even interpreted stuff. It's just amazing how slow software is now. Do rounded corners and CSS really consume that much CPU power?

My limited experience was trying to take the Unix sed source code and have AI port it to a JVM language. It could handle the most basic operations but utterly failed at even the intermediate sed capabilities. And optimize? Nope.
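For context (these are my own illustrative examples, not the parent's actual test cases): a "basic" sed operation is plain substitution, while the "intermediate" capabilities typically involve the hold space, branching, and addresses, such as the classic reverse-lines one-liner:

```shell
# Basic: simple substitution -- the kind of thing a port handles easily.
printf 'hello world\n' | sed 's/world/there/'
# -> hello there

# Intermediate: reverse line order using the hold space (the classic
# "tac" one-liner). Hold-space commands (G, h) and negated addresses
# like "1!" are where a naive port tends to fall over.
printf 'a\nb\nc\n' | sed -n '1!G;h;$p'
# -> c, b, a (one per line)
```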

Of course there's no desire for something like that. Which really shows what the purpose of all this is. It's to kill jobs. It's not to make better software. And it means AI is going to produce a flood of bad software. Really bad software.

I've pondered this myself without digging into the specifics. The phrase "sufficiently smart compiler" sticks in my head.

Shower thoughts include: are there languages whose features, apart from their popularity and representation in training corpora, help us get from natural language to efficient code?

I was recently playing around with a digital audio workstation (DAW) software package called Reaper that honestly surprised me with its feature set, portability (Linux, macOS, Windows), snappiness etc. The whole download was ~12 megabytes. It felt like a total throwback to the 1990s in a good way.

It feels like AI should be able to help us get back to small snappy software, and in so doing maybe "pay its own way" with respect to CPU and energy requirements. Spending compute cycles to optimize software deployed millions of times seems intuitively like a good bargain.