Comment by nickpsecurity
5 days ago
"pretty much every metric we have shows basically linear improvement of these models over time."
They're also trained on random data scraped off the Internet, which might include benchmarks, code that resembles them, and AI articles containing things like chain-of-thought traces. There's been some effort to filter obvious benchmark material, but is that enough? I can't tell whether the AIs are getting smarter on their own or whether more cheat sheets are landing in the training data.
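For reference, the "obvious benchmark" filtering labs describe is typically verbatim n-gram overlap against known test sets. Here's a minimal sketch of that idea; the 13-gram window follows GPT-3's reported decontamination, but the function names and data handling are just my assumptions:

```python
# Hypothetical sketch of verbatim n-gram decontamination: drop training
# documents that share long n-grams with known benchmark text.
NGRAM = 13  # GPT-3's paper reportedly used 13-gram overlap

def ngrams(text: str, n: int = NGRAM) -> set[tuple[str, ...]]:
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def build_benchmark_index(benchmark_texts: list[str]) -> set[tuple[str, ...]]:
    index: set[tuple[str, ...]] = set()
    for t in benchmark_texts:
        index |= ngrams(t)
    return index

def is_contaminated(doc: str, index: set[tuple[str, ...]]) -> bool:
    # Flag a document if any of its n-grams appears verbatim in a benchmark.
    return not ngrams(doc).isdisjoint(index)
```

The obvious weakness is that paraphrases, translations, and "code that looks like the benchmark" sail right through a verbatim match, which is exactly the worry above.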
Just brainstorming, but one idea I came up with is training them on datasets from before the benchmarks, or much AI-generated material, existed. Keep testing algorithmic improvements on those models in addition to models trained on up-to-date data. That might give a more accurate assessment.
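Mechanically it's just a cutoff filter on document timestamps, something like this sketch (the cutoff date, field names, and corpus format are all illustrative assumptions):

```python
# Minimal sketch of the pre-benchmark cutoff idea: keep only documents
# whose timestamp predates the benchmark / AI-generated-content era.
from datetime import datetime, timezone

CUTOFF = datetime(2020, 1, 1, tzinfo=timezone.utc)  # illustrative cutoff

def keep(doc: dict) -> bool:
    ts = datetime.fromisoformat(doc["timestamp"])
    return ts < CUTOFF

corpus = [
    {"timestamp": "2014-06-01T00:00:00+00:00", "text": "old forum post"},
    {"timestamp": "2023-03-15T00:00:00+00:00", "text": "AI-era article"},
]
clean = [d for d in corpus if keep(d)]  # only the 2014 document survives
```

The hard part isn't the filter, it's that crawl timestamps lie: mirrored and republished pages carry dates long after the original text appeared.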
That's not a bad idea, though it's very expensive, and you end up with a model that's pretty useless in most regards.
A lot of the trusted benchmarks today are somewhat dynamic or have a hidden set.
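By "dynamic" I mean the benchmark is a generator rather than a fixed file, so each run sees fresh instances that can't have leaked into any training dump. A toy illustration (entirely made up; real dynamic benchmarks use far richer task templates):

```python
# Toy dynamic benchmark: regenerate the test set on every evaluation run.
import random

def make_task(rng: random.Random) -> tuple[str, int]:
    a, b, c = rng.randint(10, 99), rng.randint(10, 99), rng.randint(2, 9)
    prompt = f"Compute ({a} + {b}) * {c}."
    return prompt, (a + b) * c

rng = random.Random()  # unseeded: a different test set every run
tasks = [make_task(rng) for _ in range(100)]
```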
That could happen; anyone taking the approach would have to accept that risk. However, if it were trained on legally clean data, there might be a market for it among those who don't want to risk copyright infringement. Think FairlyTrained.org.
"somewhat dynamic or have a hidden set"
Are there example inputs and outputs for the dynamic ones online? And are the hidden sets online? (I haven't looked at benchmark internals in a while.)