Comment by hahaxdxd123

1 day ago

A lot of people have pointed out a reproducibility crisis in the social sciences, but I think it's worth noting that this happens in CompSci as well, whenever verifying results is hard.

Reproducing ML robotics papers requires the exact robot/environment/objects/etc. -> people fudge their numbers and use strawman implementations of baselines.

LLMs are so expensive to train + the datasets are non-public -> Meta allegedly trained on the test set for Llama 4 (and we wouldn't have known if not for a forum leak).

In some ways it's no different from startups or salesmen overpromising - it's just lying for personal gain. The truth usually wins out in the end, though.