Comment by achierius
6 months ago
> Validation is not training, period.
Sure, but what we care about isn't the semantics of the words, it's the effect of what they're doing. Iterated validation plus humans doing hyperparameter tuning will go a long way toward making a model fit the data, even if you never technically run backprop with the validation set as input.
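A minimal sketch of that leakage, using a made-up toy setup (not anything specific to OpenAI): labels are pure coin flips, so no model can genuinely beat 50%, yet repeatedly *selecting* whichever "hyperparameter setting" scores best on the validation set inflates the reported validation accuracy anyway, with no gradient update ever touching that data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_val, n_test, n_trials = 200, 200, 500

# Labels are random coin flips: true accuracy of any model is ~50%.
val_labels = rng.integers(0, 2, n_val)
test_labels = rng.integers(0, 2, n_test)

best_val_acc, best_test_acc = 0.0, 0.0
for trial in range(n_trials):
    # Stand-in for "retrain with hyperparameter setting #trial": predictions
    # are independent of the labels, i.e. the model has learned nothing real.
    val_preds = rng.integers(0, 2, n_val)
    test_preds = rng.integers(0, 2, n_test)

    val_acc = (val_preds == val_labels).mean()
    if val_acc > best_val_acc:
        # Keep whichever config "looks best" on the validation set.
        best_val_acc = val_acc
        best_test_acc = (test_preds == test_labels).mean()

print(f"selected config: val acc {best_val_acc:.2f}, test acc {best_test_acc:.2f}")
# Typically prints something like: val acc 0.60, test acc 0.50 -- the validation
# number is inflated purely by the selection process, not by any real capability.
```

The same mechanism applies at benchmark scale: if a benchmark is reused across many model and prompt iterations, the reported score drifts upward even when the benchmark is never literally in the training data.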
> OpenAI is not doing science; they are doing business.
Are you implying these are orthogonal? OpenAI is a business centered on an ML research lab, which does research, and which people in the research community have generally come to respect.
> at this point, the argument OpenAI did something rests on unfalsifiable claims about the industry as a whole, claiming insider knowledge, while avoiding any verifiable evidence.
No, it doesn't. What OP is doing is critiquing OpenAI for their misbehavior. This is one of the few levers we (who do not have ownership or a seat on their board) have to actually influence their future decision-making: well-reasoned critiques can convince people here (including some people who decide whether their company uses ChatGPT vs. Gemini vs. Claude vs. ...) that ChatGPT is not as good as benchmarks might claim, which in effect makes it more expensive for OpenAI to condone this kind of misbehavior going forward.
The argument that "no companies are moral, so critiquing them is pointless" is just an indirect way of running cover for those same immoral companies.