
Comment by aurbano

1 year ago

My point was that rather than blaming ML - or optimisation tools in general - for gaming objective functions and producing non-solutions that nonetheless maximise reward, AI could instead be used to measure the reward/fitness of a candidate solution.

So for the OP's example of "optimise a bike wheel", an AI should in principle be able to judge whether a proposed wheel is good or not, in much the same way a human would.
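A rough sketch of what I mean (everything here is made up for illustration): the optimiser proposes candidate wheel designs, and a learned evaluator - standing in for a model trained on human judgements of real wheels - supplies the fitness score instead of a hand-written, gameable objective.

```python
import random

# Stand-in for a trained model that scores a wheel design between 0 (useless)
# and 1 (good). In practice this would be e.g. a neural net trained on human
# judgements; here it is just a placeholder heuristic for the sketch.
def learned_evaluator(design):
    spoke_count, rim_width_mm = design
    spoke_score = 1.0 - abs(spoke_count - 32) / 32        # favour ~32 spokes
    width_score = 1.0 - abs(rim_width_mm - 25.0) / 25.0   # favour ~25 mm rims
    return max(0.0, (spoke_score + width_score) / 2)

def random_design():
    return (random.randint(4, 64), random.uniform(10.0, 60.0))

def mutate(design):
    spokes, width = design
    return (max(4, spokes + random.randint(-4, 4)),
            max(10.0, width + random.uniform(-5.0, 5.0)))

# Simple hill-climbing search driven entirely by the evaluator's score.
best = random_design()
best_score = learned_evaluator(best)
for _ in range(1000):
    candidate = mutate(best)
    score = learned_evaluator(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(f"best design: {best}, score: {best_score:.3f}")
```

The point of the sketch is only where the score comes from: the search loop is ordinary, but the fitness signal comes from a model that (ideally) judges designs the way a person would, rather than from a proxy metric the optimiser can exploit.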