Comment by whatever1

3 days ago

But humans generalize very well across tasks. You can have an employee driving a forklift who stops, picks up a pallet blocking the way, and continues.

And robots will not do that either. What if the employee used hearing to determine whether there was a hazard (another vehicle moving nearby) before jumping down to pick up the pallet? How would the robot know just by “looking”? How should it prioritise visuals, audio, other senses, etc.?

  • There's no reason to expect models won't be able to handle this even better than humans; prioritising across senses is a sensor-fusion problem, as the sketch below illustrates.
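
A minimal sketch of one common approach, reliability-weighted late fusion, applied to the forklift scenario. All names, numbers, and thresholds here are illustrative assumptions, not any specific robot stack:

```python
# Illustrative sketch: fuse hazard estimates from several modalities
# instead of relying on vision alone. All values are made up.
from dataclasses import dataclass

@dataclass
class HazardEstimate:
    modality: str       # e.g. "vision", "audio"
    probability: float  # estimated chance of a nearby hazard, in [0, 1]
    reliability: float  # current trust in this sensor, in [0, 1]

def fused_hazard(estimates: list[HazardEstimate]) -> float:
    """Reliability-weighted average of per-modality hazard estimates."""
    total_weight = sum(e.reliability for e in estimates)
    if total_weight == 0:
        return 1.0  # no trusted sensors: fail safe, assume a hazard
    return sum(e.probability * e.reliability for e in estimates) / total_weight

# The scenario from the thread: the camera sees a clear aisle, but the
# microphone picks up a vehicle approaching from outside the field of view.
estimates = [
    HazardEstimate("vision", probability=0.05, reliability=0.9),
    HazardEstimate("audio",  probability=0.80, reliability=0.6),
]

if fused_hazard(estimates) > 0.3:  # threshold is an arbitrary example value
    print("Hold position: possible vehicle approaching.")
else:
    print("Safe to dismount and move the pallet.")
```

Here the audio channel overrides the clear camera view (fused score ≈ 0.35), which is exactly the "hearing before looking" behaviour described above; a learned policy would in effect tune these weights from data rather than hand-setting them.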