Comment by dist-epoch

10 hours ago

> Imagine I am putting together electrical infrastructure, and the model gives me bad advice, risking electrocution and/or a serious fire

That's a weird demand from models. What next, "Imagine I'm doing brain surgery and the model gives me bad advice", "Imagine I'm a judge delivering a sentencing and the model gives me bad advice", ...

Requesting electrical advice is not a weird ask at all. If writing sophisticated code requires skill, then so does electrical work, and neither demands more skill than the other. I would expect the top-ranked thinking models to be wholly capable of offering correct advice on the topic. The issues arise more from the user's inability to supply all the applicable context that can affect the decision and the output. All else being equal, bad electrical work is 10x more likely to result from failing to adequately consult AI than from consulting it.

Secondly, the primary point was about censorship, not accuracy, so let's not get distracted.

  • > Requesting electrical advice is not a weird ask at all. If writing sophisticated code requires skill, then so does electrical work

    Except with electrical work, the "unit test" itself can put your life, and others' lives, in danger.

  • Bad electrical work is more likely to burn your house down than some bad code. Bad medical advice is different again.

    I assumed it was more about risk management/liability than censorship.