Comment by julienchastang

1 year ago

Well actually: "In a twist, Microsoft is experimenting with generative artificial intelligence to see if AI could help streamline the [nuclear power] approval process, according to Microsoft executives." [0]

[0] https://www.wsj.com/tech/ai/microsoft-targets-nuclear-to-pow...

That's edging closer and closer to Douglas Adams' "Reason" software application.

  • Can't read the article, but if it's roughly what I would expect (and have seen in other fields), the idea is:

    The final proofreading and decision are still done by humans, but before that, automated AI feedback is used to find things that need to be changed during planning and certification, and to highlight potential problems. I.e., eliminate as many rounds of back and forth between the agency and the plant builder as possible, and in turn save a lot of time.

    Though if the final proofreading is also done by AI, or relies too much on AI, that is a huge issue. LLMs, by their fundamental design, sometimes (in the future maybe rarely) make mistakes in forms that to humans seem arbitrary, random, and absurd (in turn, they tend not to make some of the mistakes humans tend to make). The problem is that even if these mistakes become rare, they can always be fundamental safety issues.