Comment by cphoover
3 years ago
This is interesting, and not to criticize, but I wonder whether transformer models' accuracy at interpreting law will obviate the need for something like this.
It would be interesting to train large language models to generate this code for you from the text of the laws. That way you get a deterministic, testable output: the generated code can be reviewed and unit-tested once, instead of risking hallucinations every time you query the model directly.
Even LLMs would be better off with a clear, unambiguous syntax for defining human laws and rules.
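To illustrate the "deterministic, testable output" point, here's a minimal sketch of what generated rules-as-code might look like once a human has reviewed it. Everything here is invented for illustration: the function name, the filing statuses, and the dollar amounts do not correspond to any real statute.

```python
# Hypothetical sketch: a statutory rule encoded as a deterministic,
# testable function (as an LLM might generate from legal text, then
# a human reviews). The rule and all amounts below are made up.

def standard_deduction(age: int, filing_status: str) -> int:
    """Return a (made-up) standard deduction amount in dollars."""
    base = {"single": 13850, "married_joint": 27700}[filing_status]
    # Invented provision: filers 65 or older get an extra $1,850.
    if age >= 65:
        base += 1850
    return base

# Because the output is deterministic, the encoding can be unit-tested
# against worked examples, rather than trusting a model at query time.
assert standard_deduction(70, "single") == 15700
assert standard_deduction(30, "married_joint") == 27700
```

The point is that any hallucination happens once, at generation time, where a test suite can catch it; after that, every evaluation of the rule is reproducible.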