Comment by taeric
5 years ago
My understanding is that these ensure merges are well-formed, not that they are semantically coherent. There has to be some other semantic check on top of the document to tell that. Anything with a human involved typically gets kicked to the human. Anything else would still need some authority on not just what is semantically accurate, but which edits should be discarded to get to which semantically valid document. Right?
That is, this only works if you can somehow force "well-formed" to be the same as "semantically valid". Those are usually not equal.
Sure; let's take the example of a code file, where semantic validity could make the difference between a correct program and one that doesn't compile.
Validity checks could include a syntax parser as a first pass, followed by unit tests as a second pass. If any of the verification steps fail, the application can report that there is a problem with the file (and, hopefully, some context).
The authority in that example is the software that checks the program; it can run locally for each participant in the session, or a central instance could run the checks, depending on the preferred architecture for the system.
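To make the two-pass idea concrete, here is a minimal sketch in Python. The function names (`syntax_check`, `make_test_check`, `validate`) and the exec-based "test suite" are my own illustrative choices, not part of any particular collaborative editor; a real system would plug in its project's actual test runner for the second pass.

```python
import ast
from typing import Callable

Check = Callable[[str], list[str]]  # a pass returns a list of problems

def syntax_check(source: str) -> list[str]:
    """First pass: does the merged document even parse?"""
    try:
        ast.parse(source)
        return []
    except SyntaxError as e:
        return [f"syntax error at line {e.lineno}: {e.msg}"]

def make_test_check(tests: Callable[[dict], None]) -> Check:
    """Second pass: execute the merged source, then run the supplied
    test function against its namespace (a stand-in for a real
    unit-test suite)."""
    def check(source: str) -> list[str]:
        namespace: dict = {}
        try:
            exec(compile(source, "<merged>", "exec"), namespace)
            tests(namespace)
        except AssertionError as e:
            return [f"unit test failed: {e}"]
        except Exception as e:
            return [f"runtime error: {e}"]
        return []
    return check

def validate(source: str, checks: list[Check]) -> list[str]:
    """Run the passes in order, stopping at the first failing pass so
    later checks never run against an already-broken document. An
    empty result means the merged document is valid."""
    for check in checks:
        problems = check(source)
        if problems:
            return problems
    return []
```

Each participant (or a central service) would call something like `validate(merged_text, [syntax_check, make_test_check(my_tests)])` after every merge and surface any returned problems in the UI.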
None of the above necessarily requires discarding edits, but in some cases participants might choose to undo the edits that caused problems in order to get back to a valid document state.