Comment by 1st1

4 hours ago

I don't think this is related in any way.

> Having a CI job that identifies places where the docs have drifted from the implementation seems pretty valuable.

https://docs.python.org/3/library/doctest.html

> To check that a module’s docstrings are up-to-date by verifying that all interactive examples still work as documented. To perform regression testing by verifying that interactive examples from a test file or a test object work as expected. To write tutorial documentation for a package, liberally illustrated with input-output examples. Depending on whether the examples or the expository text are emphasized, this has the flavor of “literate testing” or “executable documentation”.

Seems pretty related to me.

  • Not really.

    > Having a CI job that identifies places where the docs have drifted from the implementation seems pretty valuable.

    Testing with lat isn't about ensuring that code stays consistent with its public API documentation. It is about:

    * ensuring you can quickly analyze what tests were added or changed by reading the English description

    * ensuring you spot when an agent randomly drops or alters an important functional or regression test

    The problem with coding agents is that they produce enormous diffs, and while reading test code is very important, in practice your focus and attention drift and you can't do a thorough analysis.

    This isn't a new problem, though; the same thing applies to classic code reviews -- coding is rarely the bottleneck, it's getting humans to review and vet the change.

    Lat shifts the focus from reading test code to understanding the semantics of the tests. And because instead of reviewing 2000 lines of code you can focus on reviewing only a 100-line change in lat.md, you'll be able to control your tests and implementation more tightly.

    For projects where code quality isn't paramount, I now just glance over the code to spot anti-patterns and models failing to DRY, resorting to duplicating large swaths of code.