Comment by 7373737373

4 days ago

One problem I have with learning Lean is that tactics - like rfl in the example - are overloaded, and their full semantics are neither completely explained in nor understandable from the tutorials. Unlike, say, C programming, where one may understand what happens to the program state down to the bit, it feels too fuzzy. And the rewrite (rw) tactic syntax doesn't feel natural either.
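For what it's worth, a minimal illustration of the overloading: rfl closes goals that hold by definitional unfolding, not just literal syntactic equality. (A sketch, assuming Lean 4; `double` is just an illustrative definition.)

```lean
-- rfl proves goals whose two sides reduce to the same normal form
example : 2 + 2 = 4 := rfl

-- it also works through user definitions, which is part of the "overloading":
def double (n : Nat) : Nat := n + n
example : double 3 = 6 := rfl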

Yeah, I've similarly found the tactics in Coq (now Rocq) difficult to internalize. E.g., I might have "A = B" and "P(A,A)" available, and I want to conclude "P(A,B)", but the rewrite will fail for some arcane reason. (Issues with the definition of some of the intermediate structures, I'd imagine.)
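A Lean 4 sketch of that exact situation, where the direction of the rewrite is the usual stumbling block: `rw [h]` applied to the hypothesis would turn it into `P B B`, which overshoots, so you rewrite the goal backwards instead (the names here are just illustrative):

```lean
example (A B : Nat) (P : Nat → Nat → Prop)
    (h : A = B) (hp : P A A) : P A B := by
  rw [← h]   -- rewrites B back to A in the goal, leaving ⊢ P A A
  exact hp
```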

On the other end of the spectrum, I've recently been playing with Metamath and its set.mm database, which has no programmable tactics at all, only concrete inferences that can be used in a proof. (E.g., the modus ponens inference ax-mp says that "|- ph" and "|- ( ph -> ps )" prove "|- ps", where "ph" and "ps" are variables that can be substituted.) Alas, it's not much better, since now you have to memorize all the utility lemmas you might need!

  • Agreed - the Metamath base language (and its verifier) seem to be the most tractable of all I've seen, although it is probably still quite far away from the complexity of the high level language(s) that compile to it.

    Derived from it, the currently best attempt to achieve an unambiguous and secure language seems to be Metamath Zero: https://github.com/digama0/mm0

This is one of the reasons I prefer Agda. It is usually written without tactics: you just write the proof term in a functional programming language via the Curry–Howard correspondence. The trade-off is that you must be more disciplined about creating useful abstractions and functions; otherwise, proving even routine stuff becomes really tedious.

  • I'm not sure you prefer Agda so much as you prefer providing proof terms functionally rather than imperatively. Then again I've never used Agda; how does it differ from Coq/Rocq minus Ltac or Lean minus tactics?

    • Coq has much less support for working with dependent types; you need lots of annotations compared to Agda (which has superior dependent pattern matching support). Lean is somewhere between the two.
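To make the dependent-pattern-matching point concrete, a Lean 4 sketch (the `Vec` type is the standard textbook example, not from the thread): the `n + 1` index already rules out the `nil` case, so `head` needs no annotation and no arm for it.

```lean
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

-- the index (n + 1) makes the nil case impossible, so one arm suffices
def Vec.head {α : Type} {n : Nat} : Vec α (n + 1) → α
  | .cons x _ => x
```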

At least you can 'go to definition' on the tactics and see what they're doing. It's a lot to take in at the beginning, but it can all be inspected and understood. (At least until you get to the fundamental type theory; the reduction rules are a lot harder to get into.)

> the rewrite (rw) tactic syntax doesn't feel natural either.

Do you have any thoughts on what a natural rewrite syntax would be?

  • > Do you have any thoughts on what a natural rewrite syntax would be?

    Not yet, but I'd probably prefer something that more explicitly indicated (in English, or with some sort of visual "pointing" indicator) which specific parts of the previous step would be replaced.

    It feels weird that both what is being replaced and what it is replaced with depend on some distant context; maybe I'd get used to it, but it's very indirect, since it requires switching attention between the tactic tree and the context or previous proofs.

    • If you have specific ideas, I'm also curious! I wonder if Lean can be extended to support what you're thinking of — its ability to have custom syntax and tactics seems really powerful. That's part of what excites me about Lean as I'm also not always the biggest fan of existing tactics, but they seem to evolve similarly to other pieces of software.

This is why I prefer Agda, where everything comes down to pattern matching.

  • You can absolutely use pattern matching in Lean instead of tactics, if you prefer to write proofs in a way that is closer to "what is going on under the hood".

    • Yeah, I didn't mean to imply you cannot do that, but tactics seem to be highly encouraged.

      I'm actually a big fan of Lean, I just like it more as a programming language for writing programs with dependent types than as a proof checker.
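As a sketch of the non-tactic style being discussed, a proof written purely by pattern matching in Lean 4 (the standard `zero_add` induction, not from the thread):

```lean
-- proved by structural recursion instead of the `induction` tactic
theorem zero_add' : (n : Nat) → 0 + n = n
  | 0     => rfl
  | n + 1 => congrArg Nat.succ (zero_add' n)
```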

It was interesting to me as it didn't fit my expectations. As a math theory ignoramus, I expected that reflexivity and rewriting are more fundamental than addition, yet Lean seems to assume addition while requiring explicit rfl and rewrite.

Perhaps there's a Lean "prelude" that does it for you.

  • Yes, there is a prelude that defines natural numbers and an addition function on them. As the post notes, the reflexivity tactic "unfolds" the addition, meaning that it applies the addition function to the constants it is given, to arrive at a constant. This is not specific to addition, it unfolds other function definitions too. So addition doesn't have to come first, you are right that reflexivity is more fundamental.

    • Author here — this is correct. I’ve added a paragraph on this in an edit btw so it’s possible the parent poster hasn’t seen it yet. Reproducing the paragraph:

      (Here, rfl closes ⊢ 3 + 3 = 6, but for a different reason than one might think. It doesn’t really “know” that 3 + 3 is 6. Rather, rfl unfolds the definitions on both sides before comparing them. As 3, 6, and + get unfolded, both sides turn into something like Nat.zero.succ.succ.succ.succ.succ.succ. That’s why it actually is a something = something situation, and rfl is able to close it.)

      Also, another resource on this is https://xenaproject.wordpress.com/2019/05/21/equality-part-1...
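That paragraph can be checked directly; a minimal Lean 4 sketch (in practice Lean's kernel also has an efficient binary representation for Nat literals, but the definitional-equality story is as described):

```lean
-- rfl closes this because both sides unfold to the same term:
example : 3 + 3 = 6 := rfl

-- the same proof with the tactic framing from the post:
example : 3 + 3 = 6 := by rfl
```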

  • It's because the addition can evaluate to only one form: `3 + 3` and `6` both evaluate to `succ (succ (succ (succ (succ (succ zero)))))`, and similarly Lean can infer `4 * 3 = 1 + 3 + 8` because both sides evaluate to the same thing as `12`. But an expression can be rewritten into infinitely many forms, e.g. `x * 2` can be rewritten to `x + x`, `(x + (x / 2)) * (4 / 3)`, etc. So `rfl` can automatically evaluate both sides, but it cannot rewrite them.
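A Lean 4 sketch of that distinction (`omega` here stands in for whatever rewrite lemmas one would otherwise chain together):

```lean
-- closed terms: both sides evaluate to 12, so rfl works
example : 4 * 3 = 1 + 3 + 8 := rfl

-- with a free variable, evaluation gets stuck and rfl fails:
-- example (x : Nat) : x * 2 = x + x := rfl  -- error
-- so an actual proof step (a rewrite, or here omega) is needed:
example (x : Nat) : x * 2 = x + x := by omega
```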

The surprising thing to me was that tactics are all written in "user-level" code, outside the proof kernel. That makes sense insofar as you want a small, thoroughly tested kernel that doesn't change. But it also implies that if you use tactics in your proof, your proof can go from correct to failing if one of the tactics you use is modified between releases. Is that a problem in real-world use?

  • Yes. I've seen proofs fail after a mathlib update because the behaviour of simp changed. I've never seen it do less after an update (so far), but sometimes it'll simplify more and then the next steps may fail to work.

    I've since adopted the suggestion (that is afaik used in mathlib) to never use bare simp unless it closes the current goal directly. Instead, if you write simp?, Lean will run the simplifier and tell you exactly which theorems it used (in the form of simp only [...]) which you can then insert into the proof instead.
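A sketch of the workflow described (the lemma name is illustrative of what `simp?` might report for this goal):

```lean
-- running `simp?` prints a "Try this:" suggestion naming the lemmas used
example (xs : List Nat) : (xs ++ []).length = xs.length := by
  simp?  -- suggests something like: simp only [List.append_nil]
```

Pasting the suggested `simp only [...]` in place of `simp?` pins the proof to those exact lemmas, which is what makes it robust to simp-set changes.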

    • I guess it makes sense: the tactics are defined in the Lean repo (but not in the kernel of that repo), while mathlib has its own repo. But any change to tactic code triggers a test to verify that mathlib still works against it. Which implies there's a reasonable set of checks and balances that doesn't allow tactic changes to break anything too badly if you follow mathlib's idioms, though it can still happen.

Yeah the documentation is also quite fragmented because tactics are user-definable and some are from Lean vs from Mathlib etc. I've gotten quite decent at using the basic ones but I still sometimes ask on Zulip if something isn't working as I expected.

I understand what you mean, but I shudder to think at how complicated it would be to write certain proofs without tactics, e.g. simp, ring, omega, linarith and friends all really do a lot of work.
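For a sense of the lifting these do, a Lean 4 + Mathlib sketch; without `omega` or `ring`, each of these would need a hand-built chain of lemma rewrites:

```lean
import Mathlib.Tactic

-- omega decides linear arithmetic over Nat and Int
example (a b : Nat) (h : a ≤ b) : 2 * a + 1 ≤ 2 * b + 3 := by omega

-- ring normalizes both sides of a commutative-ring identity
example (x y : ℤ) : (x + y) ^ 2 = x ^ 2 + 2 * x * y + y ^ 2 := by ring
```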