Comment by Merovius

2 days ago

As I said in the other comment, I'm not a C++ user, so I'm relying on cargo-culting and copy-paste. But I think gcc disagrees - otherwise this would not compile, as line 14 is provably invalid: https://godbolt.org/z/P8sWKbEGP

Or am I grossly holding this wrong?

You need to have something that uses those templates. In your godbolt example, add a struct S:

    struct S {
      bool M() { return true; }
    };

    int main() {
      S s;
      foo(s); // this now will check foo<S>
    }

Now you will get compile errors saying that the constraint is not satisfied and that there is no matching function for call to 'bar(S&)' at line 14.

  • > You need to have something that uses those templates.

    Exactly. That is what I said:

    > because you need to know the actual type arguments used, regardless of what the constraints might say.

    It is because type-checking concept code is NP-complete: it is trivial to check that a particular concrete type satisfies a constraint, but you cannot efficiently prove or disprove that all types which satisfy one constraint also satisfy another. That is what you would have to do to type-check such code at its definition (and give the user a helpful error message such as “this is fundamentally not satisfiable, your constraints are broken”).

    And it’s one of the shortcomings of C++ templates that Go was consciously trying to avoid: Go’s generics are intentionally limited so that you can only express constraints for which such proofs can be done efficiently (see the sketch below).

    I described the details a while back: https://blog.merovius.de/posts/2024-01-05_constraining_compl...
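
    To make this concrete, here is a minimal Go sketch (all names here are invented for illustration): the body of a generic function is type-checked against its constraint right at its definition, before any instantiation exists, so it can only use what the constraint promises.

        package main

        import "fmt"

        // The constraint spells out everything the generic code may rely on.
        type Stringer interface {
          String() string
        }

        // Describe is checked against Stringer itself, not against any
        // particular instantiation.
        func Describe[T Stringer](v T) string {
          // v.GoString() // rejected here, at the definition: GoString is not in the constraint
          return "value: " + v.String()
        }

        type Name string

        func (n Name) String() string { return string(n) }

        func main() {
          fmt.Println(Describe(Name("Gopher"))) // value: Gopher
        }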

    • There are common solutions to the library issue. Library authors, for example, can force an instantiation with a dummy type to check their concepts.

        template void foo(Dummy); // explicit instantiation; fails to compile if Dummy does not satisfy foo's constraints
      

      This can be done on the consumer side as well. I don't see this as a big deal. Dummy checks are common in Go too, for example to check that a type satisfies an interface:

         var _ MyInterface = (*MyType)(nil)                // *MyType must implement MyInterface
         var _ SomeInterface = GenericType[ConcreteType]{}  // also forces this instantiation of GenericType
      

      After all, Go checks that a type implements an interface only at the point where you assign or use it as that interface type.
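
      For instance, a minimal runnable sketch (names invented here): without the blank-identifier assertion, a missing method is only reported where the value is first used as the interface.

         package main

         import "fmt"

         type Greeter interface {
           Greet() string
         }

         type English struct{}

         func (English) Greet() string { return "hello" }

         // Compile-time assertion: the build breaks right here if English
         // ever stops satisfying Greeter, even if nothing else uses it as one.
         var _ Greeter = English{}

         func main() {
           var g Greeter = English{} // otherwise the check only happens at a use site like this
           fmt.Println(g.Greet())
         }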

      Thanks for your blog post. Unfortunately, the intentional limitations make the design space a massive headache and often lead to very convoluted APIs. I would actually argue that this explodes complexity for the developer instead of constraining it.

  • Just to clarify why this is a problem: it’s possible for foo and bar to be defined in different libraries maintained by different people, potentially several layers deep. And the author of the foo library tests their code and it compiles and all of their tests pass and everything is great.

    But it turns out that’s because they only ever tested it with types for which there is no conflict (obviously the conflicts can be more subtle than my example). And now a user instantiates it with a type that does trigger the conflict. And they get an error message, for code in a library they neither maintain nor even (directly) import. And they are expected to find that code and figure out why it breaks with this type to fix their build.

    Or maybe someone changes one of the constraints deep down, in a way that seems backwards compatible to them. And they test everything and it all works fine. But then one of the users upgrades to the new version of the library, which is considered compatible, and their build suddenly breaks.

    These kinds of situations are unacceptable to the Go project. We want to ensure that they categorically can’t happen. If your library code compiles, then the constraints are correct, full stop. As long as you don’t change your external API, it doesn’t matter what your dependencies do: if your library builds, so will your users’ code.

    This doesn’t have to be important to you. But it is to the Go project and that seems valid too. And it explains a lot of the limitations we added.
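
    A rough Go sketch of why this layered scenario can’t arise (hypothetical names, with both “libraries” inlined into one file): the middle layer’s constraint has to cover everything its body forwards to the deeper layer, so any mismatch is caught when the middle layer itself is compiled, not when some distant user instantiates it.

        package main

        import "fmt"

        // "Deep" library: only accepts types with a Close method.
        type Closer interface {
          Close() error
        }

        func shutdown[T Closer](v T) error { return v.Close() }

        // "Middle" library: because its body calls shutdown, its own
        // constraint must include Closer. If it were relaxed to `any`,
        // the call below would be rejected when this library is built,
        // long before any user instantiates cleanup.
        func cleanup[T Closer](v T) {
          if err := shutdown(v); err != nil {
            fmt.Println("close failed:", err)
          }
        }

        type file struct{}

        func (file) Close() error { return nil }

        func main() {
          cleanup(file{}) // user code: any type satisfying Closer is guaranteed to work
        }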