Comment by lenkite

2 days ago

> the type checking can only happen at the call-site, because you need to know the actual type arguments used, regardless of what the constraints might say.

No longer true after C++20. When you leverage C++20 concepts in templates, type checking happens earlier and more precisely than with unconstrained templates.

In the code below, a C++20-compliant compiler tries to verify that T satisfies HasBar<T> during template argument substitution, before trying to instantiate the body:

    template<typename T>
    concept HasBar = requires(T t) { t.bar(); };

    template<typename T>
      requires HasBar<T>
    void foo(T t) {
      t.bar();
    }

The error messages when you use concepts are also more precise and helpfully informative, like Rust generics.

As I said in the other comment, I'm not a C++ user, so I'm relying on cargo-culting and copy-paste. But I think gcc disagrees - otherwise this would not compile, as line 14 is provably invalid: https://godbolt.org/z/P8sWKbEGP

Or am I grossly holding this wrong?

  • You need to have something that uses those templates. In your godbolt example, add a struct S:

        struct S {
          bool M() { return true; }
        };
    
    
        int main() {
          S s;
          foo(s); // this now will check foo<S>
        }
    

    Now you will get compile errors saying that the constraint is not satisfied and that there is no matching function for call to 'bar(S&)' at line 14.

    • > You need to have something that uses those templates.

      Exactly. That is what I said:

      > because you need to know the actual type arguments used, regardless of what the constraints might say.

      It is because type-checking concept code is NP-complete: it is trivial to check that a particular concrete type satisfies constraints, but you cannot efficiently prove or disprove that all types which satisfy one constraint also satisfy another. Which you must do to type-check code like that (and give the user a helpful error message such as “this is fundamentally not satisfiable, your constraints are broken”).

      And it’s one of the shortcomings of C++ templates that Go was consciously trying to avoid. Go’s generics are intentionally limited so that you can only express constraints for which such proofs can be done efficiently.
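      For contrast, a minimal Go sketch (the HasBar interface and names are invented for illustration). The body of a generic function is type-checked against its constraint when the function is defined, not when it is called:

```go
package main

import "fmt"

// Hypothetical constraint for illustration.
type HasBar interface {
	Bar() int
}

// Checked at definition time: only methods listed in HasBar may be
// called on t, no matter what types callers eventually use.
func Foo[T HasBar](t T) int {
	return t.Bar()
	// return t.Baz() // compile error here, in the library itself,
	//                // before any instantiation exists
}

type S struct{}

func (S) Bar() int { return 1 }

func main() {
	fmt.Println(Foo(S{}))
}
```

      So a constraint mismatch is always reported against the library that contains it, never against an unsuspecting caller.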

      I described the details a while back: https://blog.merovius.de/posts/2024-01-05_constraining_compl...

    • Just to clarify why this is a problem: it’s possible for foo and bar to be defined in different libraries maintained by different people. Potentially several layers deep. And the author of the foo library tests their code and it compiles and all of their tests pass and everything is great.

      But it turns out that’s because they only ever tested it with types for which there is no conflict (obviously the conflicts can be more subtle than my example). And now a user instantiates it with a type that does trigger the conflict. And they get an error message, for code in a library they neither maintain nor even (directly) import. And they are expected to find that code and figure out why it breaks with this type to fix their build.

      Or maybe someone changes one of the constraints deep down. In a way that seems backwards compatible to them. And they test everything and it all works fine. But then one of the users upgrades to a new version of the library which is considered compatible, but the build suddenly breaks.

      These kinds of situations are unacceptable to the Go project. We want to ensure that they categorically can’t happen. If your library code compiles, then the constraints are correct, full stop. As long as you don’t change your external API, it doesn’t matter what your dependencies do: if your library builds, so will your users.

      This doesn’t have to be important to you. But it is to the Go project and that seems valid too. And it explains a lot of the limitations we added.