Comment by lifeisstillgood
8 months ago
Is it a question of how bad OAuth 2.0 is then?
“”” David Harris, author of the email client Pegasus Mail, has criticised OAuth 2.0 as "an absolute dog's breakfast", ””” https://en.m.wikipedia.org/wiki/OAuth
I keep on trying and failing to implement / understand OAuth 2 and honestly feel I need to go right back to square one to grok the “modern” auth/auth stack
It’s funny, I’m the opposite. I love OAuth for what it does, that is, federate permission to do stuff across applications. It makes a lot of cool integration use cases possible across software vendor ecosystems, and has single-handedly made interoperability between wildly different applications possible in the first place.
I’d say it definitely helps to implement an authorisation server from scratch once — you’ll realise it actually isn’t a complex protocol at all! Most of the confusion stems from the many different flows there were at the beginning, but most of that has been smoothed out by now.
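To make the "it isn't complex" point concrete, here is a minimal sketch of the two legs of the authorization code grant — the front-channel redirect and the back-channel token exchange. The endpoints and client credentials are hypothetical placeholders, not any real provider's values.

```python
from urllib.parse import urlencode

# Hypothetical endpoints for illustration only.
AUTHZ_ENDPOINT = "https://auth.example.com/authorize"
TOKEN_ENDPOINT = "https://auth.example.com/token"

def build_authorization_url(client_id, redirect_uri, state):
    # Front channel: the user's browser is redirected here to
    # authenticate and consent; the server redirects back with ?code=...
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile",
        "state": state,  # CSRF protection; must be verified on the callback
    }
    return f"{AUTHZ_ENDPOINT}?{urlencode(params)}"

def build_token_request(code, client_id, redirect_uri):
    # Back channel: the client POSTs this form body to TOKEN_ENDPOINT
    # to exchange the one-time code for an access token.
    return {
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "redirect_uri": redirect_uri,
    }

url = build_authorization_url("my-client", "https://app.example.com/cb", "xyz123")
```

That is essentially the whole happy path; most of the remaining spec text is about error handling and the optional extensions.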
Eran Hammer (the author of OAuth 1.0 and original editor of the OAuth 2.0 spec) resigned during the early draft specification process and wrote a more detailed criticism[1].
I don't think I agree with every point he makes, but I think he had the right gist. OAuth 2.0 became too enterprise-oriented and prioritized enterprise-friendliness over security. Too many implementation options were made available (like the implicit grant and the password grant).
[1] https://gist.github.com/nckroy/dd2d4dfc86f7d13045ad715377b6a...
I wasn't really following OAuth back in those days, but I have heard much of the history from people who were there at the time, and some of the early specs in this area failed precisely because they were too secure - and hence too hard to implement - so they were never widely adopted.
Was OAuth2 wrong to land exactly where it did on security back in 2012 or before? It seems really hard to say - it clearly didn't have great security, but it was easy to implement and where would we have ended up if it had better security but much poorer adoption?
Does the OAuth working group recognise those failures, and has it worked hard to fix them over the years since? Yes, very much so.
Has OAuth2 been adopted in use cases that do require high levels of security? Yes, absolutely. OpenBanking and OpenHealth ecosystems around the world are built on top of OAuth2 - in particular the FAPI profile of OAuth2, which gives a pretty much out-of-the-box recipe for complying with the OAuth security best current practices document, https://openid.net/specs/fapi-2_0-security-profile.html (Disclaimer: I'm one of the authors of FAPI2, or at least will be when the next revision is published.)
Is it still a struggle to get across to people in industries that need higher security (like banking and health) that they need to stop using client secrets, ban bearer access tokens, mandate PKCE, and so on? Yes. Yes it is. I have stories.
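Since PKCE comes up here: the mechanism itself is tiny. The client generates a fresh secret per authorization request, sends only its hash up front, and proves possession later, so an intercepted authorization code is useless on its own. A sketch per RFC 7636, using only the standard library:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # code_verifier: high-entropy, URL-safe string (43-128 chars per RFC 7636)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge: BASE64URL(SHA-256(verifier)), method "S256"
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The challenge goes in the front-channel authorization request;
# the verifier is sent only in the back-channel token request, where
# the server recomputes SHA-256(verifier) and compares.
```

The whole extension is a hash comparison, which is part of why it's frustrating when deployments skip it.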
Back in 2012, TLS was not enabled everywhere yet. OAuth 1.0 was based on client signatures (just like JAR, DPoP etc., but far simpler to implement) and it was a good fit for its time. One of Eran Hammer's top gripes with the direction OAuth 2.0 was taking was its removal of cryptography in favour of relying on TLS. I think this turned out to be a good decision in hindsight, since TLS did become the norm very quickly, and the state of cryptography at IETF during that period (2010) was rather abysmal. If OAuth 2.0 had mandated signatures, we'd have ended up with yet another standard pushing RSA with PKCS#1 v1.5 padding (let's not pretend most systems are using anything else with JWT).
But hindsight is 20/20, I guess. I think the point that has best stood the test of time is that OAuth 2.0 was more of a "framework" than a real protocol. There are too many options and you can implement anything you want. Options like the implicit flow or password grant shouldn't have been in the standard in the first place, and the language regarding refresh tokens and access tokens should have been clearer.
Fast forward to 2024: I think we've started going back to cryptography again, but I don't think it's all good. The cryptographic standards that modern OAuth specs rely on are too complex, and that leaves a lot of room for attacks. I have yet to see a single cryptographer or security researcher who is satisfied with the JOSE/JWT set of standards. While you can use them securely, you can't expect any random developer (including most library writers) to be able to do so.
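A concrete illustration of the JOSE footgun: the token's own (attacker-controlled) header tells the verifier how to verify it, which is the root of the classic `"alg": "none"` and RS256-to-HS256 confusion bugs. The token below is hand-crafted for demonstration, using only the standard library:

```python
import base64
import json

def b64url_decode(part):
    # JWT uses unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def b64url_encode(obj):
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# A forged token claiming it needs no signature at all.
header = b64url_encode({"alg": "none"})
payload = b64url_encode({"sub": "admin"})
forged = f"{header}.{payload}."  # empty third segment: no signature

# A naive library that trusts the header's "alg" field will accept
# this token completely unverified.
parsed_header = json.loads(b64url_decode(forged.split(".")[0]))
print(parsed_header["alg"])  # none
```

A safe verifier has to pin the expected algorithm out-of-band and ignore what the token says about itself — exactly the kind of discipline the format doesn't enforce.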
OIDC is what fixes the “dog's breakfast” criticism. With OIDC you (in theory) don’t have to write custom modules per provider anymore.
It would fix a lot of the provider-specific aspects of OAuth2 if the spec were stricter about claim (attribute) names in the JWT ID token. Some providers include groups, some don't. Some call it roles or direct_groups. Some include preferred_username, some don't. Some include full name, some don't, and don't get me started on name vs. first_name.
If you implement OIDC, you almost certainly must provide a configurable mapping system from source claim names to your internal representation of a user object.
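A sketch of what such a mapping layer might look like. The provider names and config shape here are hypothetical; the claim names (preferred_username, direct_groups) mirror the variance described above:

```python
# Hypothetical per-provider configuration: which source claim feeds
# each field of our internal user object.
CLAIM_MAPS = {
    "provider_a": {"username": "preferred_username", "groups": "groups"},
    "provider_b": {"username": "email", "groups": "direct_groups"},
}

def normalize_user(provider, id_token_claims):
    # Translate provider-specific ID token claims into one internal shape.
    mapping = CLAIM_MAPS[provider]
    return {
        "username": id_token_claims.get(mapping["username"]),
        "groups": id_token_claims.get(mapping["groups"], []),
    }

user = normalize_user(
    "provider_b",
    {"email": "a@b.example", "direct_groups": ["admins"]},
)
print(user)  # {'username': 'a@b.example', 'groups': ['admins']}
```

In practice the map usually lives in deployment config rather than code, so operators can onboard a new identity provider without a release.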
That sounds bad. Why would they under specify all that??
1 reply →