Comment by simonw

8 months ago

It took me an embarrassingly long time (given how keenly involved I was in OpenID stuff ~17 years ago https://simonwillison.net/search/?tag=openid&year=2007) to understand that OpenID Connect is almost unrelated to the original idea of OpenID, where your identity is a URL and you can prove that you own that URL.

OpenID Connect is effectively an evolution of OAuth.

You may already know this, I'm writing it as a note for my future self.

OpenID Connect (OIDC) is mostly concerned with authentication. On the other hand, OAuth (or, to be more specific, OAuth v2.0) is concerned with authorization.

>> OpenID Connect is effectively an evolution of OAuth.

In my opinion, OpenID Connect is actually an evolution of OpenID – in its vision/spirit:

- OIDC, like OpenID, primarily focuses on users' identity and authentication;

- OIDC, unlike OpenID, didn't (re)invent its own, significantly different authentication workflows. Instead, it built its authentication workflows on top of the existing OAuth spec (which was already being (ab)used for authentication in some places; unfortunately, that is still the case) to achieve its main objective (i.e. authentication). A rough sketch of what that looks like follows below.
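
To make that concrete, here is a minimal sketch, in Python, of what "built on top of OAuth" means in practice: an OIDC authentication request is just an OAuth2 authorization-code request with the openid scope (and a nonce) added. The endpoint, client_id and redirect URI below are made up.

      import secrets
      from urllib.parse import urlencode

      # Hypothetical values; a real client gets these from provider registration.
      AUTHORIZE_ENDPOINT = "https://idp.example.com/authorize"
      CLIENT_ID = "my-client-id"
      REDIRECT_URI = "https://app.example.com/callback"

      params = {
          "response_type": "code",          # plain OAuth2 authorization code flow
          "client_id": CLIENT_ID,
          "redirect_uri": REDIRECT_URI,
          "scope": "openid profile email",  # the "openid" scope is what makes it OIDC
          "state": secrets.token_urlsafe(16),
          "nonce": secrets.token_urlsafe(16),  # OIDC addition, echoed back in the id_token
      }
      print(f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}")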

---

Edit: rephrased to better communicate my thoughts (still not perfect; but, as the saying goes, perfect is the enemy of the good, so I'll stop here).

  • > OIDC, unlike OpenID, didn't (re)invent its own, significantly different authentication workflows. Instead, it built its authentication workflows on top of the existing OAuth spec

    Didn't OpenID predate OAuth? What should OpenID have built upon?

    • >> Didn't OpenID predate OAuth? What should OpenID have built upon?

      Yes, you're right about the "OpenID predates OAuth" part.

      However, from my point of view, the main source of confusion here is that the word OpenID is used in more than one sense:

      - First, OpenID as in the original OpenID authentication protocol, developed around 2005, which communicates the idea of a decentralized online digital identity: one way a user can assert that identity is via a URL under their control.

      - Second, OpenID as part of the compound noun "OpenID Connect" (which, per Wikipedia, is the "third generation of OpenID technology", published in 2014[1]), which implements user identity and authentication via workflows built on top of the OAuth2 spec.

      Now, in my earlier comment, i.e. "OIDC, unlike OpenID, ... built on top of the existing OAuth spec ... to achieve its main objective ...", I was using OIDC (and "OpenID" within it) in the second sense, in contrast to the original OpenID authentication protocol, where OpenID is used in the first sense (both senses as described above).

      I hope it helps.

      ---

      As an aside, looking at all the comments about "OpenId" and "OpenID Connect" as nouns, I'm reminded of the following post: Two Hard Things[2]

      ---

      [1] - https://en.wikipedia.org/wiki/Openid#OpenID_Connect_(OIDC)

      [2] - https://martinfowler.com/bliki/TwoHardThings.html

  • My problem with it being called OpenID Connect is that, in my head, an OpenID is a noun which means "a URL that you can use as your identity and prove that you own".

    That definition doesn't work for OpenID Connect. Is OpenID a noun any more? I don't think it is.

    • OpenID Connect can totally work that way if used with WebFinger for endpoint discovery, and occasionally this is implemented (though many websites do not).
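
      For example, a rough sketch of issuer discovery via WebFinger followed by fetching the provider's standard OIDC configuration document (the domain and account below are made up; in the original URL-as-identity spirit, a plain URL can also be used as the resource):

            import requests

            # Hypothetical account; a plain https URL also works as the resource.
            resource = "acct:alice@example.com"
            issuer_rel = "http://openid.net/specs/connect/1.0/issuer"

            webfinger = requests.get(
                "https://example.com/.well-known/webfinger",
                params={"resource": resource, "rel": issuer_rel},
            ).json()

            issuer = next(link["href"] for link in webfinger["links"]
                          if link["rel"] == issuer_rel)

            # The discovery document lists the authorization/token/userinfo endpoints.
            config = requests.get(issuer + "/.well-known/openid-configuration").json()
            print(config["authorization_endpoint"])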

      5 replies →

  • > OpenID Connect (OIDC) is mostly concerned with authentication. On the other hand, OAuth (or, to be more specific, OAuth v2.0) is concerned with authorization.

    That's a common refrain and it's quite inaccurate in that it's using a rather unorthodox definition of these terms. It's like the classic "The United States is not a Democracy, it is a Republic", where the speaker reinterprets Democracy as "Direct Democracy" and Republic as "Representative Democracy"[1].

    Same goes for "authorization" and "authentication" in OAuth and OIDC. In the normal sense, authentication deals with establishing the user's identity, while authorization determines what resources the user can access.

    OpenID Connect does indeed deal with authentication: it's a federated authentication protocol. The old OpenID also tried to introduce the concept of a globally unique identity in addition to authenticating that identity, as the GP mentioned. But OpenID Connect still supports federation: an application (the consumer) can accept users logging in through a completely unrelated service (the identity provider). I believe this was originally specified mostly with the idea of third-party authentication in mind (using Google or Apple to log in to another company's service, or conversely using your corporate SSO to log in to a SaaS web app), but since microservices are very popular nowadays, even services that don't support external login often use OIDC as the authentication protocol between their authentication microservice and other services.

    OAuth, on the other hand, started as a method for constrained access delegation. It allowed web services to issue a constrained access token that is authorized for a certain set of operations (defined by the "scope" parameter and often explicitly approved by the user). Getting constrained access through OAuth requires performing authentication, so you can say OAuth is also an authentication standard in a sense. But since OAuth was designed for pure delegation, it does not provide any user identity information along with its access token, and there was no standard way to use it for federated authentication. OpenID Connect essentially takes OAuth and adds identity information on top. To make things more complicated, there have been a lot of OAuth specifications published as RFCs[2] over the last decade, and a lot of them deal with client authentication and explicitly mention OpenID Connect (since arguably most OAuth implementations are also OIDC implementations now).

    In short, OpenID Connect is quite accurately described as an Authentication standard. But OAuth 2.0 has little to do with Authorization. It allows clients to specify the "scope" parameter, but does not determine how scopes are parsed, when a user or client is permitted to request or grant a certain scope, or what kind of access control model (RBAC, ABAC, PBAC, etc.) is used. That's OK, since it leaves implementers with a lot of flexibility, but it clearly means OAuth 2.0 is not an authorization standard. It only concerns itself with requesting authorization in unstructured form[3].
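
    For instance, a resource server that receives an OAuth2 access token just sees scope as a space-delimited string and has to invent its own interpretation of it. A tiny sketch (the scope names are made up):

          # The token response / introspection result carries scope as a plain string.
          granted = "read:files write:files"

          def allows(granted_scope: str, required: str) -> bool:
              # OAuth2 only says scopes are space-delimited tokens; what "read:files"
              # means, and who may grant it, is entirely up to the implementer.
              return required in granted_scope.split()

          print(allows(granted, "write:files"))   # True
          print(allows(granted, "delete:files"))  # False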

    Better examples for proper authorization standards are declarative authorization specification DSLs like XACML, ALFA, and Rego (the language used in Open Policy Agent). I guess you could also put RADIUS in as an example of a protocol that implements both Authentication and Authorization (although the authorization is mostly related to network resources).

    ---

    [1] To be fair, the meanings of "Democracy" and "Republic" have changed over time, and back in the 18th century, when the US was founded, it was popular to view the Athenian Democracy as the model case of a pure direct democracy and to use the term "Democracy" in this sense. Over time, the meanings changed and we got a weird aphorism that remains quite baffling to us non-Americans.

    [2] https://oauth.net/2/

    [3] RFC 9396 is a very recently published optional extension to OAuth that does structured authorization requests, and defines a standard method for the resource server to query the requested and granted authorization data.

    • Agree. I'd say OpenID Connect looks closer to SAML in terms of authenticating users and bootstrapping a "session", if you will. OAuth2, in my mind, is one potential approach to maintaining a session post initial authentication, used for ongoing authentication on a per-request basis. It also carries information about which client the session is associated with, to allow per-client authorization decisions to be made via the authorization mechanisms you mentioned above.

      Basically, both are concerned with different parts of authentication, initial vs ongoing (though two-legged OAuth2 is also an initial authentication step).

      The line between authentication and authorization can get quite fine, especially if the authorization policy is as simple as "this set of services can talk to me" if you use mTLS and a fixed set of trusted services in a trust-store.

    • > In short, OpenID Connect is quite accurately described as an Authentication standard. But OAuth 2.0 has little to do with Authorization. It allows clients to specify the "scope" parameter, but does not determine how scopes are parsed, when a user or client is permitted to request or grant a certain scope, or what kind of access control model (RBAC, ABAC, PBAC, etc.) is used. That's OK, since it leaves implementers with a lot of flexibility, but it clearly means OAuth 2.0 is not an authorization standard. It only concerns itself with requesting authorization in unstructured form[3].

      This misses the mark - scopes are abstractions for capabilities granted to the authorized bearer (client) of the issued access token. These capabilities are granted by the resource owner, let's say, a human principal, in the case of the authorization_code grant flow, in the form of a prompt for consent. The defined capabilities/scopes are specifically ambiguous as to how they would/should align with finer-grained runtime authorization checks (RBAC, etc), since it's entirely out of the purview of the standard and would infringe on underlying product decisions that may have been established decades prior. Moreover, scopes are overloaded in the OAuth2.0/OIDC ecosystem: some trigger certain authorization server behaviours (refresh token, OIDC, etc), whereas others are concerned with the protected resource.

      It's worth noting that the ambiguity around scopes and fine-grained runtime access permissions is an industry unto itself :)

      RFC 9396 is interesting, but naive, for a couple of reasons: 1) it assumes that this information is something you'd want to place on the front channel; 2) it does not scale in JWT-based token systems without introducing heavier back-channel state.

      I personally do not view OIDC as an authentication standard - at least not a very good one - since all it can prove is that the principal was valid within a few milliseconds of the iat on that id_token. The recipient cannot and should not take receipt of this token as true proof of authentication, especially when we consider that the authorization server delegates authentication to a separate system. The true gap that OIDC fills is the omission of principal identification from the original OAuth2.0 specification. Prior to OIDC, many authorization servers would issue principal information as part of their response to a token introspection endpoint.
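
      To illustrate the iat point, here is a rough sketch, using the PyJWT library, of what a relying party can actually verify about an id_token (the issuer, audience and key are placeholders). Even when every check passes, the token only attests that authentication happened at auth_time/iat, not that the principal is still valid at the moment of the check.

            import time
            import jwt  # PyJWT

            def verify_id_token(id_token: str, jwks_key, expected_nonce: str) -> dict:
                # Placeholder issuer/audience; a real client takes the key from the
                # provider's JWKS endpoint and the values from its own registration.
                claims = jwt.decode(
                    id_token,
                    key=jwks_key,
                    algorithms=["RS256"],
                    audience="my-client-id",
                    issuer="https://idp.example.com",
                )
                if claims.get("nonce") != expected_nonce:
                    raise ValueError("nonce mismatch (possible replay)")
                # All this proves is when the IdP says authentication happened.
                print("authenticated at", claims.get("auth_time", claims["iat"]),
                      "verified at", int(time.time()))
                return claims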

    • > Same goes for "authorization" and "authentication" in OAuth and OIDC. In the normal sense, authentication deals with establishing the user's identity, while authorization determines what resources the user can access.

      "Authorization", in the context of OAuth 2.0, means whether a third-party application is "authorized" to take actions on-behalf-of a resource-owner on some resource server. And, if the answer is yes, what is the "scope" of this "authorization".

      From the OAuth 2.0 RFC's abstract[1]:

            The OAuth 2.0 authorization framework enables a third-party
            application to obtain limited access to an HTTP service, either on
            behalf of a resource owner by orchestrating an approval interaction
            between the resource owner and the HTTP service, or by allowing the
            third-party application to obtain access on its own behalf. ...
      
      

      It's very clear that OAuth2 is all about third-party applications and their access to a "resource owner's" resources. As far as users and their access to their own resources are concerned, they are the "resource owner" and have all the "power" to do whatever they like (with their own resources, of course).

      For example, in the early days of Facebook, FarmVille needed user permission in order to post on users' Facebook walls and/or message users' friends if something interesting happened in FarmVille while they were playing. And this is just one funny example to get my point across; there are many use cases where it's super useful if users can grant a third-party application permission to do some useful work (whatever it happens to be) on their behalf.
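
      A rough sketch of that delegation shape with today's plumbing (the endpoints, scope, and client credentials below are all made up): after the user approves the requested scope, the third-party app exchanges its authorization code for a token whose power is limited to that scope.

            import requests

            # Hypothetical third-party game exchanging the code it received after
            # the resource owner approved the "post_updates" scope.
            token = requests.post(
                "https://social.example.com/oauth/token",
                data={
                    "grant_type": "authorization_code",
                    "code": "AUTH_CODE_FROM_CALLBACK",
                    "redirect_uri": "https://game.example.com/callback",
                    "client_id": "farm-game",
                    "client_secret": "***",
                },
            ).json()["access_token"]

            # The token acts on the user's behalf, but only within the granted scope.
            requests.post(
                "https://social.example.com/api/wall",
                headers={"Authorization": f"Bearer {token}"},
                json={"message": "Alice just harvested a prize pumpkin!"},
            )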

      > Better examples for proper authorization standards are declarative authorization specification DSLs like XACML ...

      I'm very familiar with XACML and similar access-control policy standards; in fact, I recently built an ABAC-based access-control service using an XACML-like spec for one of our customer-facing business applications.

      Yes, XACML and similar specs are good for some user-access / "authorization" use cases (based on business needs and threat model). Yet I'm not sure anyone would recommend them for third-party application authorization. Of course, it's not impossible and it can be done; however, I doubt anyone would recommend doing it when simpler solutions are available, unless there is a strong business case from a business-risk and security (threat-modelling) point of view.

      ---

      [1] - The OAuth 2.0 Authorization Framework: https://datatracker.ietf.org/doc/html/rfc6749

Also interesting is that OAuth2 is a bit too flexible in how you can put things together, and OIDC provides a lot of good practice about how to put them together.

So even systems where OIDC compliance is a non-goal are often partially OIDC compliant; there really is no reason to reinvent the wheel if parts of the OIDC standard already provide all you need.

  • OIDC is still far more flexible than I would have liked. It still allows the implicit flow, and it even created a new abomination that didn't exist in OAuth: the Hybrid Flow. If you just want to follow best practices, OAuth 2.1[1] or the OAuth Best Current Practices[2] are far better options.

    [1] https://oauth.net/2.1/

    [2] https://oauth.net/2/oauth-best-practice/

    • Yes, indeed. Both OAuth 2.1 & the BCP tighten things up a lot, although neither is technically final yet (the security BCP should be published as an RFC "any day now").

      For people looking for an easy-to-follow interoperability/security profile for OAuth2 (assuming they're most interested in the authorization code flow, though it's not exclusive to that), FAPI2 is a good place to look; the most recent official version is here:

      https://openid.net/specs/fapi-2_0-security-profile.html

      This is a good bit shorter than the 50 or so pages of security BCP :)

      FAPI2 should also be moving to 'final' status within the next few months.

      (Disclaimer: I'm one of the authors of FAPI2, or will be once the updated version is published.)

      3 replies →

    • Preferably the intersection of OIDC, OAuth 2.1, and the Best Current Practices.

      As in: use the OAuth 2.1 recommendations, including the OAuth Best Current Practices, to structure your clients inside the OIDC framework. That will mostly "just work" (if you are in control of the auth server), as it only removes from OIDC the possible setups you decide not to use.

      Though I'm not sure if requiring (instead of just allowing) PKCE is strictly OIDC compliant, implementations should support it anyway ("should" as in it's recommended that they do so, not as in they probably do so).
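
      For reference, the client side of PKCE is tiny; a minimal sketch of the S256 method from RFC 7636:

            import base64, hashlib, secrets

            # The client generates a one-time verifier and sends only its hash up front.
            code_verifier = secrets.token_urlsafe(64)  # 43-128 characters per RFC 7636
            code_challenge = base64.urlsafe_b64encode(
                hashlib.sha256(code_verifier.encode("ascii")).digest()
            ).rstrip(b"=").decode("ascii")

            # code_challenge (+ code_challenge_method=S256) goes on the authorization
            # request; code_verifier is only sent later with the token request, so an
            # intercepted authorization code is useless on its own.
            print(code_challenge)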

      1 reply →

    • Both systems have grown too much and are too complicated. At some point someone will replace them with something easier.

      I've used countless ID providers, and while they offer a very valuable service, the flows are too complicated and many implementations have a lot of security flaws because the users don't understand the implications.

      The implicit flow was trying to make it easier.

The naming is a complete nightmare.

OpenID Connect is an extension to OAuth2 (RFC 6749) that adds an authentication layer (to identify a user) on top of the OAuth2 authorization framework (which grants permissions).
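
The split is easiest to see in the token endpoint response when the request included the openid scope; a sketch with made-up values:

      # Typical token endpoint response for a request with scope=openid:
      token_response = {
          "access_token": "opaque-or-jwt-string",  # OAuth2 part: what the client may do
          "token_type": "Bearer",
          "expires_in": 3600,
          "id_token": "eyJhbGciOi...",             # OIDC part: signed JWT saying who the user is
      }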

Earlier versions of OAuth (1.0 and 1.0a) and OpenID (1 and 2) are unrelated, incompatible protocols that share similar names but are largely irrelevant in 2024.

Google with care.

  • Also "OpenID Connect" sounds like some sort of branded product rather than a technical standard.

Your talk at webstock back in 2008 is what originally got me interested in OpenID.

Yeah, it's a profile on top of OAuth, which leverages aspects (the authorization code grant, tokens) but adds some other functionality (another token with authentication information and some defined claims). I'm not aware of any other profiles with anywhere near the uptake of OIDC.

There are a few folks out there doing pure OAuth, but much of the time it is paired with OIDC. It's pretty darn common to want someone to be authenticated at the same time they are authorized, especially in a first-party context.

I wrote more on OIDC here: https://fusionauth.io/articles/identity-basics/what-is-oidc

No, OIDC is not an evolution of OAuth. One does authentication, the other authorization. They are two very different, but often intertwined, concepts, and each can also be used without the other.

Is it a question of how bad OAuth 2.0 is then?

> David Harris, author of the email client Pegasus Mail, has criticised OAuth 2.0 as "an absolute dog's breakfast"

https://en.m.wikipedia.org/wiki/OAuth

I keep on trying and failing to implement / understand OAuth 2 and honestly feel I need to go right back to square one to grok the “modern” auth/auth stack

  • It’s funny, I’m the opposite. I love OAuth for what it does, that is, federate permission to do stuff across applications. It makes a lot of cool integration use cases possible across software vendor ecosystems, and has single-handedly made interoperability between wildly different applications possible in the first place.

    I’d say it definitely helps to implement an authorisation server from scratch once, and you’ll realise it actually isn’t a complex protocol at all! Most of the confusion stems from the many different flows there were at the beginning, but most of that has been smoothed out by now.

  • Eran Hammer (the author of OAuth 1.0 and original editor of the OAuth 2.0 spec) resigned during the early draft specification process and wrote a more detailed criticism[1].

    I don't think I agree with every point he makes, but I think he had the right gist. OAuth 2.0 became too enterprise-oriented and prioritized enterprise-friendliness over security. Too many implementation options were made available (like the implicit grant and the password grant).

    [1] https://gist.github.com/nckroy/dd2d4dfc86f7d13045ad715377b6a...

    • I wasn't really following OAuth back in those days, but I have heard much of the history from those that were there at the time, including how some of the early specs in this area failed by being too secure, and hence too hard to implement and never widely adopted.

      Was OAuth2 wrong to land exactly where it did on security back in 2012 or before? It seems really hard to say: it clearly didn't have great security, but it was easy to implement, and where would we have ended up if it had better security but much poorer adoption?

      Does the OAuth working group recognise those failures, and has it worked hard to fix them over the years since? Yes, very much so.

      Has OAuth2 been adopted in use cases that do require high levels of security? Yes, absolutely: OpenBanking and OpenHealth ecosystems around the world are built on top of OAuth2, in particular the FAPI profile of OAuth2, which gives a pretty much out-of-the-box recipe for easily complying with the OAuth security best current practices document: https://openid.net/specs/fapi-2_0-security-profile.html (Disclaimer: I'm one of the authors of FAPI2, or at least will be when the next revision is published.)

      Is it still a struggle to get across to people in industries that need higher security (like banking and health) that they need to stop using client secrets, ban bearer access tokens, mandate PKCE, and so on? Yes. Yes it is. I have stories.

      1 reply →

  • OIDC is what fixes the “dog's breakfast” criticism. With OIDC you (in theory) don’t have to write custom modules per provider anymore.

    • It would fix a lot of the provider-specific aspects of OAuth2 if the spec were stricter about some claim (attribute) names in the JWT ID token. Some providers include groups, some don't. Some call it roles or direct_groups. Some include preferred_username, some don't. Some include the full name, some don't, and don't get me started on name and first_name.

      If you implement OIDC, you most certainly have to provide a configurable mapping from source claim names to your internal representation of a user object.
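
      Something along these lines tends to be unavoidable (the claim names echo the providers mentioned above; the internal shape is made up):

            # Per-provider claim mapping to one internal user shape; names vary wildly.
            CLAIM_MAP = {
                "provider_a": {"username": "preferred_username", "groups": "groups"},
                "provider_b": {"username": "email", "groups": "roles"},
                "provider_c": {"username": "sub", "groups": "direct_groups"},
            }

            def to_internal_user(provider: str, claims: dict) -> dict:
                mapping = CLAIM_MAP[provider]
                return {
                    "username": claims.get(mapping["username"]),
                    "groups": claims.get(mapping["groups"], []),
                }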

      2 replies →