Comment by antihero

2 days ago

This is why you have refresh tokens: your actual token expires regularly, but the client holds a refresh token that lets it obtain a new one. Revoking is a case of not allowing it to get a new one.
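
A minimal sketch of that two-token split, assuming Node.js/TypeScript with the jsonwebtoken package; the in-memory Map, the secret, and the 15-minute expiry are stand-ins for whatever the real system uses:

```typescript
// Sketch: short-lived access token plus a revocable refresh token.
// ACCESS_SECRET, refreshStore, and the 15m expiry are illustrative choices.
import jwt from "jsonwebtoken";
import { randomBytes } from "node:crypto";

const ACCESS_SECRET = process.env.ACCESS_SECRET ?? "dev-only-secret";

// Stand-in for Redis/SQL: refresh token -> user id.
const refreshStore = new Map<string, string>();

export function issueTokens(userId: string) {
  // Access token: services verify it offline, and it expires on its own.
  const accessToken = jwt.sign({ sub: userId }, ACCESS_SECRET, { expiresIn: "15m" });
  // Refresh token: opaque, only meaningful to the auth service that stores it.
  const refreshToken = randomBytes(32).toString("hex");
  refreshStore.set(refreshToken, userId);
  return { accessToken, refreshToken };
}

export function refreshAccessToken(refreshToken: string): string {
  const userId = refreshStore.get(refreshToken);
  if (!userId) throw new Error("refresh token revoked or unknown");
  return jwt.sign({ sub: userId }, ACCESS_SECRET, { expiresIn: "15m" });
}

// Revoking is just refusing to hand out new access tokens.
export function revoke(refreshToken: string): void {
  refreshStore.delete(refreshToken);
}
```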

This is an implementation detail in my opinion. There are cases where having the capability to force a refresh is desired. There are also cases where you want to be able to lock out a specific session/user/device. YMMV, do what makes sense for your business/context/threat model.

  • It is, but it's an architectural decision that forces expiry by default. So you should probably even have both. AWS runs 12-hour session tokens, but you can still revoke those to address a compromise in that 12-hour gap. The nice thing about forced expiry is that you get token revocation for free by virtue of not allowing renewal.

This is really just an optimization. It means that you don't need a server-side revocation check on the regular token, only on the refresh token. It doesn't change the fact that you should be able to revoke a session before it naturally expires.

  • Having a short session expiry is a workaround for not being able to revoke a token in real time. This is really the fault of stateless auth protocols (like OAuth), which do offline authentication by design. That design is what allows authentication to scale in federated identity contexts.

  • Yeah, but depending on how you set it up, you could have a very short expiry. If your auth system is as simple as: Verify refresh token -> Check a fast datastore to see if revoked -> Generate new auth token, this is very easy to scale, and you could have millions of users refreshing with high regularity (<10s) at low cost without impacting your actual services (which can verify the auth token offline). There's a sketch of this flow below this sub-thread.

    Say you had 1,000,000 users and they checked every ten seconds: that's 100,000 requests per second. If you have 1,000,000 users and can't afford to run a Redis instance or API that can handle an O(1) lookup plus a decode/sign at that level of traffic, you have operational issues ;)

    It's all a tradeoff. Yes, it means some user may have a valid token for ten more seconds than they should, but this should be factored into how you manage risk and trust in your org.
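
A rough sketch of the flow described in the comment above (verify refresh token, check a fast datastore for revocation, generate a new auth token), assuming Express, ioredis, and jsonwebtoken; the route, key names, the jti claim, and the ten-second expiry are illustrative rather than anything prescribed in the thread:

```typescript
// Sketch: refresh endpoint = signature check + one O(1) revocation lookup + re-sign.
// Downstream services verify the issued access token offline with ACCESS_SECRET.
import express from "express";
import Redis from "ioredis";
import jwt from "jsonwebtoken";

const redis = new Redis(); // fast datastore for the revocation check
const REFRESH_SECRET = process.env.REFRESH_SECRET ?? "dev-refresh-secret";
const ACCESS_SECRET = process.env.ACCESS_SECRET ?? "dev-access-secret";

const app = express();

app.post("/refresh", async (req, res) => {
  const token = req.get("authorization")?.replace(/^Bearer /, "") ?? "";
  try {
    // 1. Verify the refresh token's signature and expiry (no datastore needed).
    const claims = jwt.verify(token, REFRESH_SECRET) as { sub: string; jti: string };

    // 2. Single O(1) existence check to see if this refresh token was revoked.
    if (await redis.exists(`revoked:${claims.jti}`)) {
      return res.status(401).json({ error: "refresh token revoked" });
    }

    // 3. Mint a new short-lived access token; services verify it offline.
    const accessToken = jwt.sign({ sub: claims.sub }, ACCESS_SECRET, { expiresIn: "10s" });
    return res.json({ accessToken });
  } catch {
    return res.status(401).json({ error: "invalid refresh token" });
  }
});

app.listen(3000);
```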

You only have to do that if you must validate a token without having access to session data.

I doubt most systems are like that; you can just use what you call "your actual token" and check if the session is still valid. Adding a second token is rarely needed unless you have disconnected systems that can't see session data.
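
For contrast, a sketch of that single-token setup, assuming the "actual token" is an opaque session ID checked against session data on every request; Express, ioredis, and the key naming are assumptions for illustration:

```typescript
// Sketch: one opaque session token, validated against the session store per request.
// Revocation is trivial here: delete the session key and the token stops working.
import express from "express";
import Redis from "ioredis";

const redis = new Redis(); // stand-in for wherever session data actually lives
const app = express();

app.use(async (req, res, next) => {
  const token = req.get("authorization")?.replace(/^Bearer /, "") ?? "";
  const userId = await redis.get(`session:${token}`); // "is the session still valid?"
  if (!userId) {
    return res.status(401).json({ error: "session expired or revoked" });
  }
  res.locals.userId = userId; // hand the identity to downstream handlers
  next();
});

app.get("/profile", (req, res) => {
  res.json({ userId: res.locals.userId });
});

app.listen(3000);
```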

  • Not having to start all my API handlers with a call to the DB to check token validity significantly improves speed for endpoints that don't need the SQL DB for anything else, and reduces the load on my SQL DB at the same time.

    • Then don't hit the SQL DB directly; cache the tokens in memory, be it in Redis or just in your app. Invalidate the cache on token expiry (Redis has TTL built in). There's a sketch of this approach at the end of this sub-thread.

      UserID -> token is a tiny amount of data.

    • Does it actually improve speed though? The DB check is simply "does this key exist"; it can be done in an in-memory database and doesn't have to be the same DB as the rest of your data.

      Validating a token requires running cryptographic algorithms to verify the signature, and those are not fast.
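
To make the caching suggestion above concrete, here is a sketch using a plain in-process Map with a TTL; the Set stands in for the real session store, and every name here is made up for illustration:

```typescript
// Sketch: cache "is this token still valid?" in memory with a TTL so most
// requests never touch the session store at all.
const activeSessions = new Set<string>();   // stand-in for the real session store
const TTL_MS = 10_000;                      // accept ~10s of staleness, as discussed above

type CacheEntry = { valid: boolean; expiresAt: number };
const cache = new Map<string, CacheEntry>();

export function isSessionValid(token: string): boolean {
  const now = Date.now();
  const hit = cache.get(token);
  if (hit && hit.expiresAt > now) {
    return hit.valid;                       // answered from memory, no round trip
  }
  // Cache miss or expired entry: one "does this key exist" check against the store.
  const valid = activeSessions.has(token);
  cache.set(token, { valid, expiresAt: now + TTL_MS });
  return valid;
}

export function revokeSession(token: string): void {
  activeSessions.delete(token);
  cache.delete(token);                      // drop the cached answer immediately
}
```

With Redis instead of an in-process Map, the TTL comes for free: set the cache key with an expiry and let Redis evict it.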

That's a great way to interfere with local work when the network goes down.

  • If you've built a local app that has to authenticate you against a remote web service even when offline, and all the actual work is being done locally, you have much bigger design issues than authn, IMHO.