| |
I don't care if you want to use stateless client tokens. They're fine. You should understand the operational limitations (they may keep you up late on a Friday scrambling to deploy a token blacklist), but, we're all adults here, and you can make your own decisions about that. The issue with JWT in particular is that it doesn't bring anything to the table, but comes with a whole lot of terrifying complexity. Worse, you as a developer won't see that complexity: JWT looks like a simple token with a magic cryptographically-protected bag-of-attributes interface. The problems are all behind the scenes. For most applications, the technical problems JWT solves are not especially complicated. Parseable bearer tokens are something Rails has been able to generate for close to a decade using ActiveSupport::MessageEncryptor. AS::ME is substantially safer than JWT, but people are swapping it out of applications in favor of JWT. Someone needs to write the blog post about how to provide bag-of-attributes secure bearer tokens in all the major programming environments. Someone else needs to get to work standardizing one of those formats as an alternative to JWT so that there's a simple answer to "if not JWT then what?" that rebuts the (I think sort of silly) presumption that whatever an app uses needs to be RFC standardized. But there's a reason crypto people hate the JWT/JOSE/JWE standards. You should avoid them. They're in the news again because someone noticed that one of the public key constructions (ECDHE-ES) is terribly insecure. I think it's literally the case that no cryptographer bothered to point this out before because they all assumed people knew JWT was a tire fire. | |
| |
> they may keep you up late on a Friday scrambling to deploy a token blacklist Because every token has an iat datetime, you don't need a token blacklist to invalidate tokens. You just need some sort of tokens_invalid_if_issued_before_datetime setting that gets checked whenever you validate the signature of a token. The alternative is to store a UUID for each user, and just rotate those whenever they log out, change or reset their password, or there is some sort of security event. These are then stored in the payload and used as a secret. The one advantage over just using dates is that with the former, there can be weird bugs if you have multiple servers with clocks that are out of sync. But you shouldn't ever need to blacklist specific tokens, at least not unless you have some highly specialized use case. | |
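The scheme described above can be sketched in a few lines. This is an illustrative Python sketch with hypothetical setting names; it assumes the token's signature has already been verified:

```python
import time

# Hypothetical settings: a site-wide cutoff (bumped only for a global
# security event) plus a per-user cutoff (bumped on logout / password reset).
GLOBAL_TOKENS_INVALID_IF_ISSUED_BEFORE = 0.0

def token_is_valid(claims: dict, user_invalid_before: float) -> bool:
    """claims: the already-signature-verified payload, containing 'iat'."""
    iat = claims["iat"]
    if iat < GLOBAL_TOKENS_INVALID_IF_ISSUED_BEFORE:
        return False   # revoked for everyone
    if iat < user_invalid_before:
        return False   # revoked for this user only
    return True

now = time.time()
assert token_is_valid({"iat": now}, user_invalid_before=now - 3600)
assert not token_is_valid({"iat": now - 7200}, user_invalid_before=now - 3600)
```

Because the per-user cutoff lives on the user model that most frameworks load on every authenticated request anyway, checking it costs no extra lookup.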
| |
Agreed. Revoking all user sessions instead of a specific token is the common case. The only usage I see for revoking a specific token is when the user is deactivating a specific client. | |
| |
Also when a user changes their password, no? | |
| |
> The alternative is to store a UUID for each user Is that not effectively a server-side session? | |
| |
> Is that not effectively a server-side session? With most web frameworks (e.g. Django), the user model is retrieved on every request anyway. So it would be perhaps more accurate to say that it's a server-side session that's effectively not a server-side session, since no additional lookups are needed, only the user model lookup that's already done anyway. | |
| |
So every user on your system has to reauthenticate if one client token is compromised? That seems like an invitation to a thundering herd. Not necessarily fatal, but I'd consider it a nice feature to not have to invalidate everybody's tokens to get at one. | |
| |
> So every user on your system has to reauthenticate if one client token is compromised? No, because you would also store either a separate datetime or uuid on each user model. And if just one user has their credentials compromised, then you would bump the date or generate a new UUID for just that user. The global datetime would only be bumped if some site wide vulnerability were found. | |
| |
So there is a DB roundtrip involved? Like an inverted session. What's the point of using JWT then? | |
| |
For most web frameworks, the user model gets retrieved from the db automatically whenever an authenticated request is made. So there is no extra lookup. | |
| |
Oh man. Proponents say critics don't offer alternatives, while at the same time, if you dig deep enough, they've always just literally 'reverse'-engineered sessions. I give up. JWT is just a hip thing to do right now. :( | |
| |
Got it, didn't catch that you were referring to storing that timestamp per-user. That's what I do in my system. | |
| |
Assuming that:

- your JWT libraries don't do anything dumb like accepting the `none` algorithm
- you're using HMAC SHA-256
- your access tokens have a short (~20 min) expiration time
- your refresh tokens are easily revocable

Can you elaborate on the specific security advantages that a token encoded with ActiveSupport::MessageEncryptor would have over such a JWT implementation? Why do you think there aren't more AS::ME implementations out there if it's a superior solution? I only know of a Go implementation and haven't seen others: https://godoc.org/github.com/mattetti/goRailsYourself/crypto Edit: I saw you mention Fernet in another comment. As a Heroku alum I'm quite familiar with Fernet (we used it for lots of things), but to my knowledge those projects are on life support at best. | |
| |
You should also make sure to accept only tokens with the "HS256" alg header before you verify them, in case somebody decides to add a new signature algorithm to your library that turns out to be easily broken while using the same key you used for HS256. | |
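A sketch of what that check looks like in practice. This is hand-rolled Python for illustration only (in real code, use a vetted library and pass it an explicit algorithm whitelist); the point is that the alg header is inspected and pinned before any signature work happens:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def issue(payload: dict, key: bytes) -> str:
    """Issue an HS256-signed token (illustrative, not a full JWT library)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_hs256_only(token: str, key: bytes) -> dict:
    header_b64, body_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # Pin the algorithm *before* touching the signature: anything other
    # than the one algorithm we issue is rejected outright.
    if header.get("alg") != "HS256":
        raise ValueError(f"rejected alg: {header.get('alg')!r}")
    expected = hmac.new(key, f"{header_b64}.{body_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(body_b64))

key = b"demo-shared-secret"          # placeholder key for illustration
token = issue({"sub": "alice"}, key)
assert verify_hs256_only(token, key) == {"sub": "alice"}
```

A token forged with `"alg": "none"` and an empty signature fails the alg check before its (missing) signature is ever considered.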
| |
If it's your software generating the tokens, then that means they'd need the shared key, or private key in order to sign the token... which is already a problem. Now if you're accepting tokens from a third party, that's another issue, and should be much more constrained. I go farther still and require a VERY short expiration on service to service requests (documented as 1m, coded as 2m) which combined with https limits the chance of replay attacks. | |
| |
yes, that's another good point and probably something many folks mess up. I am explicitly specifying my algorithm for both encode/decode :) | |
| |
> using HMAC SHA-256 HMAC is great for monolithic architecture, but I've quite enjoyed using asymmetric RS256. I don't think that's something AS::ME offers. | |
| |
I'd also be interested in hearing an answer to this from tptacek. My (limited) understanding is the security issues arise around the implementation & handling some of the default claims (NBF, IAT, etc.) and producing/verifying the signature. But I don't quite understand how moving to a different format solves these issues? | |
| |
I appreciate that this comment is wise from a cryptosystems perspective, i.e. there are a number of ways to do JWT wrong, not enough safety guards, etc., but is there not a subset of JWT that is safe to use? The OP article makes it sound like it's impossible to use JWT correctly, but I was under the impression that if I 1) am the issuer, and 2) I hardcode a single algorithm on my API endpoints, that neither of the issues in the OP apply. (The EC issue would apply if that algorithm was chosen). Is there a safe subset of JWT? And isn't there value to small players in using the safe subset of JWT which is battle-hardened by guys with big security teams like Google? | |
| |
It's not that we need an RFC-standardized solution for everything, but I'd rather not roll my own anything related to crypto. Would something like crypto_auth(json(bag)) be better here? (crypto_auth from libsodium, json being sorted without whitespace) | |
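A sketch of that construction, with Python's stdlib hmac standing in for libsodium's crypto_auth (which is HMAC-SHA-512-256; the shape is the point here, not the primitive, and the key is a placeholder):

```python
import hashlib, hmac, json

KEY = b"an-actual-32-byte-secret-key!!!!"   # placeholder key for illustration

def canonical(bag: dict) -> bytes:
    # Sorted keys, no whitespace: one logical value -> one byte string.
    return json.dumps(bag, sort_keys=True, separators=(",", ":")).encode()

def seal(bag: dict) -> bytes:
    msg = canonical(bag)
    return hmac.new(KEY, msg, hashlib.sha256).digest() + msg   # tag || message

def open_sealed(token: bytes) -> dict:
    tag, msg = token[:32], token[32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return json.loads(msg)

token = seal({"user_id": 42, "role": "admin"})
assert open_sealed(token) == {"user_id": 42, "role": "admin"}
```

No algorithm negotiation, no header to parse: the verifier knows exactly one key and one primitive, which is the property crypto_auth gives you.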
| |
Yes, that would be much better, and it's what I mean when I say that JWT doesn't bring anything to the table. | |
| |
"json being sorted without whitespace" What is the significance of that part? | |
| |
It makes JSON deterministic, which it isn't by default (e.g. {"foo": 1, "bar": 2} and {"bar":2,"foo":1} are both valid serialisations). Of course, it'd be better still to use a format _meant_ to provide human-readable canonical representations of data, e.g. Ron Rivest's canonical S-expressions (http://people.csail.mit.edu/rivest/Sexp.txt), but of course this is information technology and we have to reinvent the wheel — usually as an irregular polygon — every 3-4 years rather than using techniques which are tried and true. | |
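A quick illustration with Python's json module; sorted keys plus fixed separators make the two spellings serialize to identical bytes, so a MAC over one verifies the other:

```python
import json

def canon(d: dict) -> str:
    # Sorted keys, no whitespace -> deterministic serialization.
    return json.dumps(d, sort_keys=True, separators=(",", ":"))

a = {"foo": 1, "bar": 2}
b = {"bar": 2, "foo": 1}
assert canon(a) == canon(b) == '{"bar":2,"foo":1}'
```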
| |
Ah yes, similar to canonicalization of XML for XMLSignature? Presumably this means that you have to have a "flat" JSON structure rather than lots of nested objects and arrays? | |
| |
Afaik you just need to alphabetize the properties of every object | |
| |
> there's a reason crypto people hate the JWT/JOSE/JWE standards. You should avoid them

Could you give more info about this? If ECDHE-ES is avoided, why else is JWT insecure? | |
| |
Seems like, practically, that suggests three options:

1. Take something like AS::ME that already has real use and implement it for as many platforms as possible
2. Define a really restricted subset of JWT (which may be necessary anyway for purposes of saying to management "yes, we're buzzword compliant")
3. Invent a non-AS::ME "bag-of-attributes secure bearer token" system and implement it everywhere.

I think part of the trouble with 3 is that people like me genuinely worry that if we tried to roll our own we'd manage to do worse than JWT in spite of JWT being terrible. So maybe step zero is for somebody with crypto knowledge to explain one sane way to do the "bag-of-attributes secure bearer token" part ... or you to point the audience to a blog post that already exists that describes it, because, well, because I suspect quite a few of us trust you to say "this post actually describes a sensible plan" while we don't trust ourselves to be able to tell. | |
| |
1. The reason AS::ME can be that nice is because it assumes a monolithic architecture and a single framework. For example, AS::ME relies on shared secrets, which I think makes it unfit for distributed systems. Implementing JWK with asymmetric keys can really reduce provisioning and configuration costs. Keeping the signing secret on one private, hardened auth server (or cluster) also allows smart things like automated key rotation.

2. 100%. There's at least one right way to do JWT, but more ways to do JWT wrong.

3. JWT et al provide a fine starting point; I don't see a reason to start from scratch. I'm not tied to the JWT spec, but I'm quite happy with what I've been able to accomplish using a careful implementation in my AuthN server: https://github.com/keratin/authn | |
| |
Agreed... my first two experiences with JWT were creating my own implementation... in my case, the allowed public keys had to come via https from a specific server in the domain, even without PKI using a shared key... I had hard-coded the algorithm used for the signature. This could just as easily be filters on a library; it's just that my first experience didn't have a valid library available, so I had to composite one (I did use an existing crypto library, though). JWT is a perfectly valid structure, even if the spec is more flexible than it should be. For that matter, https has also historically supported algorithms and protocols later broken. Nobody is suggesting we stop using HTTPS, only that we limit the acceptable protocols and algorithms supported. | |
| |
No, almost everybody in the field laments SSL and TLS. It's probably too late at this point (and has been for well over a decade) to get to something better than TLS, and so TLS 1.3 is what we're stuck with. But that is demonstrably not the case with JWT. We don't have to convince all the browser vendors to upgrade out of JWT in lockstep. Avoiding another 20 years of hair-on-fire crypto vulnerabilities seems reason enough to lobby against that spec. | |
| |
But any given algorithm today may not be sufficient tomorrow... so we just don't use ANY encryption? JWT is a perfectly valid structure... there are options as to signing, so use/limit as needed. | |
| |
And I think JWT is more flawed than SSL/TLS. | |
| |
Or just use Fernet: https://github.com/fernet/spec/blob/master/Spec.md Fernet was written originally for Python but there's a Ruby implementation, a Golang implementation, and a Clojure implementation. I believe that for at least 80% of applications considering JWT, Fernet provides exactly the right amount of functionality, and does so far more safely than JWT. | |
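For reference, the Python implementation of Fernet ships in the `cryptography` package (assumed installed here); minimal usage looks like this:

```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()    # 32 url-safe base64 bytes; keep it server-side
f = Fernet(key)

# Encrypt-and-authenticate in one call; tokens carry an issue timestamp.
token = f.encrypt(b'{"user_id": 42}')
assert f.decrypt(token, ttl=3600) == b'{"user_id": 42}'   # reject if older than ttl

# Tampering or the wrong key raises InvalidToken instead of returning garbage:
try:
    Fernet(Fernet.generate_key()).decrypt(token)
    raise AssertionError("wrong key accepted")
except InvalidToken:
    pass
```

There is exactly one ciphersuite and no algorithm negotiation, which is most of why it avoids JWT's failure modes.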
| |
This looks very much like the approach to session-data-in-encrypted-and-signed-cookie I've seen used to great success in lots of places (where for a stateless-ish API the contents are just a user id or whatever). Am I right that this would work fine both in that or in e.g. a query parameter? (sorry if I'm asking really stupid questions, but I'd rather look stupid than accidentally a security hole) | |
| |
For my service-to-service requests, I tend to require the token itself be set to an expiration of less than 1 minute from creation. I actually code 2 min in the check, but document 1 for access clients. This allows for more than enough drift and, with https, mitigates the level of risk for replay attacks. Beyond this, a header/signature for the body/payload will reduce the risk of the rest. As to being able to select the signature algorithm, or set the uri for the public key... ignore this, or whitelist domains or methods. Yes, there are some holes regarding a "by the books" implementation... that doesn't mean you need to support the entire spec. I implemented about 1/2 the SCORM spec in an API once, and it was 8 years before a specific course needed a part that was missing. Yes, it isn't 100% compliant, but if it does the job, and is more secure as a result, then I'm in favor of it. | |
| |
> that rebuts the (I think sort of silly) presumption that whatever an app uses needs to be RFC standardized.

I thought the crypto mantra was "never roll your own." An RFC (Request For Comments) is a literal attempt to follow that advice by seeking the advice of cryptographers who are presumably smarter at coming up with crypto standards. Where were the cryptographers during the draft phase when comments were being solicited?

> I think it's literally the case that no cryptographer bothered to point this out before because they all assumed people knew JWT was a tire fire.

Oh. Cunningham's Law. You know, if you're not part of the solution, you're part of the precipitate. That's some considerable JWT fallout, since companies making a business out of security are endorsing it. Auth0, for example: https://auth0.com/docs/jwt Until I read this article, I was under the impression JWT was the best new thing. | |
| |
I don't care about these moral arguments. I'm making a simple, positive claim: JWT is bad. You can blame whoever you'd like for it being bad, but as engineers, you need to understand first and foremost that JWT is bad, and reckon with your feelings about that later. You have a responsibility to build trustworthy systems, and you get no pass on building with flawed components simply because you wish experts had made those components less flawed. | |
| |
"Contribute to standards processes" and "don't roll your own" aren't moral arguments. They're complementary pieces of practical advice on how to make trustworthy systems. Meanwhile your comments bury whatever substantive content they might hold under layers of emotional, accusatory garbage. Maybe get those feelings locked down a bit before posting? | |
| |
Do you have any recommendations for SPAs where the API is hosted on a different subdomain than www? I think everyone agrees that JWT is a bad spec, the problem is that setting cookies across subdomains ranges from difficult to impossible. If you have access to an experienced devops team who can securely maintain an nginx server with some proxy logic then maybe that's a possibility, but otherwise what other viable options are there? Wishing that JWT were more secure won't make it so, but neither will wishing that CORS were more flexible. And if it's a choice between subclassing the JWT handlers to provide a couple extra security checks vs trying to securely configure and maintain a whole extra proxy setup, then the former seems like the lesser of the evils. | |
| |
I have no feelings on the subject. I don't use JWT. I just want to point out this sounds like (and continues to sound like) "Roll your own" advice to me. | |
| |
> Either way, if you're serving up a REST API to a JavaScript UI... what's NOT a good option is server-side session state (e.g. Java servlet sessions)

Can you explain what you mean, as opposed to other kinds of session tokens? Roy Fielding makes it abundantly clear[0] in the seminal delineation of REST:

"We next add a constraint to the client-server interaction: communication must be stateless in nature, as in the client-stateless-server (CSS) style of Section 3.4.3 (Figure 5-3), such that each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client."
This has given me pause to doubt just how many people are really implementing REST, and/or how useful a model it is in modern web applications. [0]https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arc... | |
| |
> This has given me pause to doubt just how many people are really implementing REST, and/or how useful a model it is in modern web applications. Not many. I've made a good career out of consulting for people who are doing REST wrong :) Usually they are either storing state or forgetting about HATEOAS (hypermedia). Often they are also negotiating format and version incorrectly and in a way that doesn't scale. | |
| |
I'm going to contend that a "correct" implementation of REST has never existed | |
| |
IMO sessions for authentication is fine as long as we don't store any session variables. In the end someone's going to be keeping track of the number of GET requests made by each API consumer on each endpoint, and practically it's not breaking REST as long as that state doesn't affect the information GETable by the client. | |
| |
I just can't reconcile this: if I hit an endpoint and get back certain data with a 2XX response code because I previously accessed a "login" resource, but would have gotten a 4XX response code if I had not gone to that prior "login" resource, then my request for the second endpoint takes advantage of stored context on the server, and I can't see how that doesn't violate REST. Even worse, if I restart the server, change out its database, or make some other stateful change on it, the response changes. I don't mean to imply that this is a universal sin of computing architecture, but it sure as hell looks like a violation of REST, and it raises the question of what our standard is. Headers, path-based nomenclature, network layering, authentication: I'm on board with all of that. I just worry that a lot of people might be falling victim to commonplace misconceptions of just what REST is, which in turn may be causing us (the web dev community) to make misplaced value judgments. | |
| |
REST doesn't mean no state in the world exists. It's not a violation at all that an endpoint changes its output. REST only reasons about idempotency, not reproducibility. | |
| |
> Rest only reasons about idempotency, not reproducibility. Absolutely. If you GET a collection resource, then POST a new item into the collection, then GET the collection again, the response will have changed. Having these kind of temporal dependencies on the answer you receive is not something REST argues against. | |
| |
None of that violates REST; REST is not statelessness. In fact, REST (REpresentational State Transfer) is all about state and how it is changed and how those changes are manifested. | |
| |
So, does this include web tokens too, or is auth a special case? | |
| |
I could care less about this request, but I can't be bothered to make the effort. | |
| |
I am American, and I'm sure this bothers me more than it actually bothers the queen. Between that and the incessant use of the incorrect phrase "for free" instead of the correct phrase "for nothing" it's amazing that I can stand to read anything on the internet. xD | |
| |
You should write a browser extension that fixes that. I'd use it. | |
| |
Ooh, I like that idea. Don't tempt me! xD | |
| |
What's difficult about setting up a Redis cluster to back sessions? Yes, it adds a point of failure... so does having a database of any kind. However, I'd hardly call it difficult. If you're on Amazon, you can just create an Elasticache cluster and not even concern yourself with the ops. I don't hate secure cookies or anything, but some people act like plain-old regular cookies haven't been thoroughly solved by this point. Related: Something people get wrong a lot with secure cookies is worrying about obscuring the cookie more than securing it. Encryption does not give you authentication; you need a MAC for that. An encrypted message can still be blindly modified. Imagine being able to change a UID stored in a "secure" cookie even if you couldn't 100% control which. Eventually, if you try enough permutations, you're going to escalate your privileges! | |
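A deliberately toy illustration of that point, using an XOR keystream as the stand-in cipher (this is NOT real crypto; it exists only to show the bit-flip): an attacker who can't read the cookie can still flip a chosen plaintext bit, and only a MAC catches it.

```python
import hashlib, hmac

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stream "cipher", purely to illustrate malleability.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

key = b"0123456789abcdef"
cookie = xor_cipher(b"uid=100", key)

# Attacker flips the low bit of the first byte, blindly, without the key:
tampered = bytes([cookie[0] ^ 0x01]) + cookie[1:]
assert xor_cipher(tampered, key) == b"tid=100"   # decrypts "fine" to wrong data

# An HMAC over the ciphertext catches the modification before decryption:
tag = hmac.new(key, cookie, hashlib.sha256).digest()
bad = hmac.new(key, tampered, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, bad)
```

Real stream and CTR-mode ciphers are malleable in exactly this way, which is why authenticated encryption (encrypt-then-MAC, or an AEAD mode) is the baseline.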
| |
Nothing's difficult about that. But that doesn't mean that it's a good idea. How about distributing a signature and encryption key to all your servers, and using them to secure the outgoing and verify the incoming tokens. If you want easy, that's probably even easier than setting up a Redis store and tying your services to it. Need an emergency revoke of every token? Easy: Replace your signature key. Any older token will fail signature verification. In which case, your system should require authentication and then generate a new token with an updated signature. | |
| |
Here is the use case that led to the first implementation of JWT I was ever part of. You have a single-page webapp that uses two APIs for part of the application. For security reasons the APIs are zoned in such a way that neither of them can communicate with the other. The machine of the user sits in a zone where it can send HTTP requests to the zones of either API. Now design a way to manage sessions across both APIs... There are certainly a number of ways to accomplish this, but JWT was the cleanest and most performant. | |
| |
When the authentication info needs to be used by servers in different locations around the world, JWT is better. Getting the session info from a Redis server or something equivalent isn't free; deciphering a JWT token may be faster. So the most appropriate method depends on the use case. If all the servers are in one location, a random byte sequence as a session-id key with cached info is the most simple, compact, and efficient. | |
| |
What happens when that replication breaks? Can people still log in? Can you still validate their sessions? Or are you going to have an outage in a geographical region? | |
| |
Fall back to connecting to the main instance directly | |
| |
And if the replication is down because the main instance can't be reached for a minute? | |
| |
And if your user base is hit by a nuclear bomb? | |
| |
The problem is you're suggesting real-world, ops-capable solutions to a problem "Devops" (as in, developers can do it, we don't need ops) people don't want to understand because they'd rather jump on yet another poorly designed "solution" to a problem they don't really have. | |
| |
JWT signature verification usually takes less time than the network request to a redis server... assuming it's non-local, because HA. | |
| |
I'll just say this: if the most expensive part of your API is calling Redis, it probably doesn't have anything worth authenticating for in the first place. | |
| |
I can make an attempt at an alternative: Distribute signing and encryption keys to all servers. Have them encrypt and sign the outgoing serialized token, whatever that consists of. Have them verify and decrypt the incoming token. This is just straight-forward cryptography, with keys known only to the server, so I'm pretty sure you won't get any arguments from (1). (And, I suppose the encryption could even be skipped, if you don't care that the internal format of the token is known.) Emergency revocation of all tokens [0] is simply rotating the signing key. All tokens issued prior to the rotation will fail verification with the new key. That should trigger the authentication process, which will issue a new token with the updated key. This solves the revocation issue present in argument (2). [0] Any other form of revocation is, in my opinion, not distinguishable from having server-side state. If you have to keep a list of bad tokens, why not just keep a list of the good tokens instead... And then it's only a short hop to the token being nothing but a key to lookup the full session state on the server. | |
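A stdlib-only Python sketch of that proposal (encryption omitted, per the aside; the key values are placeholders): sign outgoing tokens, verify incoming ones, and revoke everything at once by rotating the key.

```python
import hashlib, hmac, json

def sign(payload: dict, key: bytes) -> bytes:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).digest() + msg   # tag || message

def verify(token: bytes, key: bytes):
    tag, msg = token[:32], token[32:]
    if not hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()):
        return None            # verification fails -> force re-authentication
    return json.loads(msg)

old_key, new_key = b"signing-key-v1", b"signing-key-v2"
token = sign({"uid": 42}, old_key)
assert verify(token, old_key) == {"uid": 42}

# Emergency revocation of every outstanding token: rotate the key.
assert verify(token, new_key) is None
```

Every server holding the rotated key now rejects every pre-rotation token, with no blacklist and no shared store.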
| |
This is exactly the solution I use. I use a secret server (Vault, using Consul as a backend) to securely distribute and manage the encryption keys. | |
| |
And if you want to be able to authenticate users to a service that you do not want to send private keys to? | |
| |
Then sign / encrypt the token with a private key and distribute a public key to the untrusted peers. Since you're just using bog-standard cryptography primitives you can change them at will to match your use case. Need to handle untrusted peers? Asymmetric keys are the answer. | |
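With the `cryptography` package (assumed installed), the asymmetric variant is only a few lines, e.g. with Ed25519: the private key stays on the auth server, and any untrusted peer can verify with the public key alone.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()   # stays on the auth server
verify_key = signing_key.public_key()        # safe to hand to any peer

token_body = b'{"uid": 42}'                  # illustrative payload
sig = signing_key.sign(token_body)

verify_key.verify(sig, token_body)           # raises InvalidSignature on failure
try:
    verify_key.verify(sig, token_body + b"!")
    raise AssertionError("tampering not detected")
except InvalidSignature:
    pass
```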
| |
Or, you use an asymmetric key pair to sign the JWT, and lock your environment to only public keys signed by your DC's cert in your org. If only that were supported by JWT... oh, that's right, it is. Nobody has to implement the FULL spec; you only need to allow what your environment needs. | |
| |
I'm not really sure what your point is, other than an apparently fervent desire to prove JWT's worth while speaking down to me. I didn't say that JWT couldn't do that... You're the one who set up the strawman of untrusted parties, then gleefully knock it down after I address the issue. You have contributed no other valid feedback to my proposal, just a defense of JWT which is not an answer to anything I ever stated. I just want you to know that such tactics are not very appreciated from this side of the conversation. What does JWT provide that using bog-standard crypto primitives in the way I described doesn't? Other than a name and a standard? | |
| |
You do understand that this is fundamentally the exact underlying mechanic of JWT? | |
| |
You call it the underlying mechanic. I call it the only necessary mechanic. If those additional mechanics are where the security issues come from, then just get rid of them. | |
| |
No one is forcing you to accept/implement ALL possible aspects of JWT... in fact, that's generally a bad idea. Only implement what you need. If a specific algorithm is bad, don't allow it... Isn't this how HTTPS works? HTTPS today doesn't use the same SSL and algorithms allowed in 1996; it's evolved and changed in practice. The author isn't suggesting everyone just stop using HTTPS because some possible algorithm has been determined to be weak, is he? | |
| |
> No one is forcing you to accept/implement ALL possible aspects of JWT.. in fact, that's generally a bad idea... I think this is very interesting, because it basically validates the article's argument. People are going to feel safe implementing JWT because it's an RFC, without knowing where these "generally bad idea" landmines are. That's the dangerous part. And yes, the same issues exist in SSL / TLS. And guess what? There are loads of articles just like this one stating how dangerous older modes of these protocols are. Articles like this and the discussions they spawn are exactly the kind of thing necessary to move the world forward into safer implementations. | |
| |
> If I'm providing a REST API, then I'd prefer a token string that I could pass as a header value rather than forcing the use of cookies Aren't cookies just strings passed as HTTP headers? | |
| |
I hope someone can explain to me in practical terms difference between a session cookie string on a request and a token as header value. | |
| |
Cookies are just strings in a header. The difference is that unlike normal headers, browsers treat cookie headers in a special way. They automatically add and remove keys from them, and they allow the server to set a cookie in a way that client-side script can neither see nor change (HttpOnly cookies) | |
| |
The downside being that the browser will attach it to every request; if you use cookies, you MUST be aware of this, or you are (IMO) pretty much guaranteed to write a CSRF vuln. (I'm much more in the localStorage + Authorization header camp for this reason. I recommend [1] for reading. If malicious JS is running, cookies won't save you, since the malicious JS is capable of simply making the request itself, to which the cookie will automatically be attached by the browser. localStorage+JS eliminates CSRF. If someone XSS's you, the difference is irrelevant.) [1]: http://blog.portswigger.net/2016/05/web-storage-lesser-evil-... | |
| |
> I'm much more in the localStorage + Authorization header for this reason. That's just exchanging one security issue for another. Now you have the ability for people to steal tokens after an XSS attack. And yes, that's significantly different from "can make requests on your behalf". The correct solution is to solve the CSRF vulnerabilities by using CSRF tokens. Not to change your auth persistence mechanism. | |
| |
CSRF protection:

* Use SameSite cookies (unfortunately, not yet supported by all browsers)
* Don't accept application/x-www-form-urlencoded, multipart/form-data, or text/plain at your endpoints, or
* Use CSRF tokens if you need to accept server-side rendered HTML forms | |
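For the first item, Python's stdlib can emit the header directly (SameSite support in `http.cookies` requires Python 3.8+); the cookie name and value here are placeholders:

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["session"] = "opaque-session-token"
c["session"]["samesite"] = "Strict"  # browser omits it on cross-site requests
c["session"]["httponly"] = True      # invisible to page JavaScript
c["session"]["secure"] = True        # sent over HTTPS only

header = c.output()                  # a ready-to-send Set-Cookie header line
assert "SameSite=Strict" in header and "HttpOnly" in header
```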
| |
CSRF is the easiest vulnerability to avoid; a CSRF token solves all CSRF attacks. XSS is a lot harder to protect against; one of the better ways to mitigate its effects is to use HttpOnly cookies | |
| |
Well, I keep running into CSRF vulns. in the wild, so… XSS is avoidable by systematically having a framework that escapes any inputs that are run through it. (jinja2, on the server, can do this, though it defaults to not, which I wish wasn't true.) I'm not saying that XSS is much better than CSRF, really; I've seen these, too. The point (and that of the linked article) is more that either you're not subject to XSS, in which case localStorage is strictly better than cookies (it is default-secure), or you're subject to XSS, in which case neither saves you. | |
| |
The idea that Session Hijacking attacks are irrelevant when a user can use XSS to perform any action on the client is interesting. Definitely, if your service is a valuable target that hackers will spend the time to reverse engineer your client code to create custom-tailored XSS attacks, then protecting against Session Hijacking does seem to be pointless. But session hijacking is considered to be a very common attack (though I can't find any real numbers anywhere, maybe it's not?), and most services with low attack value will probably be better served by httpOnly cookies and CSRF tokens that make worthwhile XSS attacks more time consuming than by preventing XSS altogether, which is an enormous, continuous effort. Also, you're implying that CSRF is hard to defend against (otherwise why do you keep running into it) but in the same breath saying that XSS is simple to defend against. If people can't defend against CSRF (which is usually just a simple flag for most frameworks), they aren't prepared to defend against XSS, which means getting into a security mindset in all things. A serverside template is not enough: XSS can manifest in headers, in clientside code, in third party code, in redirections, and it is easy for a developer to mistakenly add a new attack surface. | |
| |
One small correction: the client (i.e., the web browser or other web client) can see HTTP-only cookies just fine; code running in a conforming browser cannot. But if I write some code using DRAKMA, urllib or net/http, I can see those cookies just fine. | |
| |
The very next sentence: > Although I suppose you could argue that a cookie is just another header value. So... yes. | |
| |
Ack - missed that - was visually jumping around some. But yes, it is. | |
| |
I grow more and more tired of posts touting engineering sensationalism. Here is a point of order for developers who work on something, realize it isn't idiot-proof (them-proof) out of the box, and then want to write a sensational post: implementation and design are a core part of anything you do, and considering the risks and accounting for them is part of doing business. Having worked with large organizations that do active and passive scanning of the web, I am constantly shocked how often we are contacting someone about basic SQL injection in their application... in 2017. JWT is an incredibly powerful standard if implemented effectively, but it's not for the lazy; it requires thoughtfulness wherever it is active. JWT solves a serious and real problem that organizations face at scale, which is why you see it implemented in systems like Google sign-in. Realistically it's not going anywhere. People love criticizing the movement towards stateless tokens on the web, and I find it pretty funny... crawl down the stack from their webheads and you usually come face to face with Kerberos managing auth within their networks...
| |
The article very clearly is about the standard, not a particular JWT library. Server-side session tokens stored in the database worked fine ten years ago, and they work fine today. No need to muck with the load balancer. Stateless tokens are great too, and use two-factor auth when you need that extra layer of security. No need for newfangled standards; HTTP Basic remains a simple and effective way to convey that token. | |
| |
Except there are now many instances where no single database server can keep up with request load. It's not fine in all cases today. Where I work now, a single request from the user goes into a pipeline of requests (some can be parallel, others not)... our SLA is X, and everything that adds up to the total request time counts. Adding even 2-3ms for each service layer to verify session keys is too much, as opposed to < 0.1ms for verifying a JWT. JWT is a structure for stateless tokens... once you have a token, what does 2FA add? Nothing. Also, some algorithms are insecure, so don't use them, or blacklist them... or, better, whitelist the algorithm you do use.
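The "verify locally instead of hitting the database" point can be sketched with the standard library alone. This is a toy HMAC-signed token, not a JWT implementation (no header, no algorithm field); the key name and claim names are made up. The point is that `verify` is a local hash computation with no network roundtrip:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-signing-key"  # hypothetical shared signing key


def sign(payload):
    # Serialize the claims and append an HMAC-SHA256 tag.
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    tag = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + tag


def verify(token):
    # Recompute the tag locally; no session store lookup needed.
    body, tag = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # tampered, or signed with a different key
    return json.loads(base64.urlsafe_b64decode(body))


token = sign({"sub": "user-42"})
assert verify(token) == {"sub": "user-42"}
assert verify(token[:-1] + "x") is None  # corrupted tag fails
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels when comparing tags.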
| |
I've got nothing against stateless tokens. What I'm saying is that it can be a much easier and more effective pattern to add a second layer of security than to add complexity to the first layer (the token). I believe this is like the idea of defense in depth. For example, making signed-in users re-enter their password before performing certain actions may be preferable to introducing cryptography into all sign-in actions. | |
| |
This isn't just for sign-in actions... it's for all API requests, and in some designs pass-through requests on behalf of a user to another server/service. It isn't just used for UI requests; it can also be used for server-to-server/service requests... across data centers. You can do signed tokens/authentication without introducing many potential points of failure.
| |
If you're on Java and using an ORM like Hibernate, then that user will be found in the second-level cache. This will eliminate the need for a database roundtrip for all requests after the first authentication. From that point on that particular user will be retrieved from memory. | |
| |
Which will require session pinning for the load balancer, not to mention, I'm not using Java or a similar ORM. That will only help for a single instance of an application on a single server... not much help when you specifically don't want session pinning. | |
| |
I agree that not everyone is on Java and using an ORM. But is it only useful for a single server? If you have multiple servers then you would also have a distributed second level cache which would eliminate the need for session pinning. | |
| |
Distributed, or duplicated... each server potentially making that DB request... depending on load, adding at least 2-3ms, potentially more. A given request to a single endpoint may need to touch a dozen more, not including resource lookups, and not everything is parallel... or it crosses data centers, from the colocated servers to AWS, etc... it all adds up. A very short-lived JWT mitigates this, as the window for replay is reduced; over HTTPS, by the time you could crack it, that window is effectively gone. The server can verify a signature on a JWT in a fraction of a millisecond... far faster than a DB call, not including replication issues.
| |
For number 2, you could expire them by encoding some identifier based on a hash or key tied to the user object. Change that object and have the server reject the token if that metadata no longer validates.
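One way to sketch that idea, under made-up names (`token_epoch`, the `users` dict, and the `fp` claim are all hypothetical): embed a fingerprint of a per-user value in the token's claims, and rotate that value on logout or password reset so every outstanding token stops validating:

```python
import hashlib

# Hypothetical user store; token_epoch changes on logout/password reset.
users = {"alice": {"token_epoch": 1}}


def token_claims(user_id):
    # Embed a fingerprint of the user's current epoch in the payload.
    epoch = users[user_id]["token_epoch"]
    fp = hashlib.sha256(f"{user_id}:{epoch}".encode()).hexdigest()[:16]
    return {"sub": user_id, "fp": fp}


def claims_still_valid(claims):
    # Recompute from the current user record and compare.
    return claims == token_claims(claims["sub"])


claims = token_claims("alice")
assert claims_still_valid(claims)

users["alice"]["token_epoch"] += 1  # e.g. user reset their password
assert not claims_still_valid(claims)  # old tokens now rejected
```

Note this reintroduces a per-user lookup on each validation, so it trades away some of the statelessness; the next comment's short-TTL approach avoids that.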
| |
Or have really short lived tokens, requiring regular refresh, and don't worry about expiring them... you can then delete the refresh token so it can't be found requiring full re-auth if necessary. OAuth2 + JWT is fine... just whitelist the algorithms you allow and use HTTPS for all communications, even internal. | |
| |
I feel like the argument against #2 is usually purely hypothetical in nature. I really do not have a problem maintaining a small lookup cache for revocations. The argument against doing this tries to take the form of "all server-side state is bad," when in reality it's sticky state and huge object graphs (read: memory consumption) stuffed into session objects that are the real evil. A server with 1GB can hold a lot of JWTs in memory. Probably more than most of the people building services here actually have to deal with.
| |
Sure, revocation lists are relatively small. But they need to be available to every server (replication), be proof against server/service restarts (durable), and checked with every request (highly performant). So, a good revocation list effectively requires a database. Not a trivial thing to implement yourself, and a weighty requirement for an otherwise stateless service. | |
| |
Revocation entries are even smaller than the tokens themselves, since you can revoke by hash (although you should really just revoke by user ID in most cases). Your tokens should generally have a rather short lifetime; then you can keep the entire relevant window of revocations in memory. The implementation is not trivial though, that's for sure.
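The "keep only the relevant window in memory" idea can be sketched like this (the `TOKEN_TTL` value and function names are illustrative). Because every token expires after `TOKEN_TTL` anyway, a revocation entry only needs to live that long, so the store stays small by construction:

```python
import hashlib
import time

TOKEN_TTL = 300  # seconds; tokens older than this are rejected anyway

revoked = {}  # token hash -> time the revocation entry can be dropped


def revoke(token, now=None):
    now = now if now is not None else time.time()
    h = hashlib.sha256(token.encode()).hexdigest()
    # Remember the revocation only for one TTL; after that the
    # token is expired on its own and the entry is dead weight.
    revoked[h] = now + TOKEN_TTL


def is_revoked(token, now=None):
    now = now if now is not None else time.time()
    # Evict entries whose window has passed, keeping the set tiny.
    for h, drop_at in list(revoked.items()):
        if drop_at <= now:
            del revoked[h]
    return hashlib.sha256(token.encode()).hexdigest() in revoked


revoke("tok-1", now=1000)
assert is_revoked("tok-1", now=1000)
assert not is_revoked("tok-1", now=1000 + TOKEN_TTL + 1)  # aged out
```

A production version would evict lazily or via TTLs in Redis rather than scanning on every check, and would replicate the set across servers, which is where the non-trivial part comes in.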
| |
Hmmm, using a database (eg PG) for the authoritative information, with memcached in front sounds like it would be practical for most uses. | |
| |
At which point you should probably ask yourself: "What value is keeping all of my state inside this token providing me?" | |
| |
Probably not. If the Pg instance is replicated, as indicated above, it'll be challenging to keep the Memcached copy in sync. In other words, you can't just use the caching feature of your ORM, you'll need another piece. | |
| |
Thanks, that does need further thinking about. :) | |
| |
Postgres is not a good solution for this kind of data. I'd use Redis, but maybe there are even better products. | |
| |
Could you just have per-server tokens? Wouldn't a single client tend to hit just one server anyway? | |
| |
> Wouldn't a single client tend to hit just one server anyway?

No? Maybe? It depends on your load balancer. Assigning a client to a specific server is "sticky sessions". Many of us don't want to tie a client to a specific server and prefer a completely stateless, 12-factor-style mechanism where any server can serve the client; stateless tokens provide a way to achieve this.
| |
Not to mention the challenges with multi-region replication needs... to do this for every request along a server-server pipeline adds more latency still, since each request to the db means potentially 2-3ms on top of more complex requests, which all adds up. | |
| |
> and stateless tokens provide a mechanism to achieve this

...without revocation. What's wrong with tying a client to a server, or a co-located server? Either they are close enough to share tokens / sync fast, or not?
| |
> What's wrong with tying a client to a server, or co-located server?

Nothing, if you can get away with it. What do you do if your server dies or is overloaded? The 12-factor patterns came to be for services running on ephemeral hosts in cloud environments. Stateless servers mean you can seamlessly serve requests from another server without problems. Sure, you can store the sessions in a shared resource (Redis, perhaps?), but this complicates failover and redundancy and may add latency. Maybe this isn't an issue, maybe it is. If you don't need or want that, then just use normal sessions, for sure. Revocation can be handled (although admittedly not as well as with sessions or stored tokens) through short TTLs and refresh tokens (which are stored, but only need to be looked up when the stateless token expires). It's not perfect, but it's often a good enough tradeoff.
| |
What if you are running dozens of services each specializing in its own domain? Do you proxy each service through a pool of central webservers? Or do you just stand up a central auth server and have each service trust that auth server? | |
| |
The latter makes sense to me. Auth is a cross-cutting concern. | |
| |
If you go so far as to maintain a revocation store that is checked on each request, you might as well just use that same store for full-blown server-side sessions. By your measure, 1GB can store a lot of tokens in memory: 32B tokens + that again as metadata (e.g. user ID + TTL in Redis) = 15,625,000 tokens. | |
| |
No, the session state itself may be orders of magnitude larger than an id. At 4 bytes per id you can store up to 250M ids per GB. But session state might store kilobytes of data per user for roles, permissions, names, descriptions, links, etc. And you're overestimating the necessary size of a revocation store. Only a tiny fraction of your users ever log out or otherwise invalidate sessions via means other than TTL. You're looking at storing just a couple thousand 4-8 byte revoked session ids and TTLs instead of gigabytes of session data. If you have a security breach and need to invalidate all tokens, just reject all tokens with an issue date before the fix. And they all fall off anyway after a week (or however long the TTL is).
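The back-of-envelope numbers from the last two comments check out directly (assuming 1 GB = 10^9 bytes):

```python
GB = 10**9

# Full revocation store: a 32-byte token hash plus 32 bytes of
# metadata (e.g. user ID + TTL in Redis) per entry.
assert GB // (32 + 32) == 15_625_000

# Bare 4-byte session ids alone, with no metadata at all.
assert GB // 4 == 250_000_000
```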
| |
Or better, have a really short lived requirement for server-server jwts (I suggest even 1m, having a new one per request). For client-server a 5 minute refresh is fine, as long as you do a lookup for refresh, so you can expire refresh tokens, requiring a full re-auth. | |
| |
I found this video incredibly informative about how to effectively implement JWTs, along with security advice and a nice refresh-reissue process: https://youtu.be/mecILj3p4VA?t=2m8s | |
| |
> (1) Criticizing vulnerabilities in particular JWT libraries, as in this article.

The purpose of this article is to criticize the standard, not particular libraries.