Bypassing GitHub's OAuth Flow with a Head Request (teddykatz.com)
470 points by fanf2 on Nov 7, 2019 | 98 comments



* 2019-06-19 23:28:56 UTC Issue reported to GitHub on HackerOne

* 2019-06-19 23:36:50 UTC Issue confirmed by GitHub security team

* 2019-06-20 02:44:29 UTC Issue patched on github.com, GitHub replies on HackerOne to double-check that the patch fully resolves the issue

* 2019-06-26 16:19:20 UTC GitHub Enterprise 2.17.3, 2.16.12, 2.15.17, and 2.14.24 released with the patch (see GitHub’s announcement).

* 2019-06-26 22:30:45 UTC GitHub awards $25000 bounty

I'm so used to disclosure articles where the timeline looks very different: communication is lacking, there's pushback on the bounty, etc.

Refreshing to see that sometimes the process works as everyone hopes it will.


I mean, it’s a total auth bypass in GitHub by simply sending a HEAD request, with a one line fix. The turnaround time shows that they have the ability to evaluate a serious report through HackerOne quickly, they have a dedicated engineer who could get in and make the one line fix, and they have the ability to deploy new code quickly. All good things!

I am mostly impressed that they paid out $25k in under a week. That’s a sign of not just a good engineering setup, but some good bureaucracy as well.


Perhaps it does merely reduce to "being competent". But the contrast with nearly every other vendor out there is stark.

Been managing GHE for years, and I have been consistently impressed with Github's code quality, competence, and even product management (nearly every time I think, "there should be a way to...", there already is).

From the outside, at least, they look like the model to emulate for this sort of application.


You're both right; competence like this is in fact really damn impressive.

(As a software engineer, I have come to think that we've computerized so much of social life that our economy literally could not afford it if the software weren't mostly crap.)


How long until someone is coding up a minivan?

https://dilbert.com/strip/1995-11-13


It doesn't say they paid out yet, just that they 'awarded' the bounty, which could just be GitHub telling HackerOne they intend to pay without actually having moved any money yet.


Regardless, the fact that someone at Github can approve such a large payment so quickly is still impressive.


I don't think $25k is a large payment in GitHub's eyes. There are much smaller companies that don't even blink at spending ten times as much.

And this was definitely approved before they went out and told people what they would pay for different bugs.


Exactly what I thought too. Just for once, a bug bounty program worked as intended. Bug was fixed in a timely fashion instead of being denied or ignored, bounty was awarded, disclosure was made, life gets better for everyone. Nice to see.


"Just for once, a bug bounty program worked as intended" I would just say, you don't generally hear about it when it works as intended (as it usually does).

Your sentiment, though, is why it's nice when it is publicized that the system is working as intended, like it did in this case.


I run a bounty program and I like to think my company does a good job, too. I received a report of an RCE in a desktop app and got a fix released in 72h. We have a bounty committee that reviews payouts, and we paid in one week.


It's good PR too. This really makes me appreciate GitHub that much more.


This is why I hate it when people think there are N possible cases, and in a cascading if/else construct they only check for N-1 conditions and assume the rest all fall within the Nth case. Thing is, application logic in the real world is often fuzzy; it is entirely possible to overlook certain possibilities (which may or may not arise from bugs elsewhere), and even if you have considered all cases at present, future changes to other parts of the code may inadvertently introduce new possibilities.

Personally, I always check for all N conditions, and raise an exception when "the impossible" happens; this way, the application would fail noisily instead of silently doing something unintended. However, sometimes other people would "optimize" away my last condition, which is very frustrating.
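For example, a minimal Ruby sketch of that style (the handler method names here are made up):

  case request.request_method
  when 'GET'  then render_authorization_page   # known case 1
  when 'POST' then grant_permissions           # known case 2
  else raise "unexpected HTTP method: #{request.request_method}"
  end

The `else` should never run, and if it does, I want the exception, not a silent fallthrough into the dangerous branch.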

Edit: Checking for all N cases helps with readability too. Otherwise one has to figure out what exactly is the remaining case by exclusion, which is sometimes not at all obvious.


A common reason people don't do it the way you're suggesting is dogma about code coverage.

If the code has branches for each possible case, and then an "else" to cover anything else (which raises an exception indicating something impossible has happened), there is often no good way to write a test that covers the "else" logic, since the coder doesn't believe there's a way to end up going there. Or if there is a way to force it to happen, it involves jumping through hoops that are too much effort.


Some languages have good alternatives to this. Examples:

* Rust has an unreachable!() macro that tells readers and static analyzers that it should be dead code, but panics if executed at runtime. https://doc.rust-lang.org/std/macro.unreachable.html

* Languages with pattern matching will either reject the program at compile time if the set of clauses is non-exhaustive, or crash at run time if a case is missing. That is, if you don't have a catch-all clause.
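Ruby itself has since grown a version of the second option: `case/in` pattern matching (experimental as of 2.7) raises NoMatchingPatternError at runtime when no clause matches, unlike `case/when`, which silently falls through. A sketch with invented handler names:

  case request.request_method
  in 'GET'  then render_authorization_page
  in 'POST' then grant_permissions
  end
  # a HEAD request raises NoMatchingPatternError here instead of
  # being quietly lumped in with POST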


Cool language feature! I’ve done this in other languages using a code style policy where all “naked” else statements and ‘default’ inside a switch statement must throw an uncaught exception or otherwise deliberately crash.

    if(option==1)
        do_something();
    else
        do_something_else();
...must then be rewritten as:

    if(option==1)
        do_something();
    else if(option==2)
        do_something_else();
    else
        abort();
This kind of defensive coding would have helped to prevent the famous ‘goto fail’ bug from a few years ago.


TypeScript has the `never` type, which is meant for values that can never occur. You can combine that with other things to get something _like_ `unreachable!`, but it's a bit hard to get working right, to be honest.


So true! Another reason against dogmas, especially concerning code coverage.


In general I’d say testing the HTTP method inside a controller action is a smell, and in this particular case I’d suggest GET and POST should never have been handled by the same function. If GET did nothing but render the view, sending a HEAD would be harmless.
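Something like this, where the route/action names are hypothetical but the shape is ordinary Rails:

  # config/routes.rb
  get  '/login/oauth/authorize', to: 'oauth_authorizations#show'    # just renders the consent page
  post '/login/oauth/authorize', to: 'oauth_authorizations#create'  # actually grants access

Then a HEAD that gets routed like a GET lands in `show`, which has no side effects to exploit.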


Yeah, the "if GET" code raised my eyebrows a bit; I know Rails can route GET and POST to separate functions.


While you're not wrong, if I explicitly tell the framework to only accept GET and POST, I consider it a bug if any other method "gets through", particularly since the docs don't say that can happen.


As an average Ruby developer, within the first 100ms of seeing:

  if request.get?
    # serve authorization page HTML
  else
    # grant permissions to app
  end
I knew that smelled. Why would you put a serious statement inside the `else` block?


Could you explain that further?

Why would you avoid a serious statement in an else?

Would this be more acceptable, with the negative on the if?

  if !request.get?


I assume the correct pattern for finite possible inputs would be something like

   if request.post? || request.put? # || more...
     # do dangerous thing
   else
     # do safe thing
   end
This of course ignores the smell others have pointed out: with Rails you would ideally have a totally different method called in the first place for a GET vs. a POST, since you can have different controller actions for them.


Because the unknown gives me anxiety.

Robot:

  if !request.get?
    do_nothing
  else
    kill_one_person
  end
What are the possible states for NOT `!get?`? I have to think twice, or spend all day thinking.

The problem is that a real-world program may have exceptional states you can't think of. Say you write down a finite set of states, {A, B, C, D}:

  if A
    # For A, do nothing
  else
    # for B or C or D, kill a man
  end
  
It seems sure that if it's not A, it's going to be B|C|D. But what if it isn't? You could get an exceptional Z, which will kill a man too. You know A better than anything "else".


Nested if/else constructs can have up to 2^N possible paths, where N is the number of conditions. That can become very hard to test. If you have more than two levels of nested if/elses you should probably refactor, especially if it is security-sensitive code. The mental model you have of your code is as complex as the code itself, and there is no way you will be able to accurately model something more complex than what you can see all at once. Scope reduction is super important if you want reliable code, and function extraction is one way to achieve it. It also allows for much cleaner test setups.


What a fantastic bug. Rails routes HEAD to GET, but .get? in a HEAD handler is false. Github uses a postback to /authorize to complete OAuth, distinguishing between the original GET and the completing POST with .get?. Rails sees the HEAD request and assumes it's a POST (because it's not .get?) --- and also assumes that, because it's a POST, it's CSRF protected, which it is not, because it is not actually a POST.


It's bad assumptions all the way down.

Also easy to bake into basic static analysis going forward. Knowing GitHub, I'd think a bug like this probably won't show up again in the near future.


> It's bad assumptions all the way down.

It always is. The moment you can shine light between your model of what the code does and what it actually does you have at least a bug and maybe much worse.


It also looks like a Rails bug, unless Github routed the HEAD requests themselves. In that case, it’s a shortcut that bit them in the ass.


Rails is not assuming that it's a POST. The conditional logic in the controller explicitly checks for a get, and then else's otherwise. This is an unfortunate consequence of not having routing go to specific methods, e.g. `def post():`.


Rails routing does go to specific methods ordinarily.

What's funny here is, apparently, .get? used to return true for HEAD requests.


> So Rails (along with some other web frameworks) implements a clever hack: it tries to route HEAD requests to the same place as it would route GET requests. Then it runs the controller code, and just omits the response body.

I wonder if Rails will fix the default implementation of handling HEAD requests, similar to how they overhauled the default handling of mass assignment of parameters in 2012 (coincidentally, a flaw also exposed via Github) [0]? Obviously, the latter issue posed a much bigger risk to the average Rails app.

[0] https://news.ycombinator.com/item?id=3666564

edit: fixed URL


> I wonder if Rails will fix the default implementation of handling HEAD requests

It absolutely should. Nobody expected Rails to covertly turn HEAD into GET. There should be an explicit "get_or_head" method in routes.rb.
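Rails has no `get_or_head` helper today, but `match` with `via:` can already spell the intent out explicitly (path and action names invented):

  match '/login/oauth/authorize', to: 'oauth_authorizations#show', via: [:get, :head]

Any verb not listed in `via:` would then fail to route instead of silently riding along.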


HEAD isn't turned into GET, it's routed the same as a GET request. If it were actually turned into a GET request, then `request.get?` would have been true, and this exploit wouldn't have happened.

But I do agree with your point that routing of HEAD requests should be explicit.


> HEAD isn't turned into GET, it's routed the same as a GET request

Pardon my poor wording; I intended to mean this too.


Rails actually did turn the HEAD into a GET back when this code was written - https://github.com/rails/rails/blob/e17e25cd23e8abd45b170646...


Considering that GitHub is one of the main Rails maintainers, probably. That might actually be how they fixed it internally and will push it upstream.


ISTM it's reasonable not to force everyone to write a separate HEAD handler everywhere. The main result of that would be lots of 405 errors. This was a really dumb mistake at the application level. Maybe the library needs better documentation, but it shouldn't be made less functional.


Agreed. This is a corner case, because Rails encourages you to route different request verbs to different actions via the `resources` routing scheme. The situation played out like this because GET and POST were handled by the same action, which should not have been done and in general is not something Rails encourages.
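For reference, the encouraged shape looks roughly like this (controller name hypothetical):

  resources :authorizations, only: [:new, :create]
  # GET  /authorizations/new  => authorizations#new    (show the form)
  # POST /authorizations      => authorizations#create (perform the change)

Each verb gets its own action, so a mis-verbed request can't fall into the wrong branch of a shared method.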


Your link doesn't work.


In the proof of concept[1]:

1. Send an ajax HEAD request, which grants the permissions.

2. However, since the HEAD response doesn't have valid CORS headers, you can't read the 'code' parameter from the HEAD response.

3. To get around CORS, the proof-of-concept redirects the user to the same OAuth url again.

4. Since the user has already authorized the scope (from the previous HEAD request), Github automatically redirects back to the redirect_uri (the proof-of-concept page) with a new 'code' that the author can then use.

So I have two questions:

a) If I have 3rd-party cookies disabled, will the `credentials: 'include'` part of the ajax HEAD request still work?

b) Why does Github automatically redirect back the 2nd time without asking the user to authorize again? I thought it was best practice to ask the user to click authorize every time even if they're authorizing the same scope they previously authorized?

[1]: https://not-an-aardvark.github.io/oauth-bypass-poc-fbdf56605...


Why would you want to ask the user to authorize an app they already authorized? IMO that would be a bug and definitely unexpected behavior.

It would give the end-user the impression the app was not authorized when in fact it was, which could create its own issues.

So now you’re designing a whole different page which shows the app is authorized with a POST form just to do the redirect?

And if the entire purpose is to try to avoid any place that you can GET a redirect to the app, you better hope that’s the only place where it could happen.


Hmmm, as a user I guess I would expect the opposite behavior, and I'd think the question should be: why wouldn't you want to ask the user to authorize an app they already authorized?

A client should usually only redirect the user to the OAuth authorization endpoint when:

(a) they don't have an access_token (e.g. they haven't asked for authorization yet)

(b) their current access_token or refresh_token got rejected (e.g. the resource owner revoked access, so they need to reauthorize)

(c) they want to change scope of access

So the situation where you redirect the user to the OAuth authorization endpoint when they've already granted access and with the same scope is usually when something strange is going on (like when the client loses your access_token or this exploit happens).

So why would you have different behavior (not asking to click Authorize) just to streamline the atypical scenario? Especially when it leaves open the possibility to be exploited[1], create confusing situations[2], or be mis-categorized as an opt-in event when it isn't[3]. As a user, I'd like to think that I'd always get the same behavior every time I'm redirected to an OAuth authorization endpoint, because the OAuth provider doesn't have any idea what the situation is on the client side and assuming it's okay to auto-redirect without asking for my consent again is dangerous.

[1]: This HEAD bug would have not been able to be exploited in the wild if the auto-redirect wasn't in place.

[2]: In clients that allow users to have multiple accounts (where each can hook-up a github connection), auto-redirects creates confusing revocations across the multiple client accounts. For example, Bob authorizes a github connection to account A, then when he switches to account B wants to authorize github again. If auto-redirect is in place, then Bob wouldn't see anything (since it is a transparent redirect), but account B's connection would start working and account A's connection would silently stop working. If there were an authorize click, Bob would see the same behavior for account A and account B, and probably be able to put together that if he clicks authorize on one account, the other stops working. Auto-redirects makes Bob figuring out what's going on really hard.

[3]: Privacy regulations are requiring opt-in more and more, and OAuth is seen as a fairly good mechanism for proving someone "opted in". If a regulation requires "consent" to have an expiration (e.g. an accounting app must require the user to re-authorize bank access on a yearly basis), then when the user is asked to "consent" again and gets auto-redirected, they can later claim they were never asked to consent, because they never clicked authorize again after a year.


You raise good points. I just wonder -- and I don't know enough about OAuth to say -- is there really no other way for the attacker to discover their token, since Github was still in a state where they thought the attacker had been authorized?

I think you should post this on Github's HackerOne, maybe they'll agree and pay you some lunch money!


> I thought it was best practice to ask the user to click authorize every time even if they're authorizing the same scope they previously authorized?

I've never seen an authorization server that works that way. Because I wanted to make sure I wasn't missing anything, I went back and reviewed all the guidance I could find on the matter, and I can't find any reference to the suggestion you make.


Hmmm, is there guidance saying that you should auto-redirect?

The spec says, "If the authorization server observes multiple attempts to exchange an authorization code for an access token, the authorization server SHOULD attempt to revoke all access tokens already granted based on the compromised authorization code."[1]

If an authorization server implements the recommended behavior and an auto-redirect, doesn't that mean GET requests can no longer be considered safe and side-effect-free, since they may cause revocation of previously granted access_tokens?

[1]: https://tools.ietf.org/html/rfc6749#section-10.5


I did not get the impression that there is an attempt to reuse the authorization code, but rather that a completely new flow was initiated.

Reusing the same authorization code should of course be considered suspicious.


Ah, yes, that's correct. I guess I was assuming that once you issue a new authorization code, the authorization server should invalidate any previously issued access_tokens/refresh_tokens because the new authorization code represents a new grant. Or are you supposed to wait until the code is sent to the token endpoint (to get the access_token/refresh_token) before invalidating old access_tokens?


For web app clients, you definitely don't want to invalidate other access tokens in case the user has other sessions open. For native clients, the calculus might be different.


Facebook does this when you sign into an app you've previously authorized, in native application flows. Microsoft just added this recently as well. We do recommend it to other IDPs.

This is only enforced for public clients, though, that don't have verifiable redirect URIs - so web sites are OK.


I've only ever implemented web-based OAuth flows, so I haven't seen that behavior.

Do they also disallow the use of refresh tokens? It would seem that allowing refresh would let a malicious app get around the requirement.


If you force a check at the start, refresh tokens seem fine. That's much preferable to popping UX every hour, or whenever the access token expires.


Then what's the point? The malicious app would just keep refreshing so you never see the confirmation again, anyway.


> Why does Github automatically redirect back the 2nd time without asking the user to authorize again?

Other comments here seem to disagree with this, but I think you are correct. In all of my applications with OAuth, I always ask the user to confirm, even if they already authorized. I figure that user should be very aware that they are authorizing a 3rd party.

Now, I've never actually seen another system that worked this way (as was pointed out), but I think they are just doing that for user convenience without realizing the security implications.


Only step 1 is important, that's where the bug is. The rest is routine to get a code for your already authorized app.


I found a very similar bug in my code, though not in such a dangerous place! So thank you Teddy Katz and GitHub for helping me to fix a mistake in a less stressful way. This is why sharing detailed vulnerability reports is such a good thing.


It took all of 8 minutes from the bug bounty hunter reporting to Github confirming the finding. That's amazing.


Interesting bug.

It makes me appreciate working with languages that support pattern matching like Elixir.

I think this type of bug would have been preventable with pattern matching, because that single controller function would likely have been split into 2 functions that each matched against a specific HTTP method, and when the unmatched pattern came into play (the HEAD request) it would have failed to find a match and thrown an error instead of processing it as something else.


This class of bug (CSRF bypass via route confusion) is probably more common in Phoenix apps. I’ve found a handful of apps vulnerable to this issue with Sobelow.

People create (for example) a get ‘/profile’ and a post ‘/profile’, and the action intended to correspond with post requests really just pattern matches against params.

I’ve also seen at least one app implement this properly, matching against the HTTP method as you described.


To be safe from this in Phoenix, that would look like this, right?

    def profile(conn = %{method: "GET"}, params) do
      # ...
    end

    def profile(conn = %{method: "POST"}, %{"user" => user_params}) do
      # ...
    end
This would be in a case where your router looks like:

    get "/profile", UserController, :profile, as: :user
    post "/profile", UserController, :profile, as: :user
That's what I'm doing in my code base at the moment, mainly thanks to Changelog open sourcing their platform; you can see that pattern being used here: https://github.com/thechangelog/changelog.com/blob/f9b0a7587...

The above seems like the natural way to do it with Phoenix once you get the hang of pattern matching.


Yep, that’s the way!


Presumably a SameSite=Lax cookie wouldn't have helped here, because

> "Safe" HTTP methods include "GET", "HEAD", "OPTIONS", and "TRACE", as defined in Section 4.2.1 of [RFC7231].

But I wonder if GitHub could issue a separate SameSite=Strict cookie here as a defense-in-depth measure, to cause the initial part of the PoC (the HEAD request with credentials: include) to fail?
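As a sketch of that idea (hypothetical cookie name; Rails 6+ supports the same_site option on cookies):

  cookies[:oauth_strict] = {
    value: SecureRandom.hex(16),
    secure: true,
    same_site: :strict
  }

If the authorize action then required this cookie to match a server-side value, the cross-site fetch() would still carry the Lax session cookie but not this one, and the grant would fail.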


Wow, why would the browser allow a HEAD request in the first place without explicit confirmation from the github server via CORS headers? This is very strange.

It also reminded me of why "match" was removed in the first place - http://homakov.blogspot.com/2012/04/whitelist-your-routes-ma... - you could route any POST request via GET, thereby bypassing CSRF checks while still getting inside the controller.


Why's it strange? In a properly designed application, it shouldn't do anything different from the no-cors requests you can already achieve with e.g. an <img> element. HEAD, GET et al. are all supposed to be "safe methods" per the spec.


You can't achieve a HEAD request with <img> elements, can you?

In a Rails environment, the only way to upgrade to non-standard methods (PATCH, PUT, etc.) was always to supply _method and pass CSRF protection first.

This trick is clearly a bypass because it instructs the browser to make a HEAD request, something it never did before. If you try to make another non-standard request, you will get this error:

Uncaught (in promise) TypeError: Failed to execute 'fetch' on 'Window': 'PATCH' is unsupported in no-cors mode.


No, but you can achieve a GET, which is also a "safe request method" per RFC 7231. All of these requests are supposed to be free of side effects.

I'm not a fan of the hacks that exist to allow HTML forms to emulate other request methods. They're off by default in ASP.NET Core MVC, which is good IMO.


"should". Real apps are much more complicated than what they are in theory. In theory all CSRF protections must be based on an auth token, in practice they routinely rely on the method or referrer.

For this reason browser upgrades must be extremely careful to not break older protections.

That's why you cannot set your own Referer header (which is supposed to mean nothing), you cannot supply Content-Type: application/json unless instructed by a CORS preflight, etc. etc.

Here, backwards compatibility was clearly broken. It was never possible a few years ago to send HEAD with regular XHR or <img>s. Web standards owe $25k back to github. Yes, github's routing was flawed, but it wasn't exploitable until browsers allowed HEAD in fetch() no-cors mode.


>For this reason browser upgrades must be extremely careful to not break older protections.

It's too late for that though. We've already had e.g. Referrer-Policy come along and break existing CSRF protection that treated an empty/missing Referer header as "safe". You're right that bad assumptions are going to be made all over the place in the real world, but browser vendors shouldn't shoulder all of that responsibility.

You have to draw the line somewhere, right?


I worked a lot with client-side bugs earlier, and clearly this trick has crossed that line. Browsers cannot say in one scenario that Content-Type: application/json is unsafe, and in another allow a completely unnecessary HEAD method, which nobody ever used, to be sent from a 3rd-party JS page.

Seriously, who needs HEAD in their client-side development? It's a purely technical method without a clear use pattern. It's not even valid in <form method="">, so why would it be valid in fetch()? Oh, and PATCH/PUT are still invalid in no-cors fetch().

We're lucky that this code pattern is only common in Rails. Otherwise it could open a whole class of bugs.


Wait, why is it useless? At minimum there is the example cited in the article of checking file length without having to download the actual file, but more generally, if headers have any value, and they must since they exist, why can't you imagine situations where you just want to see the headers without downloading a giant body?


For client-side code running on a 3rd-party page, there is no use case for letting it send HEAD to you. Only GET and POST should be allowed by default, and other methods only through a CORS preflight. That was the premise of CORS. They broke it.


If GET is considered safe, HEAD must be as well. It's complete nonsense to claim HEAD is dangerous while GET is safe. Semantically, HEAD is literally just "do a GET but don't give me the body", so it should have the exact same server-side behavior as GET, except with the available optimization of not generating the response body (since it will be discarded).

The only reason this bug existed is that Rails treated HEAD as GET in some cases but not others. The sensible behavior here would be for the router to convert the request to GET before handing it to the route, unless the route explicitly specifies :head. A route that wants to explicitly support HEAD (e.g. by skipping the response body) should explicitly specify :head on the route.
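A minimal Rack-middleware sketch of that convert-at-the-edge idea (not what Rails actually ships, just the shape):

  class HeadToGet
    def initialize(app)
      @app = app
    end

    def call(env)
      return @app.call(env) unless env['REQUEST_METHOD'] == 'HEAD'

      env['REQUEST_METHOD'] = 'GET'            # the app only ever sees GET
      status, headers, body = @app.call(env)
      body.close if body.respond_to?(:close)   # per the Rack spec
      [status, headers, []]                    # HEAD responses carry no body
    end
  end

Downstream code like `request.get?` then answers truthfully, and the HEAD/GET handling can't diverge.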

But the existence of this rails bug says nothing at all about the security of HEAD in general.


> The only reason this bug existed is because Rails treated HEAD as GET in some cases but not others

It is the main reason, yes, but not the only reason. If it wasn't possible to craft a cross-site HEAD (which devs use in real life, like... never?) the bug would stop right there.

With an extra method in play, the if/else logic turned faulty. I would argue 90% of web devs don't even remember HEAD exists or what it means. Reasonably so, because it's almost never used.

IMO order of blame: 1) rails 2) browsers 3) github code relying on .get?


> If it wasn't possible to craft cross-site HEAD (which devs use in real life like.. never?) the bug would stop right there.

You're still focusing on the wrong thing. HEAD requests are largely obsolete at this point, yes, but that doesn't mean browsers would be right in changing the semantics of a HEAD request. The problem here isn't that browsers use the same security model for HEAD that they do for GET (as that's absolutely correct), but that Rails decided to only partially support HEAD. Another simple fix for this bug would have been for Rails to give no special behavior to HEAD at all, so any route that doesn't explicitly specify :head wouldn't be used for a HEAD request. The fact that Rails decided to deliver HEAD requests to a GET route without changing the request to actually appear as a GET request is a serious design mistake, and not one that browsers are responsible for.


Ok I can agree with that. This must definitely be discussed in rails/rails and fixed by design. I probably overplayed my concern with browsers.


Making sure the referer is correct is good enough. Other sites can't without permission send requests with a referer belonging to your site, and that won't change, unless the spec changes, and that's very very unlikely.


How about an <img> on the victim's page that triggers some CSRF with a GET?


OAuth is used in places other than browsers; HEAD has always been possible there.


I'm also not a fan of them, but I am even less a fan of the HTML spec for forms not supporting all methods.


But if your form is an advanced search form, for instance, it should lead to a GET request in my opinion.


The author proxied the request through his own server in order to bypass CORS restrictions.


That would defeat the point because then the server would need to know your authentication cookie. I can’t see the PoC but I doubt this is how it works.


There was no proxy: view-source:https://not-an-aardvark.github.io/oauth-bypass-poc-fbdf56605...

  const authUrl = `https://github.com/login/oauth/authorize?client_id=${CLIENT_ID}&scope=read:user&authorize=1`;
  fetch(
    authUrl,
    {
      method: 'HEAD',
      credentials: 'include',
      mode: 'no-cors'
    }
  )


There was a proxy, but I may have misunderstood what it was being used for

    fetch(
      // For the proof-of-concept, use a proxy to get around CORS. This is only necessary because the proof of concept runs
      // clientside in a browser; an alternative would be to just send the code to a server and do the request there.
      'https://cors-anywhere.herokuapp.com/https://github.com/login/oauth/access_token',
      {
        method: 'POST',
        mode: 'cors',
        headers: {
          Accept: 'application/json',
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          client_id: CLIENT_ID,
          client_secret: CLIENT_SECRET,
          code
        })
      }
    )


I think HEAD requests are how the browser gets CORS headers in the first place.


I believe you are thinking of the OPTIONS preflight request.


Ah, I guess so.


This is the reason I don't use `else` much and prefer explicit matching.

`if/else` is pretty much only used for booleans with exactly 2 states.

Not like this issue, but the use of `if/else` can lead to things like:

  loop do
    # do thing and increase i
    break if i == 10
  end
If `i` jumps from 9 to 11, we never exit the loop.

In a language like Erlang, writing a conditional forces you to handle all the cases with pattern matching, which I think may help reduce the number of unhandled-state errors.


Defensive code would have you write 'break if i >= 10'.


What is it about OAuth’s design that provides such an incredibly diverse number of failure modes?

I feel like it’s the most common critical vulnerability that crops up on every major site in one form or another.


"What is it about OAuth’s design that provides such an incredibly diverse number of failure modes?"

A solid lack of understanding at all layers of the OSI model, especially the first three.


Did you mean the top 3 (Session-Application) not first 3 (Physical-Network)? This is deliciously ironic if not.


And this, folks, is why you should always use separate sessions for different websites, and remember to log out of sites you aren't using.


Very cool, thanks for sharing. I'm a junior dev who recently implemented OAuth in a Rails app, and this is instructional.


Ohhh, so that's how my github account was stolen.

One day I received an email saying that a third-party app had been added to my github account; then, in an instant, it said my password had been changed, keys were stripped, etc.


If the folks at GitHub are half-decent engineers (and everything I've seen suggests they know what they're doing), they'll have request logs that would make it very very obvious someone had done this.

Would advise sending a support request in, if you think this happened to you.


I don't believe it is possible to change your password with the GitHub API, and that action also requires reauthentication.



