Hacker News
Zero-click, wormable, cross-platform remote code execution in Microsoft Teams (github.com/oskarsve)
1307 points by Tomte on Dec 7, 2020 | 317 comments



This reminds me of finding and trying to report a bug in Internet Explorer 5.5 20+ years ago (not a difficult task). To report a bug, I had to pay. Yes that's right, I had to put in a credit card, and pay $100.

If it was deemed to be a real bug, I would be refunded my $100. If it wasn't, well, that should teach me for wasting their time.

Guess the folks running the bug program got promoted.


Oh that's nothing.

When they introduced IE7, they broke ClickOnce launchers all around the globe with the new download prompting. I raised a defect with my MS Partner support dude and with normal MS support. All they managed was a registry fix, shipped out to turn on an old flag that had been removed from the UI but was still in the code inside IE. I did the diagnostic work to get that far.

After arguing for months with various support people at Microsoft I managed to get hold of people on both the IE and CLR teams and they both pointed at each other and refused to fix anything blaming the other team.

They called me every 6 months to ask me to close the ticket and I denied it because it wasn't fucking fixed. Eventually they stopped calling when Microsoft Connect was shut down. I wonder how many millions of issues they solved at that time!

Oh no wait, the issue still exists in IE11. They fixed it in old Edge.

This was a manual registry fix we had to deploy to 20,000 users at over 500 companies for 10 years.

Eventually we rewrote the software so it didn't use ClickOnce, instead passing context to the application via a shell protocol handler (much like Slack does).
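A shell protocol handler of the kind described is typically registered under HKEY_CLASSES_ROOT. A minimal sketch, assuming a hypothetical `myapp:` scheme and install path (not the actual product's registration):

```reg
Windows Registry Editor Version 5.00

; Register a custom URI scheme so the browser can hand "myapp:..." links
; (and any launch context encoded in them) to the desktop application.
[HKEY_CLASSES_ROOT\myapp]
@="URL:myapp protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\myapp\shell\open\command]
; %1 receives the full URI, which the application parses for its context.
@="\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\""
```

This is the same mechanism Slack (mentioned above) and similar apps use: clicking a `myapp:` link in any browser launches the registered executable with the URI as an argument, sidestepping ClickOnce entirely.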

Incidentally we're no longer an MS Gold partner and have no certified staff any more. This is not a coincidence. They did a shitty job and like hell we were paying any further. Amazon got our business in the end.

The issue?

You can't set window.location.href to a ClickOnce activation link because of a race condition in the download bar in IE.


I guess I'm old enough to only ever have known this side of MSFT. I see them getting positive reviews (for various products) here on HN. But stories like yours are what I think of when I hear the name Microsoft thrown around at work. It's a company to be avoided at all costs.


Same here. Microsoft acts like an ex that wants you to believe they've changed, but you always regret trusting them. Windows still has a lot of crap and has actually gotten worse in a few ways (like the spyware and ads).

Windows is basically a glorified games console UI for me now but thanks to my actual consoles and VM-able games I don’t really miss Windows at all.

I hadn't used anything by Microsoft for 10 years until they bought GitHub. Fuck.


> Microsoft acts like an ex that wants you to believe they’ve changed but you always regret trusting them

This is the most apt analogy I've yet seen. I went "back" to Microsoft in the mid 2010s right around the time they started gaining a reputation for sucking less. Big mistake. But at least it gave me enough perspective to realize that the competing non-microsoft products and OSes (even the FOSS ones!) really aren't lacking as much as you'd expect.


Microsoft OSS stuff is generally good, VSCode, .NET Core and that ecosystem. The enterprise stuff (SFB, Sharepoint, AD, etc.) is a pile of garbage I wouldn't touch unless my life depended on it.


SharePoint has tortured people for 17 years and it still sucks. Back in 2003 I had the unfortunate privilege of being involved in many "intranet" projects to customize SharePoint to customers' needs. It was horrible from a technical point of view.

With enough marketing power you can sell almost anything.


Oh yes, SharePoint, the worst nightmare from when I worked at a big corp years ago. I wonder if there really are experts somewhere who understand how it's supposed to work?


There is a reason it is called "Scarepoint," even internally.


I remember the first time I was shown a SharePoint

"This is what we use to keep track all purchase orders! It's GRRREEEAT!"

Neat, so it's a big Excel spreadsheet we all use at the same time? I can get into—hang on, some of this stuff doesn't have a da—should I worry about—are 'Michel' and 'Michell' the same—what about 'Msh', and why are her rows purple?

"Old, all old. If someone from lab needs something, they will tell you."

But I still gotta—

"Everything in the SharePoint."


^ that. If you had told me ten years ago that MSFT would be championing the OSS space, I would have assumed you were trolling.

With the acquisition of Github and NPM, MSFT might just be the new ally of the open source movement.


Buying things up means monetising them aggressively (after all, that is what acquisition is for). This is the opposite of what an ally would do, it is an impending sign of corporate oppression. Give it a few years.


Acquiring everything doesn't make it an ally, merely an owner or dictator of policy.


Microsoft (and other large companies, surely) has a huge problem dealing with issues where multiple teams share responsibility. There's no supported way to use DirectX (or Direct 2D or DXGI or ...) from .NET. They know there's relatively large demand for it, but the .NET team says "I'll forward that to the DirectX team" and the DirectX team says "we just do C++ over here." The default response to cross-cutting concerns should be that both teams take responsibility but right now neither team will take responsibility. Another corollary to Conway's Law.


I've always heard the phrase, "if two people are responsible, no one is responsible".


> There's no supported way to use DirectX (or Direct 2D or DXGI or ...) from .NET

Wow, they’re still treating .NET as a second class citizen?? That’s one of the biggest reasons I jumped off the MS boat.

At least Apple is going all-in on Swift and actually using their latest tech in core parts of their OSes (unlike MS and things like WPF etc.)


Well you can’t expect Microsoft to rewrite all their software in the framework of the week, right?


That would be stupid. /s?

They could have an official .NET wrapper for DirectX, but then it's not "direct" :) anymore.

Of course, then the next question is whether it works on Mac/Linux/... in .NET Core

which is in conflict with their inner lock-in tendency


Same experience with IE bug-reporting here. I reported a bug in May 2015, put up a little example page in order to show them the problem:

http://bjarneh.github.io/ie/index.html

Strange bug where input fields added to the DOM containing non-ASCII characters (whether you escape them or not) trigger 'input' events. So the example page, loaded in IE, will already have triggered 2 'input' events, since the last 2 input elements contain the letter 'å' (escaping it as an HTML entity makes no difference).

After going through the trouble of submitting the bug, nothing happened. About 6 months later I got an email saying the ticket had been closed, as they were not able to reproduce the bug. The bug is still there today, 5 years later.


If that event registers as user interaction you might be able to make a minor vulnerability out of it, for instance to create pop-ups without interaction.


Yes that should be possible. Unfortunately I was in the opposite position, i.e. I had to make this browser look normal by hiding all of its misbehaviors, not try to exploit them :-)


Yes but if you can find a way to exploit it you can file it as a security issue and get Microsoft to fix it or even pay you!


Ah, that's the loophole in Microsoft's bug-reporting. It's not just hard to file bug reports, you have to find ways to exploit every bug in a way that makes it a security issue :-)

> and get Microsoft to fix it or even pay you!

That would be nice...


I feel for you. At least it's good to know that, at this point, IE11 is likely to die before we do :)


The sad news was that this browser was the recommended browser to use inside the company as it worked best with "Sharepoint".

Basically, a total hack was required for every AJAX request that could return input fields with non-ASCII characters, as the page itself listened for input events on those elements. I.e. we basically had to wait for the AJAX elements to be added to the DOM (removing any 'input' listeners as they were), then add the same event listeners back after every AJAX call, etc. Not exactly an elegant solution...
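The listener-juggling described above can be sketched roughly like this (a minimal, hypothetical sketch - `insertWithoutSpuriousInput` is an invented name, not the actual production code; it assumes IE dispatches the spurious 'input' event synchronously while the new nodes are inserted):

```javascript
// Workaround sketch for the IE bug above: IE fires a bogus 'input' event
// when nodes containing non-ASCII text land in the DOM, so detach the
// listener first, insert, then re-attach on the next tick.
function insertWithoutSpuriousInput(target, listener, insertFn) {
  target.removeEventListener('input', listener);
  insertFn(); // IE's spurious 'input' fires here, with nobody listening
  setTimeout(function () {
    // re-attach once the spurious event has already been dispatched
    target.addEventListener('input', listener);
  }, 0);
}
```

In the real page, `target` would be the container receiving the AJAX-rendered fields and `insertFn` the code writing them into the DOM; the point is simply that no listener is attached during the window in which IE misfires.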


> You can't set window.location.href to a ClickOnce activation link because of a race condition in the download bar in IE.

I don't even... shakes head in dismay


It's even worse when you realise I spent ten years convincing tens of large, angry enterprises that they needed to deploy a registry fix to make our web-based software work.

I'd put the cost of this bug for us alone in the $100k range.


Were these deployed on prem on Windows servers?


No - web deployed click once applications. Straight to desktop from browser. Signed and whitelisted.


I had the same reaction to this when I was told by Microsoft; however, this description seems intentionally misleading. Microsoft Support accepts calls for support and bug reports. There's a fee for the support. If it turns out that the issue is a defect, then you won't pay for the support call.

Unfortunately, this was the only way to report a bug at the time.


Yeah, that's definitely possible; I may be misremembering the specifics. I think for us the end result was the same: it made us a lot less likely to help them improve their product.


We did the opposite. With Premier Support, you used to buy a bucket of hours. We had a few folks that learned the lingo and were basically able to attach a product defect to almost any call we made, billable to the product group.

We’d end up with a surplus of hours, and leverage the threat of slashing those hours to get the execs on the support side to push for concessions from the other parts of the business. Premier was basically a revenue generator for us :)


Something tells me they don't mind if the bucket's overflowing so long as you're paying for the water. "Make 'em feel like _they're_ workin' _you_" is a tried-and-true sales tactic


But sadly not enough to break their desktop monopoly.


I found a nasty XSL rendering bug for some version of IE many solar rounds ago. Took them 2 years to get back to me: "this bug is closed because that browser is not supported anymore" or something like that and "you are not eligible for a free copy of Windows X since there are no bugs". Or something like that, memory is fuzzy. But other people were getting free copy left and right for PEBKAC problems.


Microsoft vulnerabilties are now being disclosed on a Microsoft subsidiary website, Github, Inc.


I’m pretty sure their App Store has a 5-6 year old bug where kids are prompted to use their (child) PIN to authorize purchases instead of a parent’s PIN. I haven’t tested it in a long time though.

I tried to make a video demonstrating it a couple of years ago, but VBox video recording mangled it due to a bug. I found it amusing that one bug prevented me from demonstrating another bug.

I tried figuring out how to report something like that to MS and gave up. It’s way too much hassle.


Last month they broke the Power BI service's integration with DirectQuery, marked it "info" without an estimated resolution time, posted a Windows Server-specific solution, and ten days later silently fixed it for the rest of the platforms, including Azure SQL.


That reminds me of all my efforts reporting bugs in a Certain browser today... Except there is no option to pay $100, you just get ignored and sent to WONTFIX.


I'm assuming that policy was designed for general bug reports, rather than specifically for security bugs? Especially if this was 20 years ago.


That's ridiculous


I presume you declined to pay?


The company I worked for at the time decided to pay since it was affecting users and we had to do an annoying workaround. If I remember correctly we got the $100 back a month or so later, with a one-liner reply saying something to the effect of "this will be fixed in a service pack at some point, godspeed."


20 years ago you needed that kind of filter.


Haha. I couldn't even invent something that stupid. I guess they had a lot of cranks reporting bugs? The $100 would make sense if there was a large reward on the other end. Like, if it was deemed to be a real bug, you'd get $10K or $100K.


It sounds like the normal Microsoft support (aka PSS) when you don't have a support agreement. If the problem you are reporting is solved by a hotfix or update, then MS refunds the support fee.


They also refund it if it's a demonstrable bug in the product, even if it's a WONTFIX bug.

The only downside is the fee is now $500, not $100


I wrote this. This is one of five similar reports for MS Teams.

Even outside RCE, just consider the impact of access to SSO tokens and wormability :)


Could you clarify the "one of five" statement please? Are the other 4 vulnerabilities still unfixed, or they are fixed but a write-up is still pending? If there are still 4 unfixed RCE bugs in Teams I'd rather people uninstall Teams than wait for the fix...


It would be safest to assume that you have at least one unfixed RCE bug in Teams, even if oskar has not discovered it yet.


Could you provide a disclosure timeline and the version or indication of the version which has fixed this issue?


You can find both disclosure dates and versions in the report.

As for when it was fixed - I have no idea, as they never told me, one day it just was.


Thank you for reporting it and not selling it on the black market!

I agree the categorisation is very bad.

I hope raising this here will help you get rewarded properly.


> Thank you for reporting it and not selling it on the black market!

I disagree. If MS is going to treat major issues like this then researchers should be selling them to the highest bidder. Maybe that way they'll actually treat disclosures properly.


Pretty bold to advocate for blackhat behavior on one of the most schoolboy-vanilla places on the internet, but I can't say I necessarily disagree with your sentiment. Big tech needs a lesson, but is this really the vulnerability we want? 115 million DAU on Teams...

The amount of damage the NSA or some other state sponsored actor could do with this... It would be very bad to say the least. How bad depends on which state acquires it.

If a script kiddie got it they would likely do a mass ransomware infection; hospitals would get hit, people would die. Millions in crypto would be lost to unencrypted wallets found on the vulnerable machines (yes, people do that..), which could cause some to lose their life savings... People have committed suicide for less.

My point is it's important to look past FAANG being cheap and look at the 2nd- and 3rd-order effects of something this powerful and widespread.


Governments around the world already regularly trade in exploits that are as or more severe than this one.

That isn’t to advocate for brokering to a government, just to say that the market already exists and contains comparable exploits. It’s only a matter of time until we see the next EternalBlue to WannaCry lifecycle.


> look at 2nd and 3rd order effects

.. which FOSS engineers have spent their lives on, while FAANG accumulates patent and SSL money across international borders? Forcing Teams kool-aid, with surveillance built in, down your desktop with the help of the C-Suite and their attorneys?


The ethical thing to do is immediate full disclosure, not selling it and not this (ir)responsible disclosure crap.


> researchers should be selling them to the highest bidder

But what about all of the innocent people who would be harmed by such a callous approach? I'm glad some researchers have a conscience.


> But what about all of the innocent people who would be harmed by such a callous approach?

They should then think again about their choice of using teams. Why should Microsoft rake in money from a shabby product while volunteers have to fix their shit?

Assigning a ridiculously low score to significantly lower the bounty as a billion dollar company is disgusting.


> They should then think again about their choice of using teams.

Try saying that to a student who is using Teams on a school-issued laptop, by no choice of their own.

I'm not in any way defending how Microsoft handled this. Frankly, I'm ashamed of my former employer (though I worked in a completely different division). But your outrage toward the company should not extend to its unwitting users.


There wouldn't be very many unwitting users if their software had a serious reputation for being a serious security risk.


Bullshit. Currently there are millions of children who are obligated to use Teams for their publicly funded education.

And you think these huge numbers get changed by selling blackhat exploits? To do what, teach Microsoft a lesson? All while harming an already vulnerable population (and it's not just children who are obligated to use Teams). As if the long-term goal of educating "unwitting" users is advanced at all by blackhat behaviour.


Let's dump public education!

Deschooling is done on Discord


Microsoft has had a serious reputation for being a serious security risk for the 30 or so years I've been in IT. It's one of the oldest jokes in the industry. People and the world in general clearly do not work the way you apparently think they do.


Zoom still has a ton of users, and every single thing they make or do is a serious security risk (or has been in the past, evidencing a distinct lack of secure development culture).


Windows XP is still seen in the wild.


Problems need to hit the users, otherwise the market is uninformed and cannot work.


There are lots of vulnerabilities in most door locks, does that mean we should go around stealing things because Chubb have made money selling insecure locks?


A wormable, widely deployed, Chubb lock would be interesting.

Let's see how Ring goes over the next few years... ;)


> They should then think again about their choice of using teams.

What percentage of Teams users do you think have a choice in their use of Teams?


If it's on their work machines then it primarily endangers their employer's data, much less their own.


Funny thing to say when we're in the middle of a global pandemic, and more people are working from home than ever.

I work at a university and I've been forced to install that crap on my home computer because I need to teach from home. And so do all the professors at around half the universities I know of in my country.


Interesting, I'm surprised that they don't have to provide you with the tools needed to do your job!

In Australia, the employer is generally responsible for providing any necessary tools or equipment needed to do the job (contractors are another matter, though)


In normal circumstances they do provide the tools needed for the job, as they should. But this was a sudden state of emergency triggered by a pandemic, there were no funds, reactions weren't fast enough... so basically, they didn't.

Anyway, those of us who have research projects (as is my case) typically do have computers provided by the university at home, because research has strange schedules and working from home has always been a need (meeting with colleagues in different timezones, waiting for experiments to complete at night, rushing for deadlines, etc.).

But... it's not really practical to make room for two different desktop computers for my own use in an already space-starved flat, or to work on a laptop for many hours when I could do so on a desktop. So in practice, my home computer and my work computer are one and the same. And it's like that here for most, if not all, the people I know.

We are a Latin country and also tend to live in small flats; maybe in other places it's different. I can imagine that if I had one of those American McMansions, it would make sense to have a home office with a sober, black work computer, a good camera setup and a green screen, and then a gaming room with a flashy gaming computer and huge speakers (near the billiards and darts room, probably :)). But that's not really how things work around here. Here, separating home and work computers is almost exclusive to jobs with high security restrictions. Most people in normal jobs just don't do it because it's not practical.


And then when the company loses business from the disruption, do you think employees walk away scot-free?


I consider that an inherent risk. Not getting a raise because the company made business decisions that turned out suboptimal (such as gaining short-term profits by not investing in IT security) is a risk that any employee faces. If you want a more stable environment you go for a more risk-averse employer, perhaps even public-sector jobs.


That's a silly proposition. If my field of expertise is inherently private-sector, I don't have that choice. Also, I can't solve for every variable when searching for jobs. I choose among the ones I get an offer for, and obviously their IT decisions aren't at the top of my list (nor do I know what those are prior to hitting the desk)


Ruining companies that can't (or won't) get their act together (whether it's security, finance or any other critical and undervalued area) is a short-term pain that fixes the issue. Refusing to do so simply prolongs the problem - at some point you have to say "enough is enough" and tear the bandaid off. If you don't, and you don't do so with severe enough consequences, then businesses will simply ignore what they're being asked to do.

Necessity is the mother of invention; I have no doubt that the opportunities created by blowing away poorly-behaved incumbents will produce a healthy collection of startups operating within the required framework.


You may not see yourself as having a choice but that wasn't really my point. What I was getting at is that being an employee in general comes with a diffuse risk of many factors that can result in not getting a raise or the company even going bankrupt. Many of them are outside your direct responsibility or influence and yet you take up the whole risk package when joining that company. The company getting ransomwared is just one more factor. It's not special. Well, one issue with it is that it requires criminal activity so it's dragging us down to a worse equilibrium where more resources have to be spent on countermeasures. But arguably that cat is out of the bag, so the next best thing that we can do is to make security best practices easy. And microsoft wasn't doing its part here.


Punishing innocent people is not the answer.


They are not innocent. They made very poor life choices in picking Microsoft software. Why should the world reward their poor choices?


To what extent should the blame for any harm fall on Microsoft? They are the ones relying on effectively free labor to protect the innocent. In such a case blaming the free labor instead of blaming the ones relying on free labor seems to create some very bad incentives.

Personally I would prefer just having all new vulnerabilities immediately disclosed once found. No selling, but letting people decide for themselves if they want to continue to use a product after someone has found a vulnerability. I also think the incentives this creates would mean that Microsoft and similar shops would put more effort into testing their own software because they would no longer have the safety net of a grace period when someone finds a problem.


Thing is, we don't know if this was found before by malicious actors and sold and/or abused.

This thing sounds like it is mostly pretty straightforward to find once you start looking - "you" being somebody experienced in this field of research, that is. At least you don't have to construct fancy weird machines (with type confusion, heap spraying and all those shenanigans). It comes down to finding something that can perform code execution in their internal API (here: "electronSafeIpc") and then finding a way to get there (here: an Angular escape bypass / not-properly-sanitized user-provided data) - and you can do both in JavaScript, without having to read tons of machine code.

Given that Teams is a great target because of its large and often corporate user base, I'd be surprised if none of the usual industrial-espionage suspects (e.g. China, the NSA, etc.) had looked at Teams before. And I'd think the chance of them having found the same bug, or a related one, once they looked is pretty good too.

From what I'm hearing, even the (US) military uses Teams sometimes... If that isn't incentive for "interested parties" to look at this thing, then I don't know what is.


> This thing sounds like it is mostly pretty straight forward to find once you start looking

Most security bugs are "obvious" with 20/20 hindsight when explained well. Personally, I think that is an insulting and immature thing to say.


Please check out how much code MS Teams actually has before making statements like this :)

(it’s more than 30MB of compressed JS)


I didn't want to belittle your work, if you think that was the case. It's still outstanding to find things like that on your own, and a lot of work goes into it. Sorry if I gave the wrong impression.

I have analyzed foreign code bases of similar dimensions in the past myself and found critical bugs. The size doesn't say much; it comes down to identifying the "interesting" bits (like electronSafeIpc in this case), which can be hard and tedious, but greatly reduces the code you have to look at in detail. My assertion is that if your name is e.g. China, you will not be turned off by that.


That electronSafeIpc API is actually not that interesting, and a completely standard way to do things for Electron apps.

No, I do agree - from my perspective C/C++ class bugs are more difficult. Maybe they see this as magic as well.

Still, it was painstaking work and in either case CountryX will easily surpass those difficulties.


30MB of hand-written JS? For what's basically a glorified chat client?

With that much code I'd expect an AI to talk to people so I don't have to.


Yeah, that will show 'em...

Then people will move to some understaffed FOSS alternative with 5 people working part-time on it, with bugs just as severe that nobody notices (remember Heartbleed and countless others?)...


Imagine thinking people move to FOSS alternatives.

Imagine thinking PHBs at most companies even care about security.


If the bounty money borders on insignificant, there's always public shaming. Demo the exploit in a controlled environment, and let the media cycle go.


Why controlled? Last time some dude got frustrated and started dropping zero-days pretty much weekly, Microsoft finally hired him to make it stop.


So the people / companies who would be hacked and have their data / systems destroyed are what? Acceptable collateral damage?


While I get your sentiment, I must disagree.

Profiting from the very likely unethical use of the exploit would be unethical.

Instead, this mishandling by M$ should cause researchers to publicly announce the vulnerabilities, which would hopefully cause M$ to change its ways in future dealings.

It is of course easy for me to say this, not being a researcher who lives off the discoveries they make.


Participating in a system that exploits researchers for free labour using societal guilt-tripping is the unethical move here. That means you.


I see you completely missed my point.

My point is that in the case of M$, the defects could be publicly announced to all parties at once as a way of making M$ realize how bad their handling is/was. In all likelihood this wouldn't have to happen for long before they realized their mistake.

Many other corporations do indeed value the discoveries of researchers and do pay accordingly for being notified. Never did I suggest that this should become the industry norm (i.e not paying for private disclosures).

Now, whatever your personal feelings on that idea, it does not change the fact that selling exploits to other parties would be unethical.

Furthermore, participating in a system that promotes assumptions and flawed reading comprehension is not conducive to good discourse. That means you.


You could be a grey hat if you averaged out one exploit turned in to the proper group for every exploit sold to the highest bidder. Flip a coin to be a real grey hat.


"Locks can be picked so everyone should break into homes to prove a point"

Lol, no.


That most locks are pickable is common knowledge and that is why high-risk targets invest in additional security beyond locks.

That crufty electron apps are a security risk is not. So yes, you do need someone to run out into the streets and yell that the emperor has no clothes. Otherwise common knowledge will not be established.


Not selling this is the real crime here. Microsoft's conduct in this case deserves much worse than just that.

Hoping for a reward now is obviously not going to happen - the best you can hope for as a response to an act like this is legal action. In a vindictive way, you can definitely hope they will get significantly damaged by this and in that way learn their lesson, but I doubt it.


Sorry if I am just obtuse but I don’t see a timeline in the linked report on GitHub. All I can see is that you tested against a version of Teams from 2020-08-31. Being able to see the complete timeline of communication with MS from discovery to public disclosure is not necessary but would give a more complete picture of how this went down, and I’d like to see it too if it’s not such a hassle.


There is no timeline besides when I reported it and now minus 2wks. They never told me when the fix was deployed.

There is little value in going through the email chains to note each date :(. The final decision was made 2020-11-19.


Could you put that in the README, is what we're asking, as vague as it may be.

At the moment the 'has been fixed' is the only clue to this in terms of resolution, and it's tucked away; without it it looks like most of the README is attempting to capitalize on the shock/outrage factor.

Edit: Thanks, author has added some dates.

https://github.com/oskarsve/ms-teams-rce/commit/35eac619fdef...


Have you been tempted to build a worm and click send? Not to break anything, just a text popup with an optimistic quote.


only as a thought exercise. the ability to 'switch off the internet' (115 million daily active big corp users) is tempting, but no, not really :)


That's one way to force them not to mark bugs like that "important, spoofing" and "out of scope".


Google Robert Morris to find out how that goes.


Wikipedia:

In 1989, Morris was indicted for violating United States Code Title 18 (18 U.S.C. § 1030), the Computer Fraud and Abuse Act (CFAA).[2] He was the first person to be indicted under this act. In December 1990, he was sentenced to three years of probation, 400 hours of community service, and a fine of $10,050 plus the costs of his supervision. He appealed, but the motion was rejected the following March.[4] Morris' stated motive during the trial was "to demonstrate the inadequacies of current security measures on computer networks by exploiting the security defects [he] had discovered."[2] He completed his sentence as of 1994.


In case people don't know already, he's one of the YC founders: https://www.ycombinator.com/people/


From his wikipedia:

He is a longtime friend and collaborator of Paul Graham. Graham dedicated his book ANSI Common Lisp to Morris. Graham lists Morris as one of his personal heroes, saying "he's never wrong."

To be friends with Paul Graham, I should make a worm. Got it.


Ehh in 1988 that worm was like an alien artifact from the cyberpunk future.

First "real" worm code, multi-platform, multiple payloads, "staging", first practical buffer overflow exploit and it does credential brute-forcing.

Heck it was not until nearly a decade later that people were really doing buffer overflows, and there were a LOT of easy overflows to be found.

I'd make the case rtm didn't just "make a worm" he foreshadowed the next few decades of computer exploitation.

Took a whole bunch of research and ideas, synthesised them, built an actual working "product" a decade or two ahead of its time and released it in a transgressive way.

If you are the kind of person who can do that I'm sure lots of people would like to be friends with you.


or Samy Kamkar.


Samy is my hero


It's one thing to find a security issue, it's another thing to exploit it and easily leads to jail time even if it's harmless.


Maybe I missed it but I do not understand why injecting a null byte allowed you to bypass Angular's protections. Is that a bug in Angular, and if so, is it fixed?
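My best guess at the class of bug, as a purely illustrative sketch (invented code, not Angular's actual sanitizer), is a filter and a downstream parser disagreeing about what a NUL byte means:

```javascript
// Purely illustrative: a blocklist filter treats NUL as an ordinary
// character, while the downstream consumer discards it before use.
function naiveSanitize(url) {
  // The regex sees "java\u0000script:..." and finds no match.
  return /^javascript:/i.test(url) ? 'about:blank' : url;
}

function normalizeLikeAParser(url) {
  // Many parsers strip control characters such as NUL.
  return url.replace(/\u0000/g, '');
}

const payload = 'java\u0000script:alert(1)';
const passedFilter = naiveSanitize(payload);            // unchanged: filter bypassed
const executedUrl = normalizeLikeAParser(passedFilter); // "javascript:alert(1)"
console.log(executedUrl);
```

Whether that is exactly what happened here, I'd also like to know.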


Is there any tell-tale sign this happened to you? I had a really weird experience on Mac last week: I opened up my machine and when I focused on Teams I got a security alert saying something called Endgame from Elastic was demanding permissions. Never downloaded it but there it was in Applications.


It is technically never possible to guarantee tell-tale signs of an RCE. At the point where you're running compromised code, that code could in most cases be constructed as to erase its own tracks. There might be some visible sign at the moment of exploitation, but after that it's kinda over.

(Yes this assumes the RCE escalates to a reasonably high privilege, but that's just a matter of chaining. You can try to go for things like sealed logs, but ultimately arbitrary code can put your machine in an arbitrary state.)

Particularly insidious for this would be the case of data theft. The RCE might load some code to upload your company secrets and keep itself strictly in RAM, and then erase itself when done. With enough blackhat craftiness you'd never be able to pinpoint the exact location of the leak.


If you're using an employer-provided computer then they've likely installed Endgame[0], which is an endpoint security tool (it runs on each device). Endgame was acquired by Elastic[1] last year.

[0] https://en.wikipedia.org/wiki/Endgame,_Inc.

[1] https://en.wikipedia.org/wiki/Elastic_NV



Is this a work Mac? If so then it is likely managed through some kind of MDM system (JAMF etc), and it wouldn't be unreasonable for the owner of the hardware to be pushing down an endpoint agent like Elastic Endgame. Check in with your security team and ask them.


no, as you can see in the first demo it could be completely silent.

not saying you are safe - I don’t know :)


Thank you for making the internet slightly better.


There is, however, some consolation in the fact that only an individual who is already connected to you in Teams can run this.

That's not to say - of course - it's not abuse-able; it just gives some context to the fact that MS calls this "Spoofing", since presumably, your Teams contact is someone you trust. So the bad actor is "spoofing" as someone trustable within your org (or outside it). But it does probably need some social engineering for a bad actor to truly exploit this.

But the threat is still severe, since the above logic only holds up to the point of entry; once the worm has infected someone, the people forwarding it around are genuinely trusted.


One of my health care providers uses Microsoft Teams as their telehealth solution. My city government uses Microsoft Teams for some public meetings. The idea that folks are only using Teams to connect with other trusted parties is comforting, but false.


> Microsoft Teams as their telehealth solution

That sounds..interesting.

I suspect with the ongoing pandemic lots of tools are getting used in interesting ways they were never really designed for, just to keep things going.


Microsoft advertises Teams for telehealth:

https://www.microsoft.com/en-us/microsoft-365/microsoft-team...


It’s bad, but it’s mostly bad because Teams is bad. It’s still better than Amwell, which somehow manages to have multi-second latencies and requires me to manually mute my video preview to stop it looping back my own audio.

The old P2P Skype had better video quality and latency, even when talking to people 4000 miles away, than every video product I’ve used in the last year. Probably not coincidentally, every video product I’ve used in the last year has been web-based. WebRTC is an enormous disappointment.


Teams as their telehealth solution? What is wrong with Doxy.me? It is HIPAA compliant and more privacy-oriented for telehealth than Teams.


I believe Teams is also used for the NBA virtual fan thing, so there are... a lot of people connecting there...


That’s pretty scary tbh. All you need is a single employee to fall for a phishing attack or other social hacking attempt and that’s game over. Everyone from the CEO down is compromised. Zero-click wormability with remote code execution on a platform the entire company uses gives the exploit unlimited reach within a company. That makes it one of the most effective hacking/corporate espionage tools I’ve heard of.


Imagine a bad actor starting work at large corp having all confidential information up for grabs from colleagues on Teams. It is especially scary during these times where a lot of companies moved completely to working from home. Some health organisations also use Teams for group support meetings. Imagine someone being able to rummage through your documents during an appointment.


sure, add guest accounts to that and we are almost on the same page.

I can’t call this “spoofing” as there are many many things you can do with it


I'm confused about the scope of the RCE. Can it escape the Chromium renderer sandbox? Or is that sandbox disabled? Based on the following:

> MS Teams ElectronJS security: remote-require is disabled & filtered, nodeIntegration is false, webview creation is filtered and normally removes insecure params/options. You cannot simply import child_process and execute arbitrary code or create a webview with a custom preload option.

it looks like they did everything right.

I would like this thread to go beyond outrage at how Microsoft handled this, or another excuse to bash Electron. What lessons can developers using Electron take from this? (No, "don't use Electron" doesn't count.)


The article explains the technical details of the render process escape. Contrary to all the current replies to this comment, it does not look to me that this is using a generalized Electron escape; rather, it is using specific main/render IPC calls which Teams has implemented unsafely as the escape mechanism. Perhaps folks are confusing this with an electron sandbox issue because Teams happens to have called the variable containing their IPC APIs "electronSafeIpc".
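The general lesson, then, is about the shape of the IPC surface you hand the renderer. A hypothetical sketch (channel names invented, not Teams' actual API): rather than one powerful IPC call that accepts arbitrary options from the renderer, expose a closed map of narrow handlers that validate their inputs, so a compromised renderer can only do what each handler explicitly allows.

```javascript
// Hypothetical sketch -- channel names invented, not Teams' real IPC surface.
// A closed handler map limits a compromised renderer to the behaviours
// each handler explicitly permits.
const handlers = {
  'show-notification': (title) => ({ shown: String(title).slice(0, 80) }), // clamp input
  'get-app-version': () => ({ version: '1.0.0' }),
};

function dispatch(channel, ...args) {
  // hasOwnProperty guard: no prototype keys like "constructor" slip through.
  if (!Object.prototype.hasOwnProperty.call(handlers, channel)) {
    throw new Error('IPC channel not allowed: ' + channel);
  }
  return handlers[channel](...args);
}
```

The contrast with Teams as described in the write-up is that its "safe" IPC still let the renderer reach calls powerful enough to escape with.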


What lessons can we learn from banging our heads on the wall? (No, "don't bang your head on the wall" doesn't count.)


Yeah, because remote code exploits are particularly an issue with Electron, as opposed to being an order of magnitude less likely with it than with native code.

Basically you got it completely backwards. The typical native app (which is not Rust or Java but C/C++/Obj-C, etc.) only keeps the unsafe part of Electron, and even drops the sandbox (whose holes can always be patched, but total absence cannot).


Complexity leads to RCEs like the one in TFA, not “unsafe” languages.

It is very disingenuous to say that other native apps keep the unsafe parts; there are no such parts at all.

It should be telling that Microsoft, in one of their most successful products of late, have punched an RCE sized hole in that sandbox unwittingly.


Don't use the Teams app, but connect through the browser instead... that is, if there is no other way to avoid Teams, because your company has already migrated from Slack.


Correct me if I’m wrong but the XSS stuff still works then. Having all your Office 365 data stolen may be a problem.


Bang our heads on the wall at the appropriate angle. It's only our fault if we get hurt, we must learn to properly use the tools we have


I bang my head against a wall everyday for a living.


Avoid Microsoft for stuff that is important, and don't install their software on your machines if you can help it.


Yeah, because other companies or FOSS have a better track record.

E.g. we can drop Exchange for email for a safe alternative like Sendmail.


Or wuftpd. I once accidentally dropped my keyboard, and it turned out that the keypresses that generated were a remote root exploit for wuftpd.


What are the chances of that happening, that's pretty neat!


Eh, at first I thought so too, but then it turned out my neighbours nan once accidentally hacked the US Navy that way, so after that I didn't feel all that special any more :(


Hehe, that's hilarious.


With wuftpd? P(x)=1


Maybe I’m in a filter bubble, but I’ve never heard of anyone else assigning such low priority to exploits like this.


Yes, Google has a better track record. You brought open source and - totally unrelated - sendmail into it.


>Yes, Google has a better track record.

Do they? This is just from last month: https://www.zdnet.com/article/google-patches-second-chrome-z...

>You brought open source and - totally unrelated - sendmail into it.

No, you brought the totally unrelated "Avoid Microsoft for stuff that is important, and don't install their software on your machines if you can help it." as if Microsoft is the only place to ever had a RCE...


That's Chrome, not Meet.

As for bringing in unrelated stuff, no that was still within the context of Microsoft's meeting software. It was someone else that brought in the OS.


Reminds me of the old joke: VirusScanner: We found a virus on your machine called Windows. Would you like to remove it?


Starting with anything running on top of Electron.


Especially if said machines are actually built by Microsoft.


In which case they may be nice enough to block some of their software for you https://www.techrepublic.com/article/microsoft-blocks-major-...


>No, "don't use Electron" doesn't count

Why would it count? The situation would have occurred more easily, and been even worse, with a C/C++ native app.


would it though - in a C++ app there'd be way fewer places where people could send you arbitrary code to run over the network; and if you use any kind of high-level network library you'd be hard pressed to have any kind of buffer overflow, as all the buffer handling is done in already-vetted libs such as Qt, Boost, cpp-rest-sdk, etc.


It's in the IPC invocation, so for this app in particular, it'd be in the exact same place.


I have a hard time seeing how. It seems to work by having some JS code that creates a webview get executed. In a pure C++ app there would be no place that would even be able to interpret that JS, or any other "script language", in that pipeline.


[flagged]


Or, you know, reasonable people, with their own arguments, whom you knee-jerkily dismiss...

(Not that Rust's safety over C/C++ is an "opinion" or subject to argumentation).


there are different levels of security for ElectronJS; some, like in this case, are not enough.

I think it will take a long time before we can call ElectronJS secure. there are regular sandbox escapes and that is from what we know publicly


The OP is asking for more detail than “not enough”, though:

“Can it escape the Chromium renderer sandbox? Or is that sandbox disabled?”


to simplify - no it’s not enabled

the real answer is more complicated as it is not necessarily a global setting and depends on what you call a “sandbox”


Thanks. I'd pay (moderately) for the more complicated answer. An ebook on Electron security might be a good idea.


I'm not an expert on Electron security!

But if not addressed to me, there is no need to pay; you can start here:

- https://www.electronjs.org/docs/tutorial/security

- https://github.com/electron/electron/security/advisories

As you can see there are plenty of considerations and pitfalls to take into account. Best option is to enable contextIsolation for everything.

Further, Electron security is closely tied to Chrome security so that is one deep rabbit hole
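As a starting point, the hardening options from that tutorial look roughly like this. This is a sketch of window options only, not a full app, and "preload.js" is a placeholder name:

```javascript
// Sketch of locked-down BrowserWindow options, following the Electron
// security tutorial linked above. "preload.js" is a placeholder.
const { BrowserWindow } = require('electron');
const path = require('path');

const win = new BrowserWindow({
  webPreferences: {
    nodeIntegration: false,   // no Node.js APIs in the renderer
    contextIsolation: true,   // preload runs in its own JS context
    sandbox: true,            // Chromium's OS-level renderer sandbox
    webviewTag: false,        // forbid <webview> creation outright
    enableRemoteModule: false, // no remote module access from the renderer
    preload: path.join(__dirname, 'preload.js'), // expose a minimal, audited bridge
  },
});
```

Even with all of these, anything you expose over IPC from the main process is still part of your attack surface, which is the lesson from this very bug.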


Best Electron security is not using it in first place.


Yeah, let's stick with raw C/C++, that would be much safer...

Or maybe let's use some research language made by Wirth, and get access to all 10 of its packages and the 5 devs worldwide using it :-)


For starters, leave it on the browser.

I didn't mention any programming language.


Telegram Desktop is a cross-platform C++ app. What similar remote code execution exploit has existed in the wild for it?



One of them requires the user to click run on a file, much like running an EXE. The other simply saves potentially malicious data to external storage, which would then have to be run by a separate malicious third-party app. These are far from RCE exploits that execute immediately without poor user decision making, and Rust is not impervious to security exploits similar to these.


C'mon. Just because there is one C++ app without remote exploits doesn't mean all C++ apps are immune.


FYI it's not just PL that factors into security. The engineers, for example.


Rather just keep it in the browser? ;-P


This is safer to a significant degree.


They could ummm.... build a cross-platform UI framework that rivals Electron without the security and memory bloat issues? I think that's the plan with MAUI.


This essentially allows you to infect all (online) machines running Teams in some timespan, because of the wormability, if I understand this correctly. There are 115 million daily active users.

The absurdly low rating by Microsoft is horrendous.


I wonder if the team giving these ratings is the same team responsible for introducing the bug in the first place? I could see why someone in that situation would be incentivized to downplay the severity of a bug report like this.


There is only one rating higher at Microsoft I think. So important is actually pretty bad, but agreed it should get their critical rating according to their own scale.


It could only infect all machines running Teams if all Teams organisations were connected. As is, likely many large orgs are linked together, but I'd wager there are just as many users in small orgs with no/few guests as there are users in large interconnected orgs.


> Microsoft accepted this chain of bugs as "Important" (severity), "Spoofing" (impact) in O365 cloud bug bounty program. That is one of the lowest in-scope ratings possible.

This is beyond belief: an RCE classified as "Spoofing".


The reason is probably to save money. The bug bounty for a critical RCE would be between $10k and $20k depending on the quality of the report. Important Spoofing is rated at between $500 and $3k.

So that is basically a giant middle finger to the security researchers.

Source: https://www.microsoft.com/en-us/msrc/bounty-microsoft-cloud


It wouldn't be the first time Microsoft has screwed over independent security researchers. There's a Twitter thread of a researcher who was accepted into the Azure bounty program, found a lot of important zeroday vulnerabilities, and was paid nothing. In fact he expected to be paid for his findings, and then had trouble with his basic living expenses. Anyone who has worked with bug bounties should know to stay far away from them since you can't get assurance you'll be paid (and companies are not incentivized to pay security researchers).


One thing I don’t understand is why security folks still bother with public bounty programs, when I hear that the market for software reviews is massive and very profitable. Is there a gap in the market for something that can matchmake skilled people with companies at reasonable rates...?


Most of us don't bother with bounties anymore. There are a lot of types of software review so I'm not quite sure which one you're referring to. If you're talking about matchmaking for pentests then you're essentially describing a bounty program, the only difference is that bounty programs don't pay researchers for their time. If you're referring to blog/publications on security then this is the first time I've heard of that market.


I’m thinking of security-oriented code-reviews of various enterprise software. One of my old clients commissioned some last year over a piece of work I made, and apparently they had to go “to hell and back” to source a reputable (and very expensive) reviewer somewhere in California, while I’m sure there must be plenty of UK talent available. They then had someone else pentest it as a blackbox, which is definitely easier to source locally, although the quality can be very variable. I understand it is a very sensitive area, maybe it needs some sort of professional body to provide accreditation and self-regulate and promote reputable members, I don’t know.

I think bounties are an unbalanced system; as you say, pentesters don’t get paid for their time and often don’t get paid at all, like in this case. There must be a better way, where an independent third-party can judge actual severity of the hole and sanction payments.


This is a bit of a guess since this kind of security research is but a hobby to me, but if reviews don't allow you to publicly post your results after the issues are fixed, the best way to build a portfolio would be public bounty programs. And without a good portfolio, you don't get hired for reviews.


Yeah I think this entire industry is fucked up by bastard middle manager types.


Save money? $20K to Microsoft is like $0.02 (if even that) to you and me. Even $200K would be a drop in the bucket compared to the damage from a widely exploited Teams vulnerability.


I was curious how accurate that was…

MSFT has a market cap of $1.62T. A quick Google says "The median net worth of the average U.S. household is $97,300." That works out to 0.1¢.
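For anyone checking the arithmetic, scaling a $20k payout from MSFT's market cap down to a median US household:

```javascript
// Scale a $20k bounty from MSFT's market cap to a median US household.
const msftMarketCap = 1.62e12; // USD, market cap cited above
const medianNetWorth = 97300;  // USD, median US household net worth
const payout = 20000;          // USD, top critical-RCE bounty figure

const equivalent = payout / msftMarketCap * medianNetWorth;
console.log(equivalent.toFixed(4)); // ≈ 0.0012 dollars, i.e. about 0.1¢
```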


It's probably why they have so much money. It only goes one way. They are greedy and unethical just like any other big co; what's worse about them, however, is that they are trying to wear that deceptive image of having changed now, embracing open source blah blah, but it is the same greedy Micro$oft it has always been.


These payouts don't come from the grand central corporate treasury, but from the budget of a director or a VP. They might have a small amount of discretionary funds after their headcount and other expenses are accounted for.


$20k to reward the amount of security analysis that went into finding this bug is an absolute deal for Microsoft. Seriously- a single FTE security researcher is going to be costing MS >$400k a year (salary, bonus, health care, office space, etc).

I work at a BigCo as a recipient of some of these XSSs and I'm awed by the amount of work that goes into them. I always try to overstate the impact to boost the reward- it's not just the bug that they found, but how much of the system they had to look at before they found this. The security folks at BigCo that I interact with are badasses, but it's just so hard to get this level of attention.


Companies offering bug bounties should allow appeals of the amounts/severities they determine by an independent body that is qualified to make these assessments. (Perhaps a respected team of security researchers would be happy to take on this responsibility).

To prevent the appeals process being abused, the appellant should have to pay for the time spent by the independent researchers verifying their complaint. For a successful appeal, the company offering the bounty should have to pay that extra cost, encouraging them not to be stingy with the awards they give out in the first place.


Won't the market just correct them here? If others are willing to pay more for the RCE 0-day, and are more reliable, they'll stop getting the reports and end up scrambling a few times trying to catch up to the curve until they get the message.


I'm in a good position to answer your question. I've been involved with making both bug bounty and zeroday companies, and I have experience selling zerodays to bounties and independent buyers alike.

The truth is that the exploit acquisition market has many legal issues. Zerodium, who is often thought to be the leading buyer, publishes misleading guides and has had unusual timing in between the initial disclosure and hacking attempts on the researcher themselves. Other buyers have non-negotiable sale (not license!) contracts that may result in your zeroday being misused, and you may find yourself in a conspiracy. And those are the reputable and responsible buyers, there are others outside the US that are fronts for Israel/UAE/China. The market has plenty of room for correction, but there's a shortage of ethical buyers.

If you could easily sell an exploit outside of a bug bounty program for more money, you'd see more people doing it regardless of the ethics (see: the NSA doing a bulk of the hiring in infosec; no one I spoke with that applied cared about the illegal surveillance disclosures, and they said they chose it because it offered 100k+). So the researchers currently have no choice, and the bounty programs take advantage of that. When the pendulum swings the other direction, you'll see bounty programs becoming more fair/lucrative.


> The market has plenty of room for correction, but there's a shortage of ethical buyers.

I wonder whom you'd consider an ethical buyer apart from the software maker for a closed source software since no one else can realistically patch it?


Well if you want my opinion, military sales. Most government entities apart from the NSA are likely to use vulnerabilities as you'd expect.


I think most bug bounty programs are so pitiful in terms of time/effort vs. reward that almost nobody is dedicating research on the basis of them. A $10k bounty works out to less than a week's worth of work for a cybersecurity consultant (at consulting rates), and it's not even guaranteed (both in terms of not finding a bug, and the company offering the bounty marking it as invalid/duplicate). Not to mention that the black market will pay far more for a 0day than the companies offering the bounties.


I've successfully completed bounty programs for Google, MasterCard, Dropbox, Pinterest, and some others I can't quite remember. I did the math each time for dollars earned over time spent, and the resulting figure is always under minimum wage. This is for critical vulnerabilities only (P1).


The only company you can get bug bounty money from is the company that makes the software. If you sell your findings elsewhere you're selling an exploit. Which is probably more lucrative in most cases but also far less ethical.

So there isn't much choice here


The market still kind of works: Who is going to bother looking for vulnerabilities in your software if the pay is so much better elsewhere? There may be only one place you can sell a bug, but the security researchers have better places to spend their time.


My first thought is that this has nothing to do with money and the truth is probably that some team wants nice metrics to show their bosses (see? zero critical vulnerabilities!).


I was wondering if there's any weird KPI over at MS that can be gamed by reclassifying an RCE as something less severe.


It would be ridiculous, I don't think that's the case here


What's your interpretation? They even acknowledged that the bug is a critical RCE in the desktop app. Coincidentally, the desktop app is not part of the bug bounty program.

To be fair, the impact on the desktop app is higher since it also has access to the OS and the attacker is not stuck inside the browser sandbox. But from my understanding it is still possible to steal the SSO token. When I think about O365 setups with OneDrive for Business and SharePoint, that means the attacker would have access to all files stored there. That usually means all company-related files that person has. Additionally, the attacker would have access to all emails and messages of the user.

How is that not critical?

And according to the Bug Bounty side, Spoofing bugs "do not qualify for this severity category".


"from my understanding it still is possible to steal the SSO token"

Isn't that precisely what spoofing is?


I wasn't arguing the spoofing part but the important vs. critical part. The same bug on another platform is ranked as critical.

The thing is that according to Microsoft critical spoofing is not possible.


> Isn't that precisely what spoofing is?

Not in my opinion. I’ve always thought of spoofing as a write only type thing where you can impersonate someone, but not have access to any of their existing data. Email spoofing is a good example of this.

Token theft is WAY more severe because it gives you complete access to everything. It’s total control. You can exfiltrate data and that’s what I’d consider the biggest difference, at least in my opinion.


The RCE isn't classed as "Spoofing". The RCE is in a product for which Microsoft don't have any bug bounty program at all (they only run a bug bounty for a very limited number of products, and Microsoft Teams Desktop is not one of them). Hence the RCE falls outside of the classification.

The technicality is still absurd and beyond belief, but I'd say the responsibility for that absurdity falls with company policy, not with the MS security staffer's classification.


Yeah, although technically it's "out of scope", I think there are times when you should stop debating the technicalities and consider the business impact.

I mean, do you look at that demo and think "yeah, that's technically just 'important' let's fix it in 2 months"?


A couple of weeks ago I found an MFA bypass in Azure. I jumped through the hoops and filled out their security vulnerability report form.

I got one automated email from them since, that's all.

I don't expect to get paid, I was just curious to see what the "process" is, and how they treat security vulnerability reports.

The verdict: Badly.


Well, somehow I'm happy if they keep this lower priority than fixing broken notifications.


> Sooo, after around 3 months it ended as-is: "Important, Spoofing" and that the desktop client - remote code execution - is "out of scope".

literally unbelievable. wow.


It's out of scope because Microsoft's bug bounty program is limited to web applications and endpoints.


Honestly, for a severe finding like this in their product I think they should have:

a) Paid out a bonus anyways for the finding (bug bounties do this often, certainly we did at Dropbox)

b) Made this scoping issue more explicit somewhere


This certainly isn't something that is listed on their bug bounty page, and would also be a ridiculous limitation in reality considering the scope of Microsoft's services.


Are we looking at the same page?

Here https://www.microsoft.com/en-us/msrc/bounty-microsoft-cloud is a header "IN-SCOPE DOMAINS AND ENDPOINTS" with a list of domains, described with the following: "Only the following domains and endpoints are eligible for bug bounty awards."

I couldn't find anything that would match the Teams app on the general bug bounty website either (https://www.microsoft.com/de-de/msrc/bounty)


I would assume it would be under the Microsoft office insider bug bounty.


This is incredibly believable for Teams development and bug fixing timelines.


Microsoft Teams is clearly a product worrying about user base growth and nothing else. There are bugs, quirks and performance issues everywhere, and then - out of nowhere - you get an update about its new "AI Real-time Speech Translation for Your Calls!".

They are just pushing new features in and hoping that everything will hold together until they dominate the market. I'm not saying that this is wrong, just that this is a fact for anyone that uses Teams on a daily basis.


sounds like zoom.


What's the reason to even participate in most bug bounties for serious shit like this, knowing you could get 10-100x more submitting to Zerodium? Is it the hope of getting on some 'hall of fame' which might land a job offer?

Like, if I found an exploit for something random like Skype/Slack/etc. that let you run code on any target's machine with zero interaction, there is zero chance my first stop would be the bug bounty program. For serious exploits, I believe you can get up to 2 million bucks with Zerodium. Just seems like a no-brainer.

Now that said, I would definitely use the bug bounty program for boring/low impact stuff like XSS and whatnot that has limited value/impact as nobody else would likely ever buy it for that much higher of a price.


Maybe some people are ethically against selling to an organization that then resells the zero day to governments instead of, you know, fixing the problem.


I consider myself an ethical person and I’d have a tough time ignoring the potential for $1 million. After taxes I could buy a really nice house for cash and have enough money left over to drive a new car for the rest of my life.

One option is life changing and the “ethical” side might not pay enough to buy a gaming PC. Meanwhile the executives at the companies that claim security research needs ethics are making millions of dollars selling insecure apps. It’s like a church asking poor people to tithe IMO.

I actually think it would be better if there were no laws regarding the sale of security exploits. Everything should go onto an anonymous marketplace, and the companies that have affected products should have to pay fair market value for bug discoveries.

Skimping on security and guilting researchers into being ethical is a total scam.


Sounds like maybe you shouldn’t consider yourself an ethical person. Otherwise, if you’re involved in enough startups, you’ll eventually be in a position where you have to ignore the potential for insider trading to make you a million (or several), and it doesn’t sound like you’d be able to.


Those aren't even close to the same thing though. One is illegal and the other, selling to a company like Zerodium, isn't. The ethical objection I have is that you lose control over how that exploit is used if you sell it to someone like Zerodium and I think it's better for users and the general public if bugs are disclosed directly to software makers.

However, the idea that security researchers are guilted into "being ethical" while the (rich) executives for massive, multi-billion dollar tech companies are saving money on security, plus skimping on paying security researchers fair value when bugs are discovered, frustrates me.

It's hypocritical for big tech to expect "ethical" behavior from security researchers when it's a lot closer to "let us take advantage of you" IMO. If it becomes a debate about ethics, I think every time an exploit is sold to a company like Zerodium it's primarily the fault of the tech companies that are exploiting security researchers.


While I certainly understand the viewpoint of being bribable by a million dollars, legality has absolutely nothing to do with ethics.

As you say, clamoring for ethical behaviour at all in this context is terrible and completely missing the point.


No, that's perfectly ethical compared to betraying the company that employs you, because:

    * The money otherwise goes to the pockets of completely-useless C-suites.
    * The exploit is likely out anyway.
    * Nation state actors may indeed prevent yet another 9/11 attack. In the worst case, they don't use it to spread ransomware.


Wow, https://zerodium.com/program.html literally places router RCE at the bottom. I mean, I never trusted my home router vendors, but this is like an ice-cold shower.


In a previous life we customised some firmware for a Linksys router. One of the many issues I found was that the password validation logic was `validatePassword(password) || password == "password"`. Via Shodan I could see many of these devices with remote admin (login) enabled and available on the internet. The overall code quality was easily the worst I have seen in a project.
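Spelled out as a sketch (names and the "real" check are invented stand-ins, since the actual firmware code isn't public), the flaw is a hardcoded fallback that turns the literal string "password" into a universal master password:

```javascript
// Paraphrased sketch of the flaw described above; the stored password
// is an invented stand-in for whatever the firmware really checked.
function validatePassword(pw) {
  return pw === 'the-users-actual-password'; // stand-in for the real check
}

function login(pw) {
  // The || fallback means "password" authenticates on every device.
  return validatePassword(pw) || pw === 'password';
}
```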


Then those researchers need to stop complaining when they get screwed over by Big Corp. I'm definitely not saying that the researchers shouldn't be rewarded appropriately, but we've seen countless times that, even with official bounty programs, these companies don't care about the researcher at all.

If someone still wants to put in all the work, that's great, submit the vuln and reap the good karma but they shouldn't expect more, even if the org they're reporting it to promises otherwise.


One should never stop complaining about bad things. It is important that everyone knows it and is reminded of it regularly. Especially now that it seems to be common knowledge that Microsoft got rid of their bad past with Ballmer and is now one of the good ones with their great new "Microsoft <3 Open Source" approach.


The hope of not letting thousands of people be easily attacked by some shady organization, maybe?


So Zerodium claims their customers are mainly government organisations. I find it amusing and sad. Wouldn't it be more efficient to just force vendors to implement backdoors? Why maintain the lie that citizens enjoy privacy and that vendors are required to keep their data safe? Why the charade?


Not necessarily US government. A foreign government can't force Apple to implement a backdoor in iOS - but they can buy an exploit and use it.


Unbelievably lax response. However, I've encountered a similar response with Microsoft 365 login phishing sites being hosted with a nice windows.net SSL certificate. Sites remained up for more than a week after reporting through official channels (CERT). Never received a response.


Just for comparison: I reported a Facebook phishing site to Netlify, it was taken down within 9 minutes.


It seems like 365 has so many problems whether they are security or uptime related. I'm glad my company hasn't moved over to it yet.


I refuse to install this junk, it's Google Meet or bust for us and so far that has served us well. Zoom, MS and lots of others besides have all had their share of vulnerabilities to the point that I'm not happy discussing anything under NDA on one of those channels. For now Google seems to have their act together on this.


When Google is beating you on chat, you know you've failed.

That's like Apple beating you on price. Or open-source beating you on look-and-feel.


Which is pretty despicable for a chat application.

I blame the constant bloat of unwanted features. Each comes with its own inherent risk of vulnerability, yet it seems like these companies can't help but add "integrations" that nobody wants or asks for from a chat application.


It's not a vulnerability when the private info goes to the Google itself :)


That too is a problem, but at least one that I can factor in.


What about Wire[1] and Matrix[2]? Both self-hostable or hosted with E2E encryption. If I'm not mistaken Wire has E2E calls too, but I dunno...

At least E2E chat is better than MS, Google or Zoom.

[1]: https://wire.com

[2]: https://matrix.org/


I just today switched to Google Chat from Teams and find it severely lacking. I don’t see a way to call or screenshare with another person/group unless I generate a Meet url and paste it in the chat? Is it meant to be that way or our admin has not enabled something?


Meet links are all over the place in what until-recently-was-called-GSuite, for instance in the calendar and in the chat (the little camera icon). Usability could be better, that's a fact, but it has worked flawlessly for me over the last 8 months or so, and if people are used to MS/whereby/zoom/whatever then we typically get comments halfway into an interview day that the video hasn't crashed yet, that the audio still doesn't lag by 5 minutes, and that nobody has been booted out for no particular reason yet. That's how used people are to these glitches.

I'm not a big fan of Google, but the video meeting software (and the little pc that you can buy with a dedicated setup including super good echo cancellation) is at the moment best of the litter.


If you can chat with them, you should be able to initiate a call from the chat window (handset symbol top right). Screen sharing is to the left of it.

At least that's what I can do, and I'm in multiple orgs.


For reasons I don't know, I don't see these buttons. Not sure if it's because some of the colleagues I'm chatting with are presumably still on Hangouts. I haven't bothered to ask them about it though.


These are some of the reasons why I refuse to use the desktop application and on Linux at least, it isn't hard to define a shortcut that works like one; path ~/.local/share/applications/ms-teams.desktop

  [Desktop Entry]
  Version=1.0
  Name=Microsoft Teams
  Comment=Teams without Electron
  GenericName=Teams
  Exec=/usr/bin/chromium-browser --user-data-dir=/home/prussian/.config/ms-teams --app=https://teams.microsoft.com/_#/conversations/General
  Terminal=false
  X-MultipleArgs=false
  Type=Application
  Icon=ms-teams
  Categories=Network;InstantMessaging;
  Keywords=teams;messaging;internet;
  X-Desktop-File-Install-Version=0.23


I do similar things, but a few weeks ago I learned that many of the issues I had with the online Spotify player (slow loading times, incomplete pages, not playing music) were caused by the included ServiceWorker. Luckily I could disable it in my Firefox profile, and now everything works just fine.

Maybe the local version wouldn't have had that problem.


It's because of behavior like this that future Microsoft RCEs may be sold on the black market instead.


Microsoft only grossed $100,000,000,000 last year. What makes you think they can afford more than $500 for a bug bounty?


An ethical way of dealing with it in this case would be to publish that you found a security flaw (without actually disclosing the exact details) and maybe have some trusted third party verify it e.g. another white hat organisation like Google zero?

Then the clients of Microsoft will put pressure to get it paid for and fixed - because they are the ones that bear the true cost of security violations (Microsoft only has indirect costs).


Yeah. This is how it should work IMO. Have a trustworthy organization that confirms bugs, but doesn’t require disclosure before payment. Let researchers place a value on their own work via a “buy it now” option and let organizations negotiate to buy bugs affecting their products.

The only info shown publicly should be a severity rating. If an exploit is fixed, published, or used before a company buys it, the 3rd party could publish proof of previous knowledge about the exploit.

There are a lot of good incentives in that system. The trusted third party will want to do a high quality job of verifying and categorizing exploits because overselling them will deter companies from buying and companies would be increasing their liability when they fail to buy bugs.

For example, in this case, what’s the damage? IMO you could do millions of dollars in damage. In fact, if you went hog wild exfiltrating or destroying data, I bet the industry would value the damage at over $1 billion when you’re being prosecuted.

I wonder what kind of legal hurdles a company would face trying to set something like that up. I think it makes a lot more sense than having companies working on the honor system because they’ve demonstrated they don’t play fair and bugs are severely undervalued.

This is a $1 million bug IMO.


The clients won't care until after all of their files have been encrypted/deleted.


Everyone, literally everyone working on exploits right now will see this and potentially be influenced by how Microsoft chose to handle it.


Reported information leaking from password fields back in Windows 8 days.

I was even busier back then than now and found no practical application beyond getting information about an already filled-in password, but I was still massively underwhelmed by the response, which basically boiled down to "that's funny, thanks, bye".

Last year I found a really ugly glitch where you can easily get files unencrypted past an older (but still available) version of the Azure Information Protection tooling.

This time I haven't bothered to report it yet.


And on the same day Microsoft announces they're enabling guest access by default.


In case anybody else was wondering about this: https://tomtalks.blog/2020/12/important-microsoft-teams-chan...


IMO the best situation _for customers_ would be for researchers to sell their discoveries in an open market, one in which MS is free to pay "market price" (they certainly have the funds).

In the short-term, MS buying these discoveries would allow them to close vulnerabilities, ensure researchers are compensated appropriately, and establish a clear financial cost to poor security. The long-term effects would be increased security research, shorter windows of vulnerability, and more secure software.


MS don't need to pay 'market price', but they need to pay a price reflective of the seriousness of the vulnerabilities.


The former emerges from the latter.


Greyhats with good anonymization need to start forcing companies to take their bounty programs seriously instead of the joke that it is now. We are too nice.

This is a bug that should have a minimum payment of $1 million.


hmm...seems a bit counterproductive trying to build good will by offering a bounty program and promptly nuking said good will with questionable ratings decisions.

Immediate money saved, long term rep damage incurred.


With age one realizes the computer security field is very similar to game dev and entertainment - there is a constant stream of young, naive, and impressionable people willing to do the hard work in exchange for "exposure".


The rep damage doesn't come from the disillusioned/cheated young guy that swears to never work with them again but from people discussing the matter publicly on hacker news.


Isn't that from one of the CIA books?


So glad Microsoft installed teams on our server with an update even though we never asked for it.


How sure are you that it wasn’t some sort of Active Directory group policy which did the install?


Because they forced the install with an office 365 update.


Fun. What is interesting to me is that my work computer just got an unannounced update that included a MS Teams pop-up. I get that my IT team dropped the ball by just allowing this to show willy-nilly, but I don't think we can take MS off the hook for installing and promoting their own solution in the user's face (along with telling me the snipping tool is going away, resetting all file associations, and making PDFs default to IE...).

Whatever happened to user agency?


>Whatever happened to user agency?

When was it ever an expected thing with the likes of MS? I'm hard-pressed to think of many examples where they haven't pushed themselves forward using anticompetitive market practices when it made sense for them.


Read the report fully - RCE is "out of scope", but the impact from the stored XSS itself is crazy!


Out of scope for the bounty, but it's still very valid


Flagging because headline incorrectly implies the vulnerability still exists.

Mods, can you please update the title?


A clear timeline would be useful to have. When was this originally reported to Microsoft, when did they reply, how much back-and-forth was involved, was a proof-of-concept of the attack sent to Microsoft, when was the fix released, and what version has the fix?


>Sooo, after around 3 months it ended as-is: "Important, Spoofing" and that the desktop client - remote code execution - is "out of scope".

So, if you use Teams for your work comms, you should know that they do not care about security or privacy.


I want to see IT people of bigCorp justifying that you must use Teams because of security.


How about cross posting / leaking messages across random organizations?

We use Slack, but have to use Teams with one partner, only for video. One day I found a message that was weird (in the sense that it was from nobody within the organization), also marked as "(no title)". When I clicked on it, it gave an error: "We weren't able to access your conversation".

Here's the screenshot https://imgur.com/a/C2IKK2b


Too bad this vulnerability couldn't infect every Teams installation, cause them to uninstall, and then make its way all the way back to the original source code and make it self-destruct. Teams is a fucking nightmare of a piece of software. When your shitbarf architecture makes Slack look like a reasonably good performer, you should just fire everybody and start over from scratch.


I am very unfamiliar with electron and security in general. But generally I understand electron as a browser-like sandbox for desktop applications.

Can someone please explain how the "electronSafeIpc" might be implemented? Naively this functionality seems to be the very dangerous part of this exploit, and seems to be a workaround of electron's intent to sandbox your application?


Electron uses IPC to communicate between processes. Each process is like a thread, but really it's more like a Chrome tab. Most apps have at least two processes, main and renderer. The IPC passes JSON events between them. "electronSafeIpc" appears to be a Microsoft-implemented function that wraps the native IPC functions and presumably provides some sort of safety checks. I gather the safety checks weren't good enough, so once one process was taken over, the researcher managed to use the IPC to access the main process. That's still sandboxed, usually, but...


Maybe add that it's a "zero-click, wormable, cross-platform remote code execution in Microsoft Teams" :)


4 hours ago I received an e-mail in my Outlook inbox about all my e-mails being leaked. Now, I'm extremely paranoid about being phished, since Outlook's spam filter is extremely useless, and I cannot check where the URL in the message points to because of safelinks...



Is Teams still using an older version of Electron? Also looks like they are still using Angular, thought Teams was being moved to React, although not sure if that would help here in any way.


Teams is the biggest crap I’ve seen in the recent years. Unbelievable.


Tangential, but how is the RCE situation for past few years for Windows, Mac and Linux, especially for unsafe languages?


>‼ That's it. There is no further interaction from the victim. Now your company's internal network, personal documents, O365 documents/mail/notes, secret chats are fully compromised. Think about it. One message, one channel, no interaction. Everyone gets exploited.

Yeah, so? He makes it sound like some novel nightmare, but that has been the case with 0-day, RCE bugs for half a century, and we have had tons of those...


I mean, the impact varies per organization but can be pretty severe in certain contexts. I'm considering the fact that many organizations have the majority of their proprietary data living on things like Sharepoint, OneDrive, and so on (as opposed to sparse data stored locally).

I would consider this a bit more unique.


Ooh, this makes me happy that I bothered to invest time in getting the Teams app to run within a docker container.


Docker is not a sandboxing tool. On the contrary, it has a very significant attack surface.


Yikes. Windows and Office are fine, but I don't see trusting Windows with Xbox or any other service.


Well, I already use the web version of Teams because the native Linux version is a RAM hog.


Microsoft this is terrible!


Can someone explain why the nullbyte disables the expression filtering?


A null byte in a string can typically be treated as 'end of the string'.

I'm assuming what happened is that the expression filtering code stopped analyzing the string after the null byte, but other parts of the code continued past it (e.g., because the string stores its length separately, which extends beyond the null byte), so the code that processes the supposedly sanitized string interprets everything after the null byte (the payload).
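The actual filter code isn't shown in the report, but the mismatch described above is easy to model: one component scans the input C-style (stopping at the first NUL) while another processes the full buffer. The function names and the `{{ }}` expression syntax here are illustrative:

```javascript
// Toy model of the guessed behavior: the filter scans like C code
// using strlen (stops at the first NUL byte) ...
function filterSeesClean(input) {
  const cString = input.split("\0")[0]; // C-style: everything up to NUL
  return !cString.includes("{{");       // reject template expressions
}

// ... while the consumer processes the whole buffer, NUL included.
function reachesTemplateEngine(input) {
  return input.includes("{{");
}

const payload = "harmless\0{{7*7}}";
console.log(filterSeesClean(payload));      // true  - filter thinks it's safe
console.log(reachesTemplateEngine(payload)); // true  - expression still gets evaluated
```

Any time a sanitizer and a consumer disagree on where a string ends, whatever sits past the disagreement point ships unsanitized.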


Is spelling "zero" as "xero" a pun I don't get?


More likely a typo, considering that "z" and "x" are next to each other on the keyboard.


DVORAK users might have something to say about that, you insensitive clod!


Really, the assumption that QWERTY is the most used keyboard layout is the hill you decide to die on?


And QWERTZ and AZERTY users, which I'm pretty sure far outnumber Dvorak users.


I came here hoping for an answer on this and got none.


It's zero-click, not "xero-click". @dang, please fix the title.


I honestly wondered if this was some printer-driver based attack for a second


I thought it's one of those named bugs like Heartbleed, Shellshock etc.


The accounting platform Xero would like this fixed!


I was wondering why Xero had a wormable RCE involving Microsoft Teams and exactly what the "click" product was.


Title says Xero, repo says zero, misleadingly made me think it was associated with Xero.


xero?


If a company responds poorly to bug reports, you should sell the bugs instead of reporting them.


Exactly how terrified should I be right now?


The vulnerability was patched, so not too terrified. You should, however, be concerned about MS's overall nonchalant attitude towards a very serious vulnerability.


I hope somebody can clarify this



