Keybase Exploding Messages (keybase.io)
413 points by aston on June 20, 2018 | 142 comments



Author here. I'm seeing the same comment in 4 different places on here, worded with various amounts of hostility. I now wish I had addressed this in the FAQ on the post.

There's the suggestion that an exploding feature is worthless, given your partner can just take a screenshot or video of what you sent.

This suggestion misses (1) that your relationship with a partner is disproportionately likely to be okay at the time you send something (i.e., you trust them THEN), and (2) that there's a whole different class of adversary who compromises your or your partner's devices in the future.

SnapChat, as far as I know, has none of the cryptographic implementation of Keybase. And yet it has likely protected hundreds of thousands of kids from severe bullying. Consider the teen girl who sends the goofy sexy pic to her boyfriend. Before the advent of exploding messages, he might've iMessaged or emailed that to a friend, just one friend, his best friend, out of pride. And that friend sent it to a few more, and so on. Not out of malice, but suddenly the whole school has seen her pic of god knows what and she literally wants to die. But with Snapchat, taking a screenshot is knowingly violating a social agreement. It's also violating the trust of his current girlfriend - everyone knows it's not okay to screenshot that shit. And the number of people who would do that is much smaller. Second, consider the far worse scenario: she dumps him a month later and until then he has been NiceGuy. But then he becomes r/niceguy, the guy who will look through the old pictures and spread them around.

Finally, let's not forget that your device can be compromised by loss, theft, or hackers, at any time. Exploding messages are gone when that happens.

People can be tricked, compelled, coerced, blackmailed, and hacked. Or just turn evil. All in the future. Which is what a timed message protects against. This is why Keybase is doing this. Paired with encryption it's quite powerful.


The most important purpose of these exploding message capabilities is destruction of data that doesn’t need to be archived.

The primary threat is compromise of a device. Keybase allows you to revoke keys but that assumes you are aware that the device has been compromised. Which is already too late for sensitive messages.

The average user doesn’t understand data persistence, or secure destruction of data. Manafort is a good example of this. I wish apps just expired messages by default. I don’t understand why WhatsApp doesn’t have this feature.


As a user of messaging services, I almost never want to delete a message. I want my digital memory extension (my phone) to store messages so that I can easily recall my conversations. I'd only want to delete a message if it were sensitive, and I rarely message such sensitive things. Most people fall into this camp; it's rare for someone to want no messages kept at all.

Why do you want your messages deleted by default when you use one of these secure messaging clients?


Plenty of people feel exactly the opposite, and avoid using messaging services for many purposes because of it. They want the bulk of what they say to fade away, because it is ephemeral, and they don't want to worry about it forever. More and more people are aware that, even if what you say today is perfectly benign, tomorrow it may be a problem. And why create potential problems, when there is absolutely no benefit to you in putting your request to your partner to buy some eggs on the way home on a permanent record?

You might worry about not being able to find something you said. Others worry about being able to find something they said.

I personally chose my defaults appropriately, with work stuff getting archived and everything else not even getting backed up. And realistically, even the work stuff is completely useless after a couple of years; a problem I have is not finding information, but finding current, useful information.


Ephemerality is liberating. A large portion of social media use is not about exchanging information (which would be useful to persist) but about socializing. Just as you probably wouldn't feel comfortable if every conversation you had with your friends while hanging out were recorded, a lot of users (particularly young users) feel more comfortable expressing themselves when they know with reasonable certainty that their communications are not being recorded online. It's often for sharing moments and making jokes and hanging out, not for conveying actionable information.


I deliberately don't pay for Slack because of this. The 10,000-message limit is perfect: enough memory to be useful, not enough to be dangerous. I'd love to see it as a feature in other messaging apps (e.g. "permanently erase all messages over 6 months old").


Does Slack actually do that, though? Or does it just soft-delete the older messages, hiding them from the UI? (I don't think they make a statement either way.)


My understanding is that it just hides them from the UI; if you upgrade your plan, you get access to all your old messages and files that were previously "gone".


Yes this is correct. In fact even without a paid subscription you can access all files that have been added to a Slack through the web UI (myslack.slack.com/files). You can't see the related messages, but all the files (images, snippets, etc) are available as one big list.


I did not know this. Useful, thanks.


Hell, I wish messaging services made conversation much more searchable. I hate having to scroll and scroll to find some past conversation topic that maybe had interesting thoughts/links/shared media.


As far as I know, Slack and Telegram are currently the two leaders in the “searchable” area of messaging apps.


Any client with proper log files (many IRC clients, Pidgin, etc) is much better than Slack, which uses word indexing rather than full search, meaning it doesn't find the message "helloworld.com" when you search for "world".
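To make the distinction concrete, here's a toy sketch (not Slack's actual indexing code, just an assumed simplification) of why a word index misses substring matches that a plain log scan finds:

```python
# A word index tokenizes messages on whitespace, so a search for "world"
# never sees the token "helloworld.com". A substring scan over plain-text
# logs (what grep does) finds it.
messages = ["check out helloworld.com", "hello world"]

def word_index_search(query, msgs):
    # match only whole tokens, as a word index would
    return [m for m in msgs if query in m.split()]

def substring_search(query, msgs):
    # match anywhere in the text, as a log-file grep would
    return [m for m in msgs if query in m]

assert word_index_search("world", messages) == ["hello world"]
assert substring_search("world", messages) == ["check out helloworld.com",
                                               "hello world"]
```

The trade-off is that word indexes scale to huge archives, while substring scans over local log files stay exact.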


I have never searched my message text history, with the exception of trying to find images sent to me. Never content. Most companies I've worked at have a similar policy of not archiving text messages from internal chat. There's no reason to keep content; minimizing the amount of data you archive is a core element of security and risk mitigation, for a number of reasons. Plenty of large organizations don't archive employees' Lync/internal chat messages for similar reasons. And from a threat perspective, you don't know ahead of time what information an attacker will find useful.


Keybase isn't making messages exploding by default. There's a button to opt in to the feature.


Sure, as a recipient I want to keep a history of everything. But as a sender, I might want sometimes to send a message with some guarantees that it will self destruct after a period of time, mission impossible style.


Because WhatsApp is done with the endeavor. The founders would have wanted this kind of feature, but they've since parted ways after selling out to FB, probably because FB isn't interested in such features or in privacy.


> SnapChat, as far as I know, has none of the cryptographic implementation of Keybase. And yet it has likely protected hundreds of thousands of kids from severe bullying.

Is this true? (Asking with no implication of criticism or being a leading question - I just genuinely don't know the answer)

I can believe both that these teens were going to sext each other anyway and Snapchat is keeping them safer, or that they weren't going to and Snapchat has convinced them that it can be done more safely than it can actually be done.

Has anyone done studies on this? (Is it even possible to do studies? I suppose you'd either need information from Snapchat itself on how often they detect screenshots, or from high schools on bullying cases over time and whether Snapchat is involved + hope that bullying cases that get escalated to adults at high schools is a meaningful proxy for actual bullying.)

I'm inclined to buy your argument that because of the implementation making stored pictures not the default, and the social pressure not to take screenshots, probably Snapchat's disappearing messages are better than iMessage. But this seems like the sort of thing that's dangerous enough (in either direction! if the technology works and we refuse to deploy it, that's bad too) that hard data would be useful.


Sexting on Snapchat is rampant in high school today (I'm a current high school student). The self-destruction principle has allowed people to feel comfortable about sending explicit photos to each other -- in relationships, it's almost ubiquitous.

That isn't to say that Snapchat has removed the potential for spreading explicit content. As someone mentioned in another comment, screenshotting the snap circumvents the system. It's also just as easy to take a photo of the screen with another device -- both an untraceable and a permanent record of the photo.

As a whole, Snapchat has had a net positive effect for people my age. I can attest that teenagers make unwise decisions now and again, and Snapchat has helped in that those rash decisions are less likely to bite us in the future. While I don't have the data to back up the claim that hundreds of thousands of kids have been protected due to Snapchat's impermanence, I certainly wouldn't be surprised if it was true. It's the most popular social network in my demographic for a reason -- it oozes the ephemeral teenage spirit.


There's a theory about this: https://en.wikipedia.org/wiki/Risk_compensation

> "Risk compensation is a theory which suggests that people typically adjust their behavior in response to the perceived level of risk, becoming more careful where they sense greater risk and less careful if they feel more protected. Although usually small in comparison to the fundamental benefits of safety interventions, it may result in a lower net benefit than expected."

There's a book too, with a special emphasis on financial crises. https://www.theguardian.com/books/2015/oct/12/foolproof-greg...

> "In the run-up to the crash, consumers and even policymakers had come to believe that smart regulators and forward-thinking bankers had made the world of money a much safer place.

> "The fundamental insight of Ip’s new book, Foolproof, is that this very belief was a key factor in the lead up to the crash. When people believe they are safe, they take more risks – they drive faster, in motoring terms – and “speed makes everything worse”. Or as the economist Hyman Minsky, whose work Ip revisits, put it: “Stability is destabilising.”

There are applications in our field too:

- safety features for users might make them behave less safely (e.g. exploding messages)

- better reliability of systems might lead us to put more trust in them, leading to even bigger outages when they occur (e.g. centralising trust in cloud providers)

It's interesting to see things like Chaos Engineering (https://principlesofchaos.org/) introducing intentional "danger" into a system in order to improve system-wide stability. Of course, maybe Chaos Engineering will give us more trust in our systems which may lead us to take even bigger risks...


Yup, that's basically what I'm getting at, thanks for the links!

So, I'm okay with risk compensation if people are net doing better. I don't think that "if even one person is hurt by this, that's too much" is a meaningful basis for decisions, especially when there's a risk that even one person will be hurt by not doing the thing. So at the risk of reducing people to numbers, if, say, 100 teenagers send sexts when they otherwise wouldn't have and get screenshotted, but 1,000 teenagers send sexts when they otherwise would have sent them to a non-disappearing-by-default client, and now their photos don't get copied because of social pressure / high-but-not-impossible technical barriers, that still seems like a clear win.

That's the sort of data that I think would be very interesting to inform good engineering decisions, and also pretty impossible to get.


I would also expect people's propensity to take screenshots to be correlated to how sensitive the image is. For example, I would expect many people to take a screenshot of a nude pic their partner sent just so they can look at it for longer than the default timeout of a snapchat message; this is even more likely for teenagers who may be less mature about not betraying the other person's trust.


Indeed, that's the digital equivalent of the $5 padlock. Sure, you could pry it open with a crowbar, but most people won't. IMHO the situation is more of "opportunity makes a thief" rather than "keeping honest people honest" - crossing the line is very explicit in both cases, analog and digital.


I don’t think you can turn back the clock and do studies with any sort of control nowadays. The generation using Snapchat is the one prior to mine - they saw the value from my generation getting bit over and over from text logs and pics getting posted. Sexting existed the second the technology was there for it.


> I can believe both that these teens were going to sext each other anyway and Snapchat is keeping them safer, or that they weren't going to and Snapchat has convinced them that it can be done more safely than it can actually be done.

Related to

https://en.wikipedia.org/wiki/Risk_compensation

(I also don't know the answer!)


That's certainly true of Snapchat in the past: http://www.businessinsider.com/snapchat-doesnt-delete-your-p...

It's unclear how they protect images today, but they have never once mentioned any use of encryption.


I guess I quoted poorly - I meant "Is it true that Snapchat has likely protected hundreds of thousands of kids from severe bullying," not "Is it true that Snapchat does not use encryption in its implementation of disappearing messages".

I think it is possible that Snapchat has net caused more kids to get bullied as a result of ill-advised sexting, by being the company advising ill. I can see both arguments and I don't know which one is actually true.


Slightly unrelated note, but you're both also talking about the way the official Snapchat app chooses to handle snaps (opt-in and notifying the user), when there's a multitude of workarounds and non-official snap apps only a Google search away that make it extremely simple to save a picture someone sent you without the sender knowing.

You can't really prevent phone-screen capture, but Snapchat could certainly afford to put its money where its mouth is and try to provide users with a safer experience by cracking down on 3rd-party apps.


> But with Snapchat, taking a screenshot is knowingly violating a social agreement.

My exposure to SnapChat suggests that this is not the case. Screenshotters are treated more like rascals than felons. This may depend on the content of the message though. My incoming messages tend to be more silly faces than nudes.

Edit: Or rather, it is the case, but the social agreement is a lightly enforceable one. Closer to not holding an elevator door than eating a coworker's lunch.


This is what people don't seem to get: exploding messages aren't an airtight solution to the risks of sharing sensitive information with someone. You're always taking a risk when you do that. Exploding messages change the default way that sensitive information is handled, and changing the default can have a profound impact, for all the reasons you lay out.


My issue is with the way they are marketed. I would be cool with just a “don’t retain” flag that does just that.

But making a big deal about “exploding” is dangerously misleading; many users will make incorrect assumptions.

I’m not worried about screenshots; I’m worried about my plugin that archives all text inbound to me and then requires me to respond to a subpoena, etc.

From a security standpoint, this feature should not impact behavior since it is meaningless. If users don’t understand this, then it will cause heartache.


I don’t see your point. If you archive all inbound text, this feature is clearly not for you. This is like saying a door lock isn’t useful for anyone because you keep your window open.


The people I chat with do not know that I archive (nor should they) and will have an inaccurate and misleading expectation of behavior.

To use your door analogy, it's like telling someone that a door lock keeps people out when there's an invisible teleporter that also gets installed with the door lock.

It’s a hard analogy to follow because me retaining information you sent me is different than me breaking into your house. If you send me info, it’s mine. The weird mental model is that you still control what you give to me.


If I saw that flag without your comment, I would have no fn idea what it does or how.


A use case I run into often is with people I trust, so I don't fear they'll take screenshots, but I don't want to keep that data in the chat history. Most of the time I turn to ProtonMail with its expire option; now I can use Keybase. It's usually when I need to pass a password to coworkers.


I hate to use this adjective, but this feature is cute. I love the little bomb. I love the concept. I love how you've applied it to several types of things. And I love how you've taken something that could be complicated and made it simple.

Keep up the good work, guys!


I love this! And I love the bomb gif. I still miss your original logo, but have come to like the little girl.

Anyway, maybe it's just me, but I never communicate anything to anyone that would be hugely problematic if published. That is, for that persona. Which is carefully compartmentalized from other personas. So Mirimir has rather restrictive limits. My meatspace identity has even more restrictive limits. But some of my personas have no limits, and are basically throw-aways.

Edit: And that's basically how accounts work on HN, right? I mean, throwaway use seems quite common, and accepted.


The assumption being that the personae are not linkable to each other. Is that a realistic assumption?


Well, it has been for me, so far. But then, it's my main hobby these days, and I take extreme care.

If you're interested, I explore that and related issues in one of my series on the IVPN website.[0] There's also an old guide on nesting VPNs and Tor with VMs.[1] And a tribute to Kevin Mitnick, featuring onion SSH hosts for chaining.[2]

The tl;dr is that compartmentalization is the key. At all levels. At physical levels such as hosts and VMs, LANs and vLANs, and uplinks and proxy chains. And at behavioral levels, such as interests, forums and social media, projects, and language and writing style.

Mirimir is my only main persona that writes about privacy issues. He has temporarily had a few secondary personas for particular projects, just for casual deniability. But none of my other personas have written at length in English.

0) https://www.ivpn.net/privacy-guides/online-privacy-through-o...

1) https://www.ivpn.net/privacy-guides/advanced-privacy-and-ano...

2) https://www.ivpn.net/privacy-guides/onion-ssh-hosts-for-logi...


I think the most succinct way to put it:

You send a message to someone whom you trust (and who therefore won't screenshot it). If their device is later compromised, forward secrecy ensures the message can't be retrieved.

Even revoking the compromised device is insufficient, since the attacker could have retrieved your chat history long before the user realized they'd been pwned.
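As a toy illustration of the mechanism (an assumed, simplified model -- not Keybase's actual protocol), the key point is that only the per-message key needs to be destroyed; the ciphertext left on a compromised device is then useless:

```python
import secrets

class EphemeralStore:
    """Toy model: ciphertexts persist on disk, keys are wiped on expiry."""
    def __init__(self):
        self.ciphertexts = {}   # survives a device compromise
        self.keys = {}          # deleted when the timer fires

    def save(self, msg_id, plaintext: bytes):
        # toy XOR "cipher" with a random same-length key; illustration only
        key = secrets.token_bytes(len(plaintext))
        self.ciphertexts[msg_id] = bytes(p ^ k for p, k in zip(plaintext, key))
        self.keys[msg_id] = key

    def explode(self, msg_id):
        # destroying the key is what "exploding" amounts to
        self.keys.pop(msg_id, None)

    def read(self, msg_id):
        key = self.keys.get(msg_id)
        if key is None:
            return None  # ciphertext alone is unreadable
        ct = self.ciphertexts[msg_id]
        return bytes(c ^ k for c, k in zip(ct, key))

store = EphemeralStore()
store.save("m1", b"secret")
assert store.read("m1") == b"secret"
store.explode("m1")
assert store.read("m1") is None  # later compromise finds only ciphertext
```

A real implementation would use authenticated encryption and secure key erasure, but the shape of the guarantee is the same: once the key is gone, revocation timing no longer matters for that message.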


This is a great rationale, but I think it belongs in the feature marketing and UI, not just the FAQ.

As publicized (by Keybase and every other platform), exploding messages appear to put control of post-receipt management in the hands of the sender. This is especially credible coming from Keybase, since you guys are educating a lot of people about the possibilities of careful crypto (e.g. forward secrecy). This has risks... you mention the Snapchat user who was protected from bullying, but what about the teen who wouldn't have sent that pic in the first place but felt safer because of SnapChat -- only to be bullied over a screenshot anyway?

Your description here is that exploding messages make it easier for both sides to announce and abide by a social contract about deletion. A name like "flag messages for auto-delete" (I'm sure someone can do better) would set the right impression.


Don't forget a major reason for message accumulation: laziness. People often just don't bother to delete private messages. Especially true after long conversations because there might be stuff to keep in there somewhere.


I don't think anyone's being as hostile as you make it out to be. They're just talking about how you can't really guarantee safety, which is true.

And I find it weird that you're comparing yourself with Snapchat. Snapchat is a casual app, targeted at a completely different audience than the people Keybase targets (at least that's the impression I got so far)

Also, Snapchat is a mobile-only product, which makes all the difference. It's much easier to detect screenshotting on mobile than on desktop. And as far as I know, Keybase is a desktop-first app. So it's kind of ridiculous that you're comparing yourself to Snapchat.

I don't know if you are aware of above distinctions or not, but if you're not aware of this, there's something wrong here. You guys are supposed to be completely aware of all these subtle differences. And if you ARE aware of this, why are you trying to make these claims pretending there's nothing wrong?

I have nothing against Keybase, I'm just pointing out the faulty logic in this specific comment you're making (which happens to be hostile towards those who are just pointing out the issue with no trolling intent)


> And I find it weird that you're comparing yourself with Snapchat. Snapchat is a casual app, targeted at a completely different audience than the people Keybase targets (at least that's the impression I got so far)

I don't think they're comparing themselves to Snapchat; I think they're using a hypothetical situation that everyone can understand in order to explain the threats that an "exploding message" protects against. Snapchat is used merely because the scenario is easy to understand.

EDIT: grammar


Keybase may have started from the technical community b/c of its foundation with how it handles identity and encryption, but I definitely don't view it as an app targeted at a different audience. It is an app that can be used by the general public and I use the mobile version quite often. I don't find the comparison odd at all.


Fwiw, as a casual user not particularly at security risk: I really enjoy ephemeral chat. I don't like Snapchat as a main chat application (i.e., a Telegram-esque replacement), and aside from that I don't have many options. I think we're going to try Keybase out, assuming it has native desktop clients.


I will suggest that if you add this to the FAQ, you spend more time talking about things like "your device can be compromised by loss, theft, or hackers, at any time; exploding messages are gone when that happens," and less time talking about how people can go from seemingly a Nice Guy to r/niceguy when a relationship ends. Make relationship drama a footnote, not your primary emphasis.


They always push features to their limits and then criticize. Even Telegram's "screenshot taken" notification can be overcome by taking a photo/video of the chat with another phone. But the hassle of doing that is sometimes not worth it, so one can estimate the likelihood of a leak, while remaining completely unsafe against "special forces". We figured this out during one of our in-house intrigues, but didn't do it even with three phones on the table. Boring, unproductive, and shady methods were high enough barriers to stop us. Do a good thing and don't worry about the pedants.


I agree that a feature doesn't have to be 100% foolproof to be beneficial. I also agree that leaving sensitive things lying around "by default" is a poor approach to security, and think that software should facilitate automated cleanup. However, I fundamentally object to the subversion of my will by my device or any program running on it. In my opinion, DRM in any form is not a solution; it is inherently evil.

I wouldn't mind messages that were flagged for automatic deletion after some time interval, if I were also provided with controls for when and when not to honor such requests. But currently Signal, SnapChat, Keybase, and others don't provide me with such a choice - they do what the sender requested, regardless of whether or not I approve.

It goes without saying that providing such an easily accessible option would almost certainly result in it being used at times in socially inappropriate or distasteful ways. But consider: do you really want to give up control of how your device behaves in an attempt to prevent others from behaving poorly? Perhaps applications should focus on providing practical security (i.e., facilitating, not forcing, automated removal), and leave the social aspects up to the humans to sort out.


Do you know https://privnote.com ?

I think it is very easy and useful. It is great to have something like this on Keybase.


I would be hesitant to trust a controversial screenshot of text because I know that can be faked so easily. A lot of people don't have that awareness, though.


Another feature of Keybase's exploding messages is that when they expire, the text is replaced by the md5sum of the message. So a faked screenshot can (potentially; I haven't verified this) be proven to be faked by appealing to the md5sum in its place, crucially, without needing to reveal the contents of the original message.


Sorry, I was wrong. I misread something about md5s in a chat thread on KB. Can't find it now because there's no searching in KB (yet!).

Exploded messages are just replaced with an image of what people are calling 'ashes'.

Further conversation on KB about this points out that hashing the message would compromise the secrecy. I still think it would be a neat feature.
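The secrecy concern is easy to demonstrate: if the client published a hash of the plaintext in place of an exploded message, anyone could confirm guesses about low-entropy content offline. A minimal sketch, using MD5 purely because it was the hash mentioned above (the problem applies to any unsalted hash):

```python
import hashlib

# Suppose the exploded message were replaced by md5(plaintext).
published = hashlib.md5(b"meet at 6pm").hexdigest()

# Short, structured messages have so little entropy that an attacker can
# simply enumerate candidates and compare hashes.
guesses = [b"meet at 5pm", b"meet at 6pm", b"meet at 7pm"]
recovered = [g for g in guesses if hashlib.md5(g).hexdigest() == published]
assert recovered == [b"meet at 6pm"]
```

A keyed construction (e.g. an HMAC whose key is only revealed when disputing a forgery) could avoid this, but a bare hash leaks exactly what the explosion was supposed to destroy.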


That would only work if everything else about the photo was identical - device, resolution, carrier, time, battery level. Seems very unlikely one could substitute even identical text in a screenshot with enough accuracy to get the same hash from an image file.


The md5sum would be of the message itself, not the illicit screenshot.


That makes sense. I think my HN incoherence limit was exceeded here at 2 am.


I'm not a Snapchat user but doesn't that app, at least on Android, alert senders when a receiver takes a screenshot? You can still take a picture with a second device and that functionality isn't totally portable, but interesting feature. Do you think that concept has any utility here?


If someone's determined, they'll root their Android and capture them anyway.

I think it's a great feature if you think of it (exploding messages) not as an assurance against someone who shouldn't be trusted, but that they won't forget to clean the trash.


Good thing there are no other devices in the world that can take a picture of another devices screen. /s


Can you make that disclaimer obvious in the software? "Keybase exploding messages only work if who you're chatting with doesn't have a hostile client or intent"


That would come uncomfortably close to the toothpick instructions in the Hitchhiker's Guide to the Galaxy series (http://hitchhikers.wikia.com/wiki/Wonko_the_Sane). Does anyone think that technology can stop people from divulging secrets?


Fine, but I never want to receive one of these. How do I turn it off?


I used to be against this style of 'DRM' -- either analog hole (screenshot or physical camera) or client subversion (client logs all messages out of band forever); but I think I misunderstood the use case. This is about changing behaviours to be more ephemeral; An expectation that your messages are not there forever. In a world of "unlock your phone or arrest" these features are very important. Thanks for rolling this out KB


Nice, succinct response in the FAQ to those who don't care about privacy:

"I have nothing to hide"

Because no one is trying to hurt you


... currently.


... to your knowledge.


As I understand, Keybase chat is open-source? (https://github.com/keybase/client)

I don't have time to read through the code right now, but I'd love to hear how they implemented exploding messages with untrustworthy clients.

I've thought about it a few times before, and it seems one of the few places that closed software has an advantage - You can't easily force third-party clients to delete messages.

If they've solved that, I'm really interested to learn how it works!!


It is impossible to implement this feature "safely" even with trusted clients -- worst case I take a screenshot or even a photograph of the device displaying the message before it explodes.

If you don't trust the person at the other end, this is never going to work. It's more useful for "we both agree that we don't want a paper trail" kind of thing.


A dead simple way to thwart the “screenshot” attack is to release a tool for accurately falsifying a screenshot. I’ve never seen this employed in practice though.


Photo, audio, and video evidence should already be dismissed until one is able to verify the integrity and source. All of these can already be believably faked - it's just a matter of educating people that a layperson can easily create fake things by using tools developed by research teams.

Text is the easiest to fake if you can identify the font used; any image editor will work. HN uses 9pt Verdana: even without using dev tools I could fake your post to say anything I wanted, since it would just be 9pt Verdana on a solid background, text-wrapped at 1050px.

See: https://www.youtube.com/watch?v=ohmajJTcpNk & https://www.youtube.com/watch?v=AmUC4m6w1wo


Not even that much effort, just open the browser's dev tools and change the text in the post to say whatever you like.


I'm aware - but I specifically excluded dev tools, since faking text in scenarios where dev tools don't exist (e.g. chat programs that aren't running in the browser) is still trivial.


That's an interesting way to combat certain kinds of attacks - those in which the screenshot 'reveals' embarrassing, illicit, or illegal communication: things you've done or intend to do that have no external evidence aside from the message itself.

But it does not combat attacks that are not embarrassing but rather a release of information you are known to have but which is intended to be kept secret, or which is easily verified: if someone captures your social security number, private key, or home address in a screenshot, you'd better be really good at bluffing.


If I release data that only I have, it’s obvious that I released it. If I release data that two people have, I can counter your screenshot with a faked screenshot of my own.


How is any bitmap editor not such a tool?


I think they're talking more along the lines of those online meme editors. Yeah, I could download The Lion King, clip the frame with Simba, pull it into Photoshop, add text, and then upload it to Imgur. Or I could go to one of those online meme editors, click the photo, enter my text, and get a link to the meme I made.

The whole point is to lower the bar so that anyone can make a passable copy, thus removing all confidence that any screenshot is genuine.


If it’s made by the company making the messaging app, it will be pixel perfect, and will only take a few seconds to make instead of a few hours. And if you create one for a date in the past they can use the UI they had at that time.


Apps do screenshot detection, but again, nothing beats the old Polaroid.

Any DLP or DRM can be circumvented using analog means.


Absolutely - That's my understanding as well.

There are ways to mitigate (snapchat detects screenshots, etc) but no way to fully prevent - Someone could always use an external camera, etc.

I was just really hoping that they had come up with some sort of cool technical way to stop the ability to decode messages after XYZ time, even if they couldn't prevent it from being copied once decoded.

For example, imagine if a message were wrapped in two layers of encryption - one from the user, and one with a key that you have to retrieve from keybase.io. If you weren't in the right time window, you wouldn't be able to retrieve the second key.

There are lots of problems with that particular approach, which is why I was hoping they had come up with something awesome, not just asking the client nicely to delete it. It's a nice feature either way, though.
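The two-layer idea above can be sketched in a few lines. This is a toy illustration only: the XOR "cipher" and the `KeyServer` class are hypothetical stand-ins (not real cryptography and not any actual Keybase API), showing how an outer key held by a server could become unavailable after a time window closes.

```python
import hashlib
import os
import time

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    NOT secure -- for illustration only."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(x ^ y for x, y in zip(data, out))

class KeyServer:
    """Hypothetical server holding the outer key; refuses to release it
    once the time window has passed."""
    def __init__(self, ttl_seconds: float):
        self.key = os.urandom(32)
        self.expires_at = time.time() + ttl_seconds

    def fetch_key(self) -> bytes:
        if time.time() >= self.expires_at:
            raise PermissionError("message expired; key destroyed")
        return self.key

# Sender: inner layer with the recipient's key, outer layer with the server's key.
recipient_key = os.urandom(32)
server = KeyServer(ttl_seconds=60)
plaintext = b"meet at noon"
inner = keystream_xor(recipient_key, plaintext)
outer = keystream_xor(server.fetch_key(), inner)

# Recipient: must be inside the window to peel the outer layer.
peeled = keystream_xor(server.fetch_key(), outer)
recovered = keystream_xor(recipient_key, peeled)
assert recovered == plaintext
```

One of the "lots of problems", of course, is that the server is now a trusted party: anyone who records the outer key before expiry keeps the message forever.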


This is exactly how most DRM schemes work. You ping the keyserver; it checks whether you're still authorized, and if you are, it sends you a key. There's even a bunch of tech that got built into commercial OSes [1][2] and hardware in an effort to try really, really hard not to expose decrypted content outside some protected path. But if a chunk of decrypted content is displayed on a screen, there's always the analog hole, e.g. taking pictures or video of the display.

Nonetheless, it'd be interesting if all this effort, which got done because of the lobbying of major media rightsholders, were reused for interpersonal communication.

[1] https://en.wikipedia.org/wiki/Protected_Media_Path [2] https://arstechnica.com/gadgets/2016/11/netflix-4k-streaming...


I think the solution is pretty simple: encrypt with a one-time pad and then store the pad in a box that will burn that pad at your desired time; make sure no one can ever see the pad and your secret will be pretty safe.
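The pad half of this is trivial to sketch; the hard part, as the replies below note, is guaranteeing the box actually burns. A minimal illustration of the one-time-pad step (hypothetical helper names, not a real product):

```python
import os

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad: the pad must be truly random, as long as the
    message, and never reused."""
    pad = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

secret = b"the cake is a lie"
ciphertext, pad = otp_encrypt(secret)
assert otp_decrypt(ciphertext, pad) == secret

# "Burning" the pad: once it is truly gone, the ciphertext is
# information-theoretically unrecoverable -- every plaintext of that
# length is equally likely. (Actually erasing all copies is the part
# no software can guarantee.)
pad = None
```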


And how do you make sure the box will burn at the desired time? What if someone stomps on the fuse?


Integrated timer with small nuclear power source.


If you haven't done so already, try clicking on any of the text of the blog post.


Nice touch.


You might be surprised, but for some people this feature can be life or death. My team has actually been waiting for Keybase to have this. We work in countries where some of us are regularly taken aside by the local police or armed forces, and our phones are checked to see if we have anything against the current government. We constantly have to make sure our communication leaves no traces. We'll be moving to Keybase very soon. Thank you for this!


Why don’t you just use a client that deletes messages? Why would you wait for keybase to implement this?


We've been waiting for Keybase; it doesn't mean we've been waiting with no solution in the meantime. We want to use Keybase because of many of its other features.


As a security layperson my initial reaction was "how can cryptography help with expiring messages, once it's decrypted it's decrypted, that doesn't sound right", but I'm curious if I'm understanding correctly that this is actually two separate features: 1) clients voluntarily respecting "please delete this message at X time" and 2) forward secrecy. And Keybase has tied them together for UX reasons since people tend to not have an intuitive understanding of when they might want to use forward secrecy, but they always want it in the case of exploding messages.
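Feature (1) on its own is just client-side bookkeeping. A minimal sketch of a client that voluntarily honors a "delete at time X" attached to each message (all names hypothetical, not Keybase's actual code):

```python
import time

class ExplodingStore:
    """Toy message store that voluntarily purges messages whose
    deadline has passed. A hostile client could simply skip this."""
    def __init__(self):
        self._messages = []  # list of (explode_at, text)

    def receive(self, text: str, lifetime_seconds: float):
        self._messages.append((time.time() + lifetime_seconds, text))

    def visible(self):
        """Purge anything past its deadline, then return what's left."""
        now = time.time()
        self._messages = [(t, m) for t, m in self._messages if t > now]
        return [m for _, m in self._messages]

store = ExplodingStore()
store.receive("soon gone", lifetime_seconds=0.05)
store.receive("stays longer", lifetime_seconds=60)
assert store.visible() == ["soon gone", "stays longer"]
time.sleep(0.1)
assert store.visible() == ["stays longer"]
```

Feature (2), deleting the decryption keys on a schedule, is what makes this more than a polite request: even a device seized later has nothing left to decrypt.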


This is not functionality meant to ensure the other party no longer has access to the message after X time; it's just convenient opsec. If you don't want someone to know X after some time, you should never tell them in the first place (they don't need to hijack the Keybase client; they can simply "remember" the message). Instead, this makes it easy to limit the paper trail with a nice UX.


This is a good summary. Keybase supports adding new devices to your account at any time, and for that reason people often don't want forward secrecy. (Their message history would be gone on their new phone.) But that isn't a problem with exploding messages.


Of course this is not to address the case where the recipient takes a screenshot. I still find disappearing messages useful to reduce the attack surface: if, say, the phone gets taken away and unlocked by someone else, or a backup of it is found and restored elsewhere, or the phone gets compromised at some point, at least not all previous messages are exposed. A useful feature, not meant to address scenarios of malicious recipients or previously compromised devices.


I had to stop using Keybase after this issue [1] cropped up: when you re-install your OS, you lose access to the "device" and have to provision a new one, even though the machine is the same and you're still in possession of an uncompromised private key.

Apparently this happens a lot [2-7... probably more]. Unfortunately, this renders Keybase unusable for me because, even though I still have my private key, I cannot access my laptop's Keybase when I reinstall it.

[1]: https://github.com/keybase/client/issues/3460#issuecomment-2...

[2]: https://github.com/keybase/client/issues/3559

[3]: https://github.com/keybase/keybase-issues/issues/1952

[4]: https://github.com/keybase/keybase-issues/issues/1985

[5]: https://github.com/keybase/client/issues/4260

[6]: https://github.com/keybase/client/issues/2357

[7]: https://github.com/keybase/client/issues/2675


I'm not entirely sure what your issue is here. Why not reprovision (either with a different provisioned device or your paper key) and give it a new name?


Because it's the same device as before. I only have one name for it.


Just call it device2?


That makes it impossible to recover my original device.


You don't need to reuse the original device name, just append a number to it. Think of it as device incarnation 2.


and?


But it's not the same device: in removing your old operating system you've removed the device keys Keybase uses for encryption.


A comment in your [1] link points at https://github.com/keybase/keybase-issues/issues/1952#issuec... which explains why device names are immutable.

> The reason of this restriction is pretty important: we want people to be able to think of devices by name - say, when seeing them in a list, or when talking about them - and never have to think about key fingerprints or id's. The only way to achieve this safely is to make a device<->name map immutable and global, using the sig chain. Otherwise there are endless caveats and visual explanations needed showing the evolution of a device name over time.

> Consider: if an intruder steals one or more of your keys and starts doing crap to your sig chain, they still can't change the definition of "iphone6s-white". It is set in stone, which is crucial to maintain the abstraction that "iphone6s-white" is a certain key.

I agree that it sucks to have your device name become permanently unusable; I've hit this myself a few times, and it's mildly annoying to find that I have to pick a new device name in Keybase even though my local name for the device hasn't changed. But removing this restriction opens up a security vulnerability.


> corporate messages

In this day and age, I do not advise this. Check with your company's compliance officer or corporate counsel before doing anything that is designed to remove evidence of communication.


Consider this:

  I send an exploding message, set for 1 day, to Bob.

  Bob checks his chat a week from now.

  Does Bob get the message? Or has it already exploded?

I guess I'm asking when the actual explosion timer starts - when the message is sent, or when it is read? For group messages, do all parties need to read it before the timer starts?


From the FAQ:

Does the timer begin when the message is sent or received?

Sent.

This seems like the only sensible answer for group chats. And we can't have a different answer for 1-on-1 chats and group chats. That would confuse people. Not the kind of person who reads an FAQ such as yourself, of course.

So our answer is simple: you set a timer and the message is gone after that time.


I believe Signal uses the other policy, that the timer starts when the message is read rather than when it's sent.


>And we can't have a different answer for 1-on-1 chats and group chats.

What might be the reasoning for Keybase not doing it, when Signal does start the timer on the "receive" end?


It narrows the attack window. It's not hard to imagine a scenario where something happens to the receiver, and as the sender you don't realize this until you've sent a few messages, but the third party who got hold of the receiver can't access the device for some time. Hopefully by the time they get in, the messages have self-destructed.


A valid point! Thanks for postulating!


There are a couple of important differences, though someone from Signal might need to correct me if I get this wrong:

- Signal never deletes keys until after a message is read, because the key schedule and the message history are closely integrated. So if I send a "30 seconds" disappearing message, but you read it a month later, that will work. Keybase doesn't work that way; we delete keys on a fixed schedule, generally about a week. The "start the clock after sending" rule fits our key schedule better, without creating confusing cases at the one week boundary.

- Keybase is designed to support "very large" groups, like thousands of people. In that setting, a "start the clock after reading" rule would be a problem. It's unrealistic that all N thousand members will ever read a given message, and that would make deletion less reliable.


This is like Snapchat: they're offering something they can't actually guarantee. Of course Keybase users are generally going to be more knowledgeable than Snapchat users, and most will understand the limitations.


It's still better to delete the message than to leave it there "because you can't guarantee it 100% either way". Defense in depth.


I quote this far too much on security but "Perfect is the enemy of good." Meaning, people often dismiss improvements because they're technically imperfect.

And it isn't just random forum discussions, major technologies like secure DNS, secure email, etc were held back years because nobody could agree on compromises needed to make improvements.


Yeah, I agree. The fact still remains, though: Snapchat users, through ignorance, were under the impression that it was impossible for their photos to be shared with others. This was due to Snapchat's failure to inform its customers and the experience level of those customers. It's notable that the blog post doesn't mention this at all, but on the other hand I'd think the percentage of Keybase users who don't immediately understand this is extremely small.


This very article shows you the problem with "features" like this.

You see that video demonstrating the feature? Notice how you can read the content of the message which was supposedly deleted?


If you don't trust the receiver, you should not be sending them anything sensitive to begin with.

This feature just eliminates the messages in case something happens to the recipient or their device, if either becomes compromised.

Let's say I work in IT and user Bob forgot his password again and needs a temporary reset. I can message him a temporary password that expires in 3 minutes, so he can log in and set a new one.


I trust the receiver.

I don't trust future bad actors who get hold of the device, including the receiver should they turn.


In theory, not having the signatures means the messages could be falsified, but many people would still trust a screen capture. Regardless, the signatures themselves could be extracted before "exploding", so the feature really is promoting unsafe security assumptions.


There is a section about repudiability near the bottom of the article.


I want all my data on the internet to explode after N days.

I keep asking for this. Google, Facebook, where is that feature?


click on some text in the article, it "explodes" :)


Did you see the hidden message? I'm not sure what it means. (Treasure at the root of esabyeK)


https://keybase.io/esabyeK which is keybase backwards


I saw that too! (nothing in /keybase/publick/esabkey I could see). Let me know what you find!


That was pretty annoying. I tried to copy some text but didn't drag correctly so it was interpreted as just a click and everything went away.


Can someone explain the FAQ item around "My team uses Telegram and I'm scared shitless."? I musta missed the joke


How does this differ from the existing "Message deletion/retention" feature that was present before this? Did the existing feature delete messages, but not keys? Is the old feature and the new feature combined, or are they still separate?


anyone care to expand on the practical applications/implications/threat model where this makes sense?


It's for when you trust someone but don't want to risk either your or their device being compromised in the future. For instance:

- if your or their phone gets taken at a border crossing

- if your or their phone gets taken by the FBI (which is how they recovered encrypted WhatsApp/Signal chats from Michael Cohen's phone)

- etc


Or in a much simpler and common case: you trust someone right now to run a non-modified client and not to take screenshots, for example based on a corporate policy.

Months later the relationship turns sour, and they are fired or denied a promotion. They can't then go through the archives any more to take the screenshots.


thank you - this is what i was looking for


It works as long as no one takes a screenshot.


Or even extract the text + signature before it "explodes". Keybase messages don't have repudiability, so anybody who has received a signed message should be considered to always have that message.


Though the FAQ at the bottom of the article suggests some sort of minimal repudiation support is in the works.


According to the article, dank memes and corporate espionage are apparently some of the uses.


Now that there is an exploding feature, can I finally explode this annoying notification about not providing my full name and description?


[flagged]


What non-trivial threat model? It requires one side to have already been compromised, and a new client to be written that doesn't throw away the message. For any secure messaging scheme or encryption protocol, how do you actually protect against that?

Without authentication and without keeping the remote party secure, NOTHING will protect you (besides the threat of violence, I guess).

<science-fiction> There is currently no way to encrypt a message using a secure DNA-based public-key encryption that makes it so only that person can read the message. </science-fiction>

You have to show it to whoever is "authorized" at some point. We hope the keys are kept safe to assure us of that.


I think this is precisely the target demographic--rather than cluelessly keeping a log of sensitive messages, the end user can cluelessly "explode" their sensitive messages. Of course, the clueless user will probably forget to do this anyways and it should probably be a default feature for the clueless organization.


Your language is stronger than I'd use, but this feature really is irresponsibly promoted at best, and maybe just flat-out irresponsible altogether for a "security" product.


Heard about this last week from a friend on the keybase team, happy to see it launch on time :) congrats!


Seriously, do we need a stupid animation and a silly-looking "ka-boom" image? It just comes across as trying too hard to be cute and ends up looking childish and stupid.


Metaphors like this help people understand what's happening. If the message just vanished, that could be for any number of reasons. But with this animation it's clear that the message is being erased. Keep in mind not everyone is a hackernews-reading computer expert.


Though I am not personally a fan of cute stuff like that, if it helps in making secure messaging usable for everyone from IT experts to their grandmothers who have never touched a computer I am all for it.



