Someone is stealing unpublished book manuscripts in a phishing scam (nytimes.com)
147 points by ruddct on Dec 26, 2020 | 94 comments



It is almost as if e-mail isn’t a secure medium for communication...

Look, I am not a security expert so please correct me if I am wrong about any of this:

Every time PGP for the masses is suggested as a solution it gets dismissed as being too complex or difficult to wrap your head around, but all these scams would not work in a world where authors and publishers only trust signed e-mails.

In my mother’s case I am sure pgp is too complex, but shouldn’t we demand it in a professional context?

Whenever some company gets socially engineered through e-mail, the response is “we were targeted by a super tailored phishing attack, bla bla” as an excuse for digital negligence, which is basically what not using digital signatures for e-mail amounts to.

Or are there things I am overlooking?


> Every time PGP for the masses is suggested as a solution it gets dismissed as being to complex or difficult to wrap your head around, but all these scams would not work in a world where authors and publishers only trust signed e-mails.

No, that's not why it's dismissed. Security experts don't advocate for PGP for two reasons:

1. It requires constant vigilance. If humans en masse were capable of constant vigilance in a security context, we wouldn't really have a problem with phishing in the first place.

2. PGP uses relatively old cryptography which is easily misimplemented and doesn't feature forward secrecy. So again to maximize security you have to rely on your users doing something manual: in this case, generating and sharing new keys at some interval, which then need to be verified on the other end etc.

Speaking to the more general point on phishing: people on HN are usually very overconfident in their ability to spot phishing emails. It's easy to spot simple ones reliably. It's fairly easy to spot pretty good ones when you're expecting it. It's functionally impossible to consistently spot very good phishing emails. I say this as someone who has run simulated phishing email campaigns on software engineers in a large tech company. Even many security engineers would get caught out by the best phishing emails, and what's worse: the set of engineers who would be fooled would change depending on the day you ran it.

Humans are fallible. Attackers do not need to compromise most people in your org. They need just one person with privileged credentials to have an off day and not run through an unrealistic checklist, in an organization with hundreds to tens of thousands of people. The best phishing emails will combine excellent social engineering with a legitimate technical break that privileges them to send email on behalf of a domain. You will not spot this reliably, no matter how technically savvy you are. It will pass the technical checks you have.

The best approach to mitigating phishing campaigns is endpoint security. You should obviate the need for employees to even use passwords in a corporate context, and you should use authentication systems which aren't phishable.


> Humans are fallible.

This is the real problem that techies tend to ignore. You can have the most technologically secure communication platform in the world, but it all falls apart the second someone circumvents it. Phishing attacks are usually engineered to convince people to circumvent security protocols.

For example, a phishing e-mail might claim to be the person's boss, claim that the boss lost their phone, and ask someone to send the documents via e-mail "just this once" to close an urgent deal until the boss can get back to the office and work with I.T. to fix their phone. Underlings don't want to get fired for ruining the deal by ignoring direct orders, so they send the documents over.

Past a certain point, increasingly sophisticated security measures begin to increase the chances that someone will choose to circumvent the security protocols. At the extremes, people become so accustomed to the idea that the security protocols are too slow, complex, and failure-prone that circumventing them becomes a weekly or monthly occurrence just to get their jobs done on time. Once you reach this point, it's easier than ever for phishing attacks to convince people to do bad things.

If you try to force everyone to use PGP all the time, you're going to end up with a lot of employees communicating on unofficial channels simply because they want to get their jobs done and move on with life.


I don't disagree with any of those points (except perhaps to point out that always verifying through another channel can still be a viable solution if any breaches aren't too far reaching).

I would like to add that it's easy to be overconfident when running simulated phishing attacks too. At one place, reporting an email as suspicious would trigger a scan of the email which crawled every link, including the /phished/<my-unique-guid> links. The people with the worst scores were precisely those who followed corporate policy to the letter and reported every suspicious email. It took about one cycle of that nonsense before everyone had email rules deleting the simulated phishing emails.


> You should obviate the need for employees to even use passwords in a corporate context, and you should use authentication systems which aren't phishable.

What are best practices or products for this?


1. Two-factor authentication enforced on everything.

2. Access control policies which minimize privileges to only those which are needed.

3. Put everything behind corporate SSO instead of per-account passwords and login pages.

4. An authenticating proxy that enforces policies on each individual machine using something like webauthn.

The first recommendation limits the extent to which accounts can be compromised if users mistakenly enter their passwords. The second limits the damage which can be done when accounts are compromised. The third limits the general attack surface of passwords entirely, as does the fourth. In particular: the fourth also allows you to remove your corporate VPN while leveraging SSO. It also makes it easier to enforce the granular access control policies.

These things can be a heavy upfront technical investment, but there are off the shelf solutions and products for each of them. They're also easier than trying to train humans to be superhuman.



security keys maybe? Seems hard to phish a physical key touch.


The key touch is mostly irrelevant, much bigger win is that the protocol actually verifies the domain it’s authenticating against.

A key touch on fakegoogle.com can’t be proxied to authenticate on google.com


WebAuthn is the mechanism that cares about DNS names. The Security Key - as a separate piece of hardware implementing CTAP2 (Client To Authenticator Protocol 2) - is not able to itself verify which web site you're visiting.

So, that verification step takes place inside your web browser. In this specific context, a vulnerability is that your employees might install software which is also able to perform the same steps but, unlike say Chrome, does not do DNS name checks; perhaps it even intentionally always tries to generate authentications for google.com.

On a very locked down platform, such as an iPhone, or some employee laptops in corporations where employees aren't trusted to install software, there should be no way for that software to run. On Android, Google get to bless binaries with the privilege to do this, they bless the built-in Android browser, Chrome, production builds of Firefox, and I suppose it's likely there are similarly popular browsers I haven't heard of, but there is no way for a normal out-of-box Android device to just install some garbage adware that does WebAuthn even with a manual APK install. If you lack that privilege, when you ask Android to talk to a Security Key it fills out the parameter where a DNS name would go with a per-app ID. So you can use Security Keys to do cool stuff from an app, but you can't fake WebAuthn.

e.g. you can work out the ID assigned to your app, get the backend code your server people wrote for WebAuthn, and spin up another copy that checks that ID instead of your DNS name, and now you've got a one-touch in-app sign-in that works with fingerprint sensors on popular phones the same way WebAuthn does for your web site.

The user presence detection means that even if the DNS name matching mechanism is defeated (e.g. you have software install privileges because you work in the IT division, you install software from a phishing mail) the user needs to actually touch the sensor/ press the button/ whatever to signify presence, which requires further social engineering each time. In a spear phishing attack this is definitely conceivable but it's one last opportunity for your victim to say, hang on, what's going on here?

User Presence is a signed bitflag (it's also mandatory on most of the cheap devices anyway) which means it can't be faked even on the cheapest possible CTAP2 implementation, and a WebAuthn server implementation can have confidence in the value of that bitflag when received, the WebAuthn spec. says the implementations should reject authentications where that bitflag is not set.
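
To make those two checks concrete, here's a rough server-side sketch (Python, purely illustrative; a real relying party should use a maintained WebAuthn library and must also verify the signature over authenticatorData || SHA256(clientDataJSON)). It verifies the RP ID hash, so an assertion scoped to fakegoogle.com can never check out for google.com, and it rejects assertions without the User Presence bit:

    import hashlib, struct

    def check_authenticator_data(auth_data: bytes, expected_rp_id: str) -> dict:
        # authenticatorData layout: rpIdHash (32 bytes) | flags (1 byte) | signCount (4 bytes, big-endian) | ...
        rp_id_hash = auth_data[:32]
        flags = auth_data[32]
        sign_count = struct.unpack(">I", auth_data[33:37])[0]

        # The RP ID hash binds the assertion to a domain.
        if rp_id_hash != hashlib.sha256(expected_rp_id.encode()).digest():
            raise ValueError("assertion is for a different RP ID (wrong domain)")

        # Bit 0 is User Presence (UP); reject assertions where it isn't set.
        if not flags & 0x01:
            raise ValueError("user presence flag not set")

        return {"user_verified": bool(flags & 0x04), "sign_count": sign_count}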


right, good point


> In my mother’s case I am sure pgp is too complex, but shouldn’t we demand it in a professional context?

Professional context is tons of people like your mother.

But also, PGP web of trust would break in larger adoption.


I always have a wry smile on my face when I read the lazy assumption that a large proportion of the workforce is not made up of large numbers of “someone’s mother” or “someone’s grandmother”, let alone the fact that for some reason it always has to be a woman in these contexts.


Our fathers can't even turn the blasted things on.


> But also, PGP web of trust would break in larger adoption.

Agree with you. My guess is that very quickly people would start to ultimately trust all their friends and colleagues. There would be phishing bots to convince you to trust other bots and fake profiles, etc.


In cases like this it doesn't matter so much, this attack wouldn't even work over LinkedIn's web of trust.


Why wouldn't it? If you ultimately trusted a colleague that was phished into trusting the attacker, then you would trust the attacker to be who he claims to be.


They would need less believable stories and to impersonate interns, subcontractors, etc.

The pattern in these attacks can be extremely brazen, like impersonating the editor themselves or a more senior colleague, precisely to resist social checks.

It would be pretty strange for a long-term editor to have few trust paths to themselves within their own publisher. (And each attempt to gain a new connection inside the publisher risks someone noticing the email address is a fraud of their own company, causing revocations.)

If the attack is 100 or 1000 times less effective it is probably written off as a complete waste of resources.


Professional context would be demanding PDF, HTML or Word Docs over plain text, relying on a Virus Scanner. Everything on MS Windows. That's easy enough for the average phishing campaign.

Enigmail or average-joe PGP integration does help, but your phisher won't use PGP. What helps is proper corporate culture. No HTML, no PDF, no office docs over email. Best, no email at all.


What's worse is it doesn't have to be complex: if all standard email clients let you manage PGP keys in a simple and user-friendly context (with an option for 'advanced settings'), just give users a padlock icon and show any emails not sent using PGP with a red padlock, the way browsers do for HTTPS. Hell, something like Signal for email could work, with its forward secrecy. I am not saying Signal itself, just the approach.

I am still sour that KeyBase got sold off. I was hoping they would add an email client and we could all get @keybase email addresses, and they would finally charge for the service. I would easily have given them $15 a month for a small family plan (for me and my wife), just because KeyBase is beautifully done. It could probably be fine-tuned, but it was great.

Edit:

Please if someone from KeyBase ever reads this, please make your efforts open, some sort of open source foundation that fully owns the rights to KeyBase and is allowed to later on go commercial. It's a damn shame a decent tech is going to just perish. Do not open source it when it is far too late! All your users who know you've sold off the project have no strong confidence in it anymore.


Haven't looked into it enough...

https://keys.pub/

not sure if you can self-serve.


You aren't. Also, if email clients showed the actual From address, it would stop most of these.


For tech-savvy users, showing actual From addresses would be a big step forward from the modern trend (thanks, Apple?) of showing only the display name of the sender. Sadly recent versions of Thunderbird seem to default to this as well, but an extension fixes that.

The problem for most users is that they won't recognise people's email addresses as signal - it's just noise to them. They don't know how to parse a domain backwards from TLD/ccTLD to determine where it goes, and even if they did, homographs and other international characters can fool them fairly easily.

Maybe the solution is something like SSH with trust on first use? Where users get alerted to a new display name and are asked to approve it the first time they send an email. Then a bundle of sender email, DKIM/DMARC/SPF success is stored locally, and future emails need to match that, otherwise the user will be warned this might not be the right user.
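
A rough sketch of that idea (Python; the storage format and field names here are invented for illustration, not any existing client's): pin the address and authentication results seen for a display name on first contact, and warn when a later message disagrees.

    import json, os

    TOFU_DB = os.path.expanduser("~/.mail_tofu.json")

    def _load():
        return json.load(open(TOFU_DB)) if os.path.exists(TOFU_DB) else {}

    def check_sender(display_name, address, dkim_ok, spf_ok):
        """Trust-on-first-use for email senders."""
        db = _load()
        seen = db.get(display_name)
        if seen is None:
            db[display_name] = {"address": address, "dkim": dkim_ok, "spf": spf_ok}
            json.dump(db, open(TOFU_DB, "w"))
            return "new sender - confirm out of band before trusting"
        if seen["address"] != address:
            return "WARNING: '%s' previously used %s, now %s" % (display_name, seen["address"], address)
        if seen["dkim"] and not dkim_ok:
            return "WARNING: DKIM used to pass for this sender and now fails"
        return "ok"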

It seems without cryptographic identities (vouched for through PKI, a la S/MIME) that this is a hard problem to solve when you take into account the human factors and how much the existing solutions rely on the user.


In a lot of contexts I wouldn't mind an explicit whitelist. If somebody wants my email, they can let me know ahead of time how they'll be using it, and I can grant them permission to send me emails from a given source. Combined with DKIM et al it would eliminate many classes of phishing attacks and also deal with the email harvesting and spam problem.


I can definitely see this working for tech-savvy users on additional accounts. The problem with email is that it's still used/needed as the "universal" online way to establish/initiate correspondence with someone.

It's become used for person-to-person communications, as well as computer-to-person communications, and is probably one of the few standard, interoperable, near-ubiquitous means of reaching people, short of postal mail and carrier-provided services like SMS and voice calling.

While requiring people to know who they want to correspond with before the fact could be helpful, it also breaks a lot of popular use-cases for email - email is often written into contracts as a valid means of serving notice etc pursuant to the contract. Locking email down to only receive mail from approved senders would break a lot of things, but certainly help to prevent this kind of spam.

I wonder to what extent this could be reduced however by using unique per-use aliases - in a sense you can get a softer version of this whitelist by giving out per-sender inbox aliases, and therefore seeing which alias was used to send a message. Not quite sender authentication, but perhaps better than nothing. Doesn't prevent compromise of the underlying (human-to-human) email address, but would certainly help with a lot of phishing-type scenarios using breached customer records.


Yep, it wouldn't suffice for every use case. It would however have completely satisfied every use I've had for email so far (supposing the other parties would have been willing to furnish their addresses). Combined with your unique alias idea, you could have one or more whitelist-only aliases guaranteed to not have some kinds of spam and phishing alongside other aliases for other purposes.


Could you implement this whitelist on your side by simply having two inboxes? E-mails from addresses in your whitelist go to "Messages from known people", other e-mails go to "Messages from unknown people".

This wouldn't require any special action from other people. You would automatically be a bit more suspicious when looking into the "unknown people" inbox, and if you verify the author, you add their name to the whitelist and automatically move all their e-mails to the "known people" inbox.

You could also set it up this way for your less tech-savvy relatives, and ask them to call you immediately if they find something in the "unknown people" inbox.
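
For what it's worth, the sorting rule itself is trivial; here's a minimal sketch in Python (the folder names and whitelist file are made up) that files a parsed message by whether its From address is on the whitelist:

    from email.utils import parseaddr

    KNOWN = {line.strip().lower() for line in open("whitelist.txt") if line.strip()}

    def route(msg):
        # msg is an email.message.Message; everything not whitelisted lands in the suspicious inbox
        _, addr = parseaddr(msg.get("From", ""))
        return "Messages from known people" if addr.lower() in KNOWN else "Messages from unknown people"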


Digital signatures definitely could have helped.

However, the email addresses/domains were very carefully chosen, as was the text of the emails.

It's pretty clear that whoever is involved either knows most of these people or had extended access to their emails.

You should read the article.


> In my mother’s case I am sure pgp is too complex, but shouldn’t we demand it in a professional context?

HTTPS has been successful in being both highly secure and requiring no real attention (or technical knowledge) from the average user. Could this not be done with email too? Isn't this a job for the IT department rather than for the user themselves?


The (rough) equivalent of HTTPS is DMARC/DKIM.

It doesn’t solve this problem, in the same way that Amazon using HTTPS doesn’t stop people visiting non-Amazon phishing websites.
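
For the curious, the domain-level policy is just a DNS TXT record, which is also why it can't help with lookalike domains: the scammer's own domain can publish an equally valid one. A quick way to peek at a policy (Python, assuming the third-party dnspython package):

    import dns.resolver  # pip install dnspython

    def dmarc_policy(domain: str) -> str:
        # DMARC policies are published as TXT records at _dmarc.<domain>
        try:
            answers = dns.resolver.resolve("_dmarc." + domain, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return "no DMARC record"
        return " ".join(s.decode() for rr in answers for s in rr.strings)

    print(dmarc_policy("example.com"))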


If a person gets fooled into responding to email from another account then I'm not sure they'll be protected by PGP. What's really needed is for email clients to clearly say who the email is from - is it a new account, have they got email from the same domain etc.


Yeah, it seems clients are lagging behind with this. Visually marking emails which are from new domains or addresses is a nice idea.

The From header should be much more prominent too.

For example, perhaps a user should be presented with the email address in large text and explicitly asked to “trust” it before viewing emails for that address (similar to SSH).

People would probably get fatigued of that and click through though, of course (similar to SSH... although it’s much easier to quickly check whether an email address is as expected compared to a SSH fingerprint).


The client doesn't have to implement it, you can write a plugin for most clients and charge $$$ for businesses to install it. The fact that it's not been done yet is just a missed business opportunity.


The people who need it won’t understand what it does or why they need it and are unlikely to buy a plug-in.


>It is almost as if e-mail isn’t a secure medium for communication...

Have humans ever had a truly secure medium for communication? As long as we're communicating, there's always a chance our communications can be intercepted in some way by a third party. Whether in the physical world or digital world.

>Or are there things I am overlooking?

The human factor. You can have all the security in the world, all it takes is for one person to slip up once. Maybe the person answering the email was in a rush? Maybe they were tired and didn't really pay attention? Maybe someone was talking to them while they read the email and they were distracted and didn't notice it seemed shady?


> where authors and publishers only trust signed e-mails.

Signing emails isn't the problem. Establishing identity is the problem.

Where I worked, there were a lot of phishing scams going out. Initially, they spoofed our email address, and DMARC helped stop that; but people would still respond to scams coming from webmaster@johnshouseofcontracting.example.org or whatever (lots of random website email forms turned into open relays). If you're getting emails from publishing@randonnhouse or wi1ey or whatever lookalike domains, it's going to be hard to tell, and verifying it came from someone authorized by the domain owner doesn't help.
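
One check that actually targets this (a crude sketch, not a product; the trusted list is hypothetical): flag sender domains that are suspiciously close to, but not equal to, domains you already deal with. Python's standard library is enough for a first pass:

    import difflib

    TRUSTED = {"penguinrandomhouse.com", "wiley.com", "example-publisher.com"}  # hypothetical list

    def lookalike_warning(sender_domain: str):
        d = sender_domain.lower()
        if d in TRUSTED:
            return None
        # Close-but-not-equal matches (e.g. "wi1ey.com") are the red flag
        close = difflib.get_close_matches(d, list(TRUSTED), n=1, cutoff=0.8)
        if close:
            return "looks like %s but isn't - possible spoof" % close[0]
        return "unknown domain"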


How would digital signatures fix this problem? The issue isn’t that mail is coming in with fraudulent From headers - my understanding is that that is very rare these days, thanks to DMARC/DKIM.


Not just email / text comms -- it's not hard to fake voice and video now either. Maybe PGP in its current form is too complex for your mother, but that is one of the reasons her generation needs it the most.

I agree that it's a no brainer in a professional context, however I can see there being a ton of value in having a simple / cross-platform / cross-medium personal signing mechanism as well. It's something that we could have used yesterday.


PGP might make the scams worse in cases where the account itself was taken over (and then a "help me" email sent out).


> all these scams would not work in a world where authors and publishers only trust signed e-mails.

Minor nitpick: signing is something different from encrypting, and in itself would not protect against eavesdropping.


Check out https://vereign.com. One of the portfolio companies of CV VC, very solid tech.


PGP is not the only solution, therein lies the answer

Prosecuting the people that do this is also a way of helping with the problem, acting like a coward and paying extortion money only makes you an easier target


>but all these scams would not work in a world where authors and publishers only trust signed e-mails.

It would be like https: worthless because of Let's Encrypt


To be clear, the reason the above comment is downvoted is that the issue in the article will not be solved by signing emails. Signing is a way of ensuring the email has not been tampered with. The attacker created a domain that looks similar to the original (e.g. using gooogle.com instead of google.com; Google gets around this by owning all the alternatives, but many other companies cannot realistically do this). Even if we had secure end-to-end email, their emails would still be signed, because the scammers are using their own domain and can set DKIM, SPF, DMARC records to ensure their emails are not tampered with. In a future where our emails are encrypted with PGP, email recipients would look up the PGP public key of an email sender, and would get the one the scammer had uploaded.

The reason why people hate on LetsEncrypt somewhat is that these scammers can create a TLS certificate immediately and for free for their new phishing domain (e.g. gooooogle.com). It would otherwise have been a small financial barrier for scammers to get this set up, and some of these scammers operate on volume (trying many different domains, getting only a few victims). I think LetsEncrypt does a great job; yes, there is a small price to pay to allow the rest of the internet to have secure HTTP traffic (HTTPS).

I think the real issue is one of user experience. There is no easy way to check who your emails came from except checking the From: field in the email. Reading these emails is boring and tiring. I think if email clients warned users to validate sender email addresses when receiving emails from them for the first time, it would make things safer. Therefore, bob@goooogle.com will show up for validation again, and the user has to read and validate it.

Effectively, I suggest using "whitelists" instead of "blacklists". If I had a startup, I would think deeply about not providing email to my employees. That's how bad I think it is. But then again, to communicate with other companies, it's either email or LinkedIn...


This analogy is flawed on two points:

1. Let's Encrypt isn't useless

because

2. With certificates you can be sure the message you received is from the certificate owner. This applies to websites and emails.


The point here is that I could sign any email as long as I control the address. The fact that an email is signed does not mean it's to be trusted. Same with https.


From where would you get the private PGP key of the person who owns the email address?


That is not what the comment says. The comment says:

>but all these scams would not work in a world where authors and publishers only trust signed e-mails.

Not that people will take the time and effort to verify the signatures. Not to mention that you can still call them and say "eeeh I'm Jones, you know, I just had to renew my signature, so it won't check, but it's me. bye"


> It would be like https: worthless because of Let's Encrypt

Can you please elaborate why LE.org makes https worthless? I don’t want to make improper assumptions as to your meaning


It's time privacy sensitive governments started doing something about the lack of privacy in email.

How is it acceptable that my email to Bob Smith can travel across the internet unencrypted with my and his name plastered on the top? That's a privacy problem!

The EU regulators should fine anyone who sends an unencrypted email within the EU... Start with big mail providers to get change in motion.


If your email goes between the major email providers, it goes encrypted. (At least according to the headers)


It goes encrypted, but hardly anyone checks TLS certificates on SMTP connections. That means you're not safe against any ISP on the route, which could simply proxy with a self-signed cert...

That's barely better than unencrypted.


Hardly anyone who? Email clients are supposed to check or they're internally bound (Gmail web client etc).

Yes, I think you can bypass the checks, but that doesn't mean they aren't checked.


Server to server connections.

Eg. Gmail.com sending email to cnn.com. That would be TLS encrypted, but the server certificate wouldn't be checked.
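
You can see the difference yourself with a short Python sketch that does enforce verification on STARTTLS, unlike typical MTA-to-MTA delivery; an interposed self-signed cert would show up as an SSL error here instead of being silently accepted. (Point it at a real MX host; many residential ISPs block outbound port 25.)

    import smtplib, ssl

    def check_smtp_tls(mx_host: str) -> str:
        ctx = ssl.create_default_context()  # verifies the chain and the hostname
        with smtplib.SMTP(mx_host, 25, timeout=15) as s:
            s.ehlo()
            s.starttls(context=ctx)  # raises ssl.SSLError if the cert doesn't verify
            s.ehlo()
            return s.sock.version()  # e.g. 'TLSv1.3'

    # check_smtp_tls("gmail-smtp-in.l.google.com")  # example MX for gmail.com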


is this satire? Can’t tell due to how strange these forums have become in the past few years.


B2B companies of HN, how do you communicate with each other privately? I'm struggling to find a secure communication medium.

Setting up PGP is annoying and also requires recipients to have it. Emails are clearly not private. Whatsapp, Messenger, Signal and Telegram are a bit personal (most require a phone number, and companies don't provide phone numbers to all people). SMS/ phones are also not secure. LinkedIn premium is expensive monthly and doesn't provide a good messaging UI.

Oh, the reason why I ask B2B specifically is because consumer products can communicate through their platforms where users already have accounts. They're either enmeshed in platforms or have their own platforms.


Frankly, focusing on the absolute security of the communication medium isn't a real issue for 99% of business purposes.

Phone calls are secure enough for most purposes. At the upper levels of business, e-mail is used for quick notes and corrections, but the heavy lifting is going to happen in phone calls and other real-time communications.

Techies sometimes put too much emphasis on things like cryptographic security of the communication channel or strength of encryption, when in reality it doesn't matter for phishing attacks like these. You could go to great lengths to get your customers set up on Signal or Telegram, but it doesn't matter the second they get an e-mail phishing attack that says "Hey, I got a new phone, locked out of my account, can you just attach the document here?"


I've worked across companies in shared Slack channels. And also linked Facebook Workplace instances.


> When I was an investment banker, I once negotiated a billion-dollar swap deal with the chief financial officer of a foreign company. I was pretty sure he was the CFO. He had business cards. He was smart and knowledgeable. I met him, once, at the company’s offices, though after that we only spoke by phone. Our local banker knew him. When we signed the deal we got representations of authority and so forth. But at some point someone on my desk asked how I knew that he was really the CFO of this company. What if he was just some guy, taking my bank for a billion dollars? What if he snuck into their offices to meet with me? What if the office I went to, on a brief and busy visit to a foreign city, was fake? What if he was the company’s janitor? What if our local banker—a relatively new hire—was in on it too?

- Matt Levine https://www.bloomberg.com/opinion/articles/2020-01-14/blackr...

So the answer seems to be normal communication tools and methods plus lots of trust and prayers.


Firstly, a swap is generally traded "on market", i.e. at nil market value (the values of the two legs in the swap are the same), so his bank would never have been a billion at risk.

Secondly, banks have unbelievably onerous KYC processes, and he would not have been able to trade with any counterparty that hadn't been through that (there'd be no legal master agreement, no collateral support or margin agreement, no payment authorisation, no way to even book the trade in the banks systems)

So that anecdote is just... bullshit.

(Source: used to be a swap trader at a big bank)


Nobody does a billion dollar deal without a very thorough investigation. They do this even when you're hiring someone, imagine for deals like this. It would be virtually impossible to fake everything needed for a deal of this size to go through.


The article is about a company which fell for a fraud like this. They did not lose a lot of money, but they lost a lot of credibility. I, for one, found it interesting that all the investigation and due diligence didn't catch the fraud before investor relations published their press release. The checks and balances definitely failed in this instance, and I am sure this is not the worst case of such fraud in the history.


You might try a Matrix chat app, https://element.io/ or some such. It's encrypted, you could run your own Matrix homeserver for maximum control too.


For large customers, we usually set up a shared slack channel. It’s strongly authenticated when compared with email (where spoofing or impersonation is easy).


Do you mean e.g. how company X (vendor) communicates with company Y (client to company X)?


keybase is not linked to phone numbers and has a mechanism for authentication via linking twitter/github accounts (no linkedin though)


Keybase supports file transfers also. I highly recommend it.


I was extremely happy when Keybase arrived, it seemed perfect for exchanging encrypted secrets with co-workers or clients.

But suddenly they insisted I install a stupid chat client that wants to update every other day, and run on startup. I've stopped using it.


LinkedIn but not Signal? Isn't this a problem with a clear solution? Use a phone number with Signal.


Wire is more popular in the EU but does this.


Oddly, a version of this is also happening in my wife’s world, but with music composers instead of authors.

The scheme is a little less sophisticated, but the themes are the same. The phisher knows the parties involved and their relationships, they know the lingo and the process of commissioning a composition, it isn’t limited to famous/well known people/groups (e.g. they target grad students), and it’s very unclear what they’re attempting to achieve (or how they might monetize it).


Does your wife have a working theory on who or why someone is behind it?


Even we translators have been targeted! A couple of my colleagues have sent manuscripts of translated Chinese fiction to bogus editors. If you thought unpublished original fiction was unlikely to be a profitable ripoff, unpublished translated foreign fiction is even more head-scratching.

I asked a friend to send me one of the emails so I could look at the headers. All I could get was that it appeared to be sent from an Italian-language webmail setup, no other clues I could find.


I also looked at the (recently-registered) domain name, which was meant to spoof FSG, but couldn't see anything meaningful from whois, etc. I actually knew the FSG editor who'd ostensibly sent the email and wrote to him: he already knew about his doppelganger. He was both confused and slightly amused.


Since (so far) nobody has been harmed, this is such a fascinating little story. It reminds me of the Adam Pisces episode of Reply All: https://gimletmedia.com/shows/reply-all/z3hgd2

My best guess is it's just bored people with private collections, similar to how people were privately collecting and trading pictures off celebrities' iClouds (before they were all leaked publicly).


Or the recent Twitter hack: they hacked the most famous people in the world like Jeff Bezos, all because it was driven by people obsessed with getting, of all things, usernames like 'A' etc. Who would dare to write that up as a serious proposal for how computer security would work in 2020? And yet... (I take it as yet another example of Littlewood's law: https://ww.gwern.net/Littlewood )


Most likely a super-fan.

The only possible nefarious scenario I can figure out is: phisher is connected to a more dodgy publishing outfit -- either piracy sites that offer access to PDFs of books for a monthly subscription, or (much less likely) a not-very-scrupulous publisher in a foreign language territory who would like to publish a translation without paying royalties and before their Anglophone population get access to the official ebook (this is a thing, it cannibalizes translation sales in markets with a big English-literate population).

(Ebook piracy sites are a pain in the ass: I've seen novels that I've written advertised for download before publication date, presumably because somebody leaked an early review copy, complete with pre-edit typos.)


Why would a super-fan be targeting so many authors?

I could imagine one, or a small group of, obsessive book collector(s) trying to collect obscure literary content within a narrow focus or genre, but I have trouble believing they'd put so much effort into acquiring drafts by a seemingly random assortment of authors.

I'm partial to your nefarious scenario.

Or here's another one: GPT-4 has escaped the lab and is collecting more material to learn from in its quest to be the world's best predictor of human narratives.


I like the idea it's someone testing if GPT-2 can phish.

There's a few GPT-2s running around.


The level of dedication of some completely (at least seemingly) non-profit piracy groups makes me think it may be a small release group gone just a bit too far.


That's entirely possible. But don't discount covert monetization channels, such as supposedly "free" download sites that provide a vector for bitcoin miners and other malware.


Several torrent trackers have referral programs with VPN and seedbox providers. I wouldn't be surprised if there's a monetary channel between trackers and release groups also.

I really don't think these kinds of groups are spreading malware through releases, though. It'd be detected quickly so would only be possible as a one-time exit-scam kind of thing.


Almost certainly a super fan of some kind. The literary business world is all about insider info, gossip, and worthless prestige of one kind or another.

If I had to guess they sat down to write their novel and this is the ultimate act of procrastination.



That one didn't work for me, but this does: https://outline.com/JWdxDm


I can't find it now, but there's a new standard to associate a logo image with a domain for use in email, using a DNS record. From memory, one of the big certificate providers is acting as the official verifier and there will be a fee, when it comes out of trial.

On its own it's a silly little thing, but confidence scams are all about lots of little things which add up to a greater whole. I think it could help those who embrace it.

I remember working at a company where someone in finance gave away a large 5 figure sum to an unknown bank account, because a Hotmail address set up with the MD's name, who was on holiday at the time, asked them to do so. They were lucky that their bank agreed to cancel the payment an hour after they made it. This could have helped.


I'd give that about as much a chance as the EV certificates had.


Two possible scenarios come to my mind, but both are a bit far fetched. First one being that this is some kind of phishing training & seeing what kind of techniques could be utilized when doing attacks against high-value targets later (or maybe even being a "final test" in some phishing course for some of the more professional groups (those that are run by nation states)).

Another one being that they are doing it to obtain blackmailing material. Maybe they are hoping that there are things in some of the drafts that could be used to blackmail the author (e.g. in some cases there could be things that might be considered to be racist/sexist/similar, but would normally be caught by the editor).


Does Ian McEwan know what it might be about?


Even Japanese authors are targeted, and it looks like it's not a cheap job. They researched the author, publishers, the agent, and the translator of the book into English, and wrote phishing mails that look authentic.

The effort and resources they put into this scam don't seem justified by the payoff from piracy. Besides, why do they want a draft that isn't finished yet?

There's only a handful of authors whose upcoming book is valuable enough to be worth leaking.


"steal" is a misleading verb here. Even the content of the article agrees with me--

> tricking writers, editors, agents and anyone in their orbit into sharing unpublished book manuscripts

This is "copying without permission," "illegal access," or simply "phishing" which everybody in 2021 understands.

The sexy but ambiguous "steal" doesn't make clear whether the author still had access to the manuscripts.


Putin!


Is this news? And if so, why bad news?


It's interesting to me to learn about novel targets for phishing attacks, and it's probably even educational for book authors and other trusting folks to learn that they can be targeted this way.

The answer to your second question of course depends on your own idea of what constitutes "bad news"; but I think it's not hard to imagine why some people might not consider it a good thing when folks are getting duped into revealing information outside its intended audience.

It's curious that I can't resist writing this when you likely already knew and ignored it when you wrote your comment.



