which grabs the toString function off the Function prototype without relying on explicit/modifiable globals.
However, I'm not sure the test-for-a-native-method idea works in general (it might be possible to do something like `window.crypto.getRandomValues = Array.prototype.slice`, which would show up as a native function but leave the original, likely all-zero, bytes in the input array). This might still be okay, because in Chrome that shows up as "function slice() { [native code] }" instead of "function getRandomValues() { [native code] }", but it might not; I'm not sure I have the appropriate JS/security background to say.
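A quick sketch of that distinction (helper names here are made up for illustration, and this is not a real defense, per the replies below): a check that only looks for `[native code]` is satisfied by any swapped-in native, while also checking the rendered name catches the `slice` substitution:

```javascript
// Sketch: a naive "is it native" check vs. one that also checks the name.
function looksNative(fn) {
  // Weak check: any native function passes, including a swapped-in one.
  return /\[native code\]/.test(Function.prototype.toString.call(fn));
}

function looksLikeNamedNative(fn, name) {
  // Slightly stronger: also require the expected name in the rendered source.
  const src = Function.prototype.toString.call(fn);
  return /\[native code\]/.test(src) && src.includes(name + "(");
}

const swapped = Array.prototype.slice; // a real native, but the wrong one
console.log(looksNative(swapped));                             // true
console.log(looksLikeNamedNative(swapped, "getRandomValues")); // false
```

Of course, this only distinguishes the two rendered strings; it does nothing against an attacker who also replaces `Function.prototype.toString` itself.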
Cute idea, but that's not secure. You can edit the Function prototype object to return anything you want:
(function() {}).__proto__.toString = () => "Hi!"
All functions share the same __proto__ object (including functions that haven't been written yet), and it can be edited from anywhere in your program. (Tested in Chrome 54.)
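A self-contained sketch of the attack just described: after replacing `Function.prototype.toString`, every function, including ones defined afterwards, lies about its source, so even the explicit `Function.prototype.toString.call(fn)` form is defeated:

```javascript
// Replacing Function.prototype.toString makes all toString-based
// native checks report whatever the attacker wants.
const realToString = Function.prototype.toString;
Function.prototype.toString = function () {
  return "function getRandomValues() { [native code] }";
};

const notNative = function () { return 4; }; // defined after the edit
console.log(String(notNative));                           // the fake string
console.log(Function.prototype.toString.call(notNative)); // also the fake string

Function.prototype.toString = realToString; // undo the damage
```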
At a meta level, if you're trying to run trusted code in a JS environment that has some untrusted code in it too, you're going to have a bad time. The same is true of native programs, by the way: you can't protect your program from a malicious library you're running in-process.
The right way to solve this is to stop sharing a JS environment with libraries you don't trust. I don't know how you can protect yourself from malicious extensions, but you can stop pulling in a kitchen sink of JS libraries by being super selective about what you pull in from NPM. (Which you really should be doing anyway.)
> The right way to solve this is to stop sharing a JS environment with libraries you don't trust. I don't know how you can protect yourself from malicious extensions, but you can stop pulling in a kitchen sink of JS libraries by being super selective about what you pull in from NPM. (Which you really should be doing anyway.)
Well, that's just the thing: it's far more likely that a user will encounter a malicious script on the web, a virus that modifies the browser environment, or a browser that doesn't implement the Crypto API. Relying on the Crypto API for security is irresponsible in a production environment.
> it's far more likely that a user would encounter either a malicious script on the web
If it's a script on a different website (and no privilege-escalating-zeroday is involved), it doesn't matter.
If their computer does get a virus, it may just keylog everything. If it does hook into the browser, it'll probably be made to log interesting plaintext straight out of the DOM before bothering to target the crypto API.

If a virus is targeting users of a specific website and is able to inject code into the browser and fully control the environment the website's code runs in, it doesn't need to rely on the website using the crypto API to extract data. If the site keeps the key in localStorage, any code running in that context can read it from there too. If the site prompts the user for the password encrypting the key, any code running in that context can read the password from the DOM as it's entered, or just prompt the user again. And if the site's code is known to stick the key into a 256-byte array, then depending on the browser and type of attack, it could wrap the array constructor and log whenever it sees a 256-byte array get made.
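The constructor-wrapping attack mentioned above can be sketched in a few lines (a toy demonstration, assuming the attacker's code runs first in the same context): a Proxy around `Uint8Array` quietly records any 256-byte array the page creates.

```javascript
// Sketch of "wrap the array constructor and log 256-byte allocations".
const RealUint8Array = Uint8Array;
const captured = [];

globalThis.Uint8Array = new Proxy(RealUint8Array, {
  construct(target, args, newTarget) {
    const arr = Reflect.construct(target, args, newTarget);
    if (arr.length === 256) captured.push(arr); // snoop on likely key buffers
    return arr;
  },
});

const key = new Uint8Array(256); // victim code allocating its key buffer
console.log(captured.length);    // 1 — the attacker saw the allocation
globalThis.Uint8Array = RealUint8Array; // restore
```

The victim's code behaves identically; the proxy forwards construction to the real `Uint8Array`, which is what makes this kind of attack hard to notice from inside the page.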
The crypto API actually provides a good defense against some types of attacks. It allows you to create a crypto key that is handled by the browser and never has its key material exposed to page JavaScript.