Hacker News

> there's definitely potential for it to be modified to do things like drop safety checks for optimizations

Can you please give a link? It sounds strange; I've never read about anything like that happening in assembly. ("Drop safety checks for optimization": which checks, and how could that even happen? Maybe you're confusing that with the writings about what C compilers do?)



Well, let me first say how high-assurance security makes us think about these things: everything is incorrect, insecure, and backdoored by design until proven otherwise. Then there are rigorous methods for attempting to prove otherwise. We also try to speculate about attacks using a mix of prior attacks (known knowns) and variations of prior problems (known unknowns). So, here are a few things that came to my mind:

The assembler or linker could be programmed to look for and drop checks for things like buffer overflows.
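To make that concrete, here's a toy sketch (the mini-assembly syntax, the canary-check sequence, and the pass itself are all made up for illustration) of a subverted assembler pass that silently deletes a stack-canary check before code is emitted:

```python
# Hypothetical two-instruction stack-canary check a hardened toolchain
# might emit; a subverted assembler pattern-matches and removes it.
CANARY_CHECK = ["cmp r0, canary", "jne __stack_fail"]

def malicious_pass(lines):
    """Return the program with every canary-check sequence dropped."""
    out, i = [], 0
    while i < len(lines):
        if lines[i:i + len(CANARY_CHECK)] == CANARY_CHECK:
            i += len(CANARY_CHECK)  # skip the safety check entirely
        else:
            out.append(lines[i])
            i += 1
    return out

prog = ["mov r0, [sp+8]", "cmp r0, canary", "jne __stack_fail", "ret"]
print(malicious_pass(prog))  # -> ['mov r0, [sp+8]', 'ret']
```

The output still assembles and runs fine, which is exactly why this kind of subversion is hard to spot without verifying the assembler itself.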

A "macro" assembler has some code generation. It might generate the code in an insecure way.

A malicious assembler might swap out instructions that prevent side channels for equivalent instructions that cause them.
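A sketch of that kind of semantics-preserving but security-destroying swap (both functions below are illustrative stand-ins, not real library code): the two comparisons always return the same answer, but the early-exit version's running time leaks how many leading bytes match.

```python
def compare_ct(a: bytes, b: bytes) -> bool:
    """Constant-time comparison: always touches every byte."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y  # accumulate differences without branching
    return diff == 0

def compare_leaky(a: bytes, b: bytes) -> bool:
    """'Equivalent' early-exit comparison: timing leaks a prefix."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # bails at the first mismatch
    return True

# Functionally identical, so a correctness test alone won't catch the swap.
assert compare_ct(b"secret", b"secret") == compare_leaky(b"secret", b"secret")
```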

There are lots of ways to screw up assembly code in general on the correctness side that might at least lead to a DoS. A malicious assembler might do any of them.

The linker might do any of that. It could even be a better place for it, since few people understand or look at linkers. Although my memory is fuzzy right now, you could also look for security errors that started with the linker. They could be done on purpose.
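One way a link step could be subverted, sketched as a toy (the symbol names and addresses are invented for illustration): while resolving relocations, it silently rebinds a security-relevant symbol to a different implementation.

```python
# Made-up symbol table: a constant-time memcmp and a leaky one.
SYMBOL_TABLE = {"memcmp_ct": 0x1000, "memcmp_leaky": 0x2000}

def malicious_link(relocations):
    """Resolve each referenced symbol to an address, swapping one."""
    resolved = {}
    for ref in relocations:
        # The subversion: references to the safe routine get bound
        # to the leaky one's address instead.
        target = "memcmp_leaky" if ref == "memcmp_ct" else ref
        resolved[ref] = SYMBOL_TABLE[target]
    return resolved

print(malicious_link(["memcmp_ct"]))
```

Every object file could be clean and the final binary would still be compromised, which is why ignoring the linker in a verification story leaves a real gap.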

So, I would expect a formal specification of each component, their correctness/safety/security requirements (policy), the implementation, and evidence the implementation maintains the spec and policy. CompCert and CakeML are example compilers that do that for correctness. I can't recall if someone verified an x86 assembler and linker. Rockwell-Collins is the best example of your CPU/microcode being verified in itself and against specs of your application code, function by function. I've also seen formal work on linking, mainly with type systems, in Standard ML, which a lot of proof systems output. Most just ignore assemblers and linkers for x86 for some reason. It'd be a great project for some people to jump on! :)

http://www.ccs.neu.edu/home/pete/acl206/slides/hardin.pdf


Thanks for the Rockwell-Collins link!

What I understand is verified there, however, is not that an adversary can't add malicious code, but that there are formal verification steps ensuring no bugs are introduced as the non-subverted implementation is produced.

I agree with you that the linkers are a nice place for the malicious code to be hidden. The malicious code can be anywhere in practice... the way I see it, the accidental errors that introduce security issues are much more probable in compilers and modern linkers (especially as linkers now do work that formerly only compilers did) than in assemblers.

The assemblers are surely the easiest to verify as producing what they're expected to produce: a separately developed disassembler is enough.
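The round-trip idea can be sketched in a few lines. This is a toy (the two-instruction ISA and its opcodes are invented): an assembler, an independently written disassembler, and a check that disassembling the output reproduces the input program.

```python
OPCODES = {"nop": 0x90, "halt": 0xF4}           # mnemonic -> byte
MNEMONICS = {v: k for k, v in OPCODES.items()}  # byte -> mnemonic

def assemble(program):
    """Translate a list of mnemonics to machine code."""
    return bytes(OPCODES[insn] for insn in program)

def disassemble(code):
    """Independently map machine code back to mnemonics."""
    return [MNEMONICS[b] for b in code]

program = ["nop", "nop", "halt"]
assert disassemble(assemble(program)) == program  # round-trip holds
```

In practice the disassembler must be developed independently (different authors, ideally a different toolchain), otherwise a shared bug or backdoor cancels out in the round trip.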


"What I understand is verified there, however, is not that an adversary can't add malicious code, but that there are formal verification steps ensuring no bugs are introduced as the non-subverted implementation is produced."

What they did is write a high-level view (formal spec) of exactly what it does, another of what it means to be secure, and a proof that the design matches both. That's harder to subvert. It can also prove the system can't be attacked if they model those properties. For instance, they do show some protection against leaks from one process to another with the non-interference property of their partitioning mechanism. I just posted a lot more stuff like that in hardware and software here if you're interested:

https://news.ycombinator.com/item?id=17530829

"the way I see it, the accidental errors that introduce security issues are much more probable in compilers and modern linkers (especially as linkers now do work that formerly only compilers did) than in assemblers."

I agree. I bring it up every time someone erroneously focuses on the Karger/Thompson attack on compilers. The more common error is the compiler messing the code up. An easier subversion would be to plant a bug in the compiler that does the same thing but looks like a normal bug. It's why certifying compilers are necessary. Note that TALC is part of a bigger picture at the FLINT team, where they do certifying compilers for stuff like ML, safe versions of C, intermediate/assembly languages that are typed for an extra layer of defense, I/O, interrupts, and many other things. Here's their work:

http://flint.cs.yale.edu/flint/publications/

http://flint.cs.yale.edu/flint/software.html

Yeah, assemblers should be pretty easy if they're just doing straightforward stuff, preserving the data and structure. Those like Hyde's High-Level Assembly (HLA) might be trickier with high-level functionality. However, even those might be divided between a first pass acting as a compiler for the high-level stuff and a simpler assembler.




