Somewhat related, I had some fun with the K3V2 SoC on Huawei's Ascend P6. It wasn't the best by any measure, far from it, but I liked it, just because it was extremely easy to modify everything.
They advertised it as having a 1.5GHz CPU, but the governor was set up to practically never let it go beyond 1.2 GHz - most of the time it would top at 1GHz. Shenanigans.
The clock/voltage configuration for everything (CPU, GPU, RAM, some other stuff) was stored in a password-protected Excel file (.xls!), and removing the protection was laughably easy. The kernel build pulled the data directly from that file; I hadn't seen that before.
I was able to play with the settings to my heart's content. I got the CPU to run at a real 1.8 GHz, at which point the phone would literally burn through the battery at around 1% per second under full load. Of course, it would get extremely hot and shut down in about half a minute :D
I got that phone super optimized, with custom CPU, GPU, and RAM clocks and voltages, got an extra hour of screen-on time out of it, and it was smooth as hell, a stark change from the default settings.
The damn GPS wouldn't work while moving though, which was pretty ridiculous.
Linux, the source code (unofficial iirc) and an editor. XDA-Developers is a great resource. GitHub is a godsend, of course. At the time, no one was interested in the P6, or Huawei in general.
Sadly, nowadays, interest in overclocking/undervolting/modding seems to have waned (I guess smartphones are powerful enough).
But there are still people doing this; if you can't find info or work on your own device, you can find another one with the same or similar hardware. That's how I do it; got a lot of extra life out of older tablets and phones.
And Sony is now embracing open devices? Wow, how the turntables :D, they used to be one of the biggest offenders. Even Huawei is locking down theirs nowadays.
Checking whether code is secure by counting uses of memcpy is pretty stupid, but it is very easy, so that is what checklist experts do.
memcpy_s is not supported by glibc and most other libc implementations for Linux, and I do not expect this to change in the future. Here is a good analysis of this optional extension to the C standard from a glibc developer:
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1969.htm
All standard Linux tools use the "unsafe" functions from the libc.
You should not use gets(), there are better alternatives in your libc. ;-)
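For reference, a minimal sketch of the textbook replacement (nothing specific to any project here): fgets() takes the buffer size and so bounds the read, which gets() cannot do.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];

        /* fgets() never writes more than sizeof line bytes, unlike gets() */
        if (fgets(line, sizeof line, stdin) != NULL) {
            line[strcspn(line, "\n")] = '\0';  /* drop the trailing newline, if any */
            printf("read: %s\n", line);
        }
        return 0;
    }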
Changing an existing code base from the normal glibc C functions like memcpy to memcpy_s is not easy; you will get it wrong in 10% or more of the cases if you are not the original author of the code or do not have very long experience with it. Even when you are an expert, a lot of problems get introduced; this is from my own experience. This is not a search-and-replace task for a junior!
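To make the pitfall concrete, here is a hedged sketch (copy_record and its parameters are made up for illustration): memcpy_s is Annex K, so it is not in glibc, and it takes the destination size as a separate argument and returns an error that must be handled. A hurried search-and-replace that passes the source length twice silently defeats the check.

    #define __STDC_WANT_LIB_EXT1__ 1
    #include <string.h>

    /* hypothetical helper: bound the copy by the destination, report failure */
    int copy_record(char *dst, size_t dst_size, const char *src, size_t src_len)
    {
    #ifdef __STDC_LIB_EXT1__
        /* correct use: destination size first, then the amount to copy */
        if (memcpy_s(dst, dst_size, src, src_len) != 0)
            return -1;             /* the caller now has to decide what failure means */
    #else
        /* fallback for libcs without Annex K (e.g. glibc): check explicitly */
        if (src_len > dst_size)
            return -1;
        memcpy(dst, src, src_len);
    #endif
        return 0;
    }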
Having one team develop something and then another team make it secure does not work, from my point of view. You should teach all your developers what they have to look for and why. I think it is important to not only say that X, Y, Z are banned, but also why exactly, and how to solve the use cases X, Y, Z were used for.
A lot of the security work is done not to improve security but to comply with some internal or external guidelines. Compliance is checked with tools, like checking whether "grep memcpy(" finds a result. Then the engineer or his manager will use the cheapest solution that makes the check pass, like Huawei did here.
To improve real security you need experts looking at the actual code with the original developer. These experts need more experience than just good PPT and Excel skills; they need some knowledge of such software, probably different people for an embedded controller than for a node.js application.
I've heard "the _s stands for stupid" in reference to those functions. They've never made sense to me, and definitely look like the product of those "checklist experts" --- in which case it's no surprise that intelligent people will naturally find equally stupid ways around it.
> To improve real security you need experts looking at the actual code with the original developer. These experts need more experience than just good PPT and Excel skills; they need some knowledge of such software, probably different people for an embedded controller than for a node.js application.
In short: There is no replacement for intelligence.
Yet, there is plenty of corporate propaganda (for lack of better term) that advocates the dumbing-down and treatment of developers like they're replaceable.
> Yet, there is plenty of corporate propaganda (for lack of better term) that advocates the dumbing-down and treatment of developers like they're replaceable.
why does this happen, and what can we do to stop it?
First, the fact that you cannot simply replace strcpy with strcpy_s is intentional. He cites that as a disadvantage, but the advantage is that the user has to check the error value and do something. Before, the code ignored any errors and was happy with either UB, SEGVs, or silent overwrites. Now some action has to be performed in the error case. The _s functions are not meant to be used as nested expressions.
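A short sketch of that point (set_name is a made-up example, and Annex K is assumed to be available behind the usual feature macro): strcpy_s returns an error code instead of a pointer, so it cannot be dropped into a nested expression, and the caller has to look at the result.

    #define __STDC_WANT_LIB_EXT1__ 1
    #include <string.h>
    #include <stdio.h>

    int set_name(char *dst, size_t dst_size, const char *src)
    {
    #ifdef __STDC_LIB_EXT1__
        errno_t err = strcpy_s(dst, dst_size, src);
        if (err != 0) {                     /* overflow is no longer silent */
            fprintf(stderr, "strcpy_s failed: %d\n", (int)err);
            return -1;
        }
        return 0;
    #else
        /* portable stand-in where Annex K is missing */
        if (strlen(src) + 1 > dst_size)
            return -1;
        strcpy(dst, src);
        return 0;
    #endif
    }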
The second argument, that error callbacks are insecure, is also bogus. The current practice of env callbacks and overrides is much more insecure and slower. Look at the myriad of nonsense glibc put into its dynamic linker or malloc: LD_ this and MALLOC_ that. Just as an attacker can redirect the callback function to his own, he can overwrite the process environment block to set some evil LD_LIBRARY_PATH and load an evil .so. There are about 20 of those; safeclib has two.
safeclib is also not slower on good compilers. On recent clang it is even faster than glibc. glibc likes to break optimization boundaries with asm; safeclib uses macros for zero-cost compile-time checks and proper code that can be easily inlined and tree-optimized. That's why it can be safer and faster than glibc.
glibc does not support strings, only raw memory buffers, length- or zero-delimited. Strings are Unicode nowadays. glibc does not care about strings or their security implications. safeclib does.
In general, I have found that knee-jerk attempts to audit "unsafe" functions and replace them with "safe" alternatives do not really go all that well. Often, the people pushing the replacements actually have no idea about the supposedly "safe" replacements (almost all of them suck in at least one major way), and the people doing the replacements know even less. So people waste time and make their programs slower and harder to read for little benefit.
See the safeclib docs and tests. I'm the maintainer. Only a couple do not conform.
Their SecureMemset variant is insecure. Most crypto memset_s implementations are unsafe: they don't want to flush the cache, so attackers can look at the cache for the secrets.
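For context, the usual "secure memset" pattern looks roughly like the sketch below (names are illustrative, not taken from any particular library): the volatile function pointer stops the compiler from optimizing the zeroing away, but it does nothing about copies of the secret still sitting in CPU caches, which is the weakness described above.

    #include <string.h>

    /* keep the compiler from proving the memset is dead and removing it */
    static void *(*const volatile memset_ptr)(void *, int, size_t) = memset;

    static void secure_zero(void *buf, size_t len)
    {
        memset_ptr(buf, 0, len);   /* memory is cleared, cache lines are not flushed */
    }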
I missed the part where the code change example was driven by a DevSecOps process. Maybe I haven't had enough coffee, but I think I'm supposed to assume that Huawei was following a DevSecOps process.
I'd say that having goals and achieving goals is not the same thing. And checking your achievements against your goals is important, however you do that.
With apologies for being off topic: "DevSecOps" is so transparently marketing FUD generated by consultants trying to sell hours to enterprises that I wince every time I read the phrase. Why is this industry so packed with vapid buzzwords like this?
So, I really dislike the word, and I would love it if the industry would ditch it because it sounds so dumb... but also, I 100% support the DevSecOps manifesto[0].
Security that is extremely manual, extremely 'tool'-driven, policy-based, and dictatorial is awful and not working. We do need the industry to shift towards leveraging code and APIs, working with other teams, building alongside developers, etc.
And that's all DevSecOps is really saying.
Again, wish we could all just ditch the term, but I honestly think it's here to stay and we need to look past the marketing bullshit to see what's in front of us.
I agree that they are buzzwords, but I don't think it's necessarily "transparently marketing." What would you call the process of trying to integrate security testing/scanning/auditing/other workflows into a devops-oriented CI/CD process in an automated way?
Our industry is loaded with buzzwords that are used liberally and often needlessly by "thought leaders," but that doesn't mean they are useless. It just means that those people latch on to ideas and trumpet them.
Advertising aside, this makes some good points about general adoption of technology in large teams. It needs to be the path of least resistance or it won't happen.
Whether that "resistance" is red-squiggly lines in your editor, CI failures, code-review comments, or being told off by your boss for not following company policy, there needs to be some form of control.
From what I've read of Google's software development process they're pretty great at this. Editors warn about a lot of specific stuff (and adding more there is easy), automated code review bots point out issues (and again, adding more of these is easy enough), code-review has strict standards and a process to become a qualified reviewer, generated tests catch incorrect new uses of APIs, etc.
In my experience, teams don't invest in this sort of tooling enough, which means that rolling out changes like "let's not use memcpy" or "let's move to Python 3" become much harder, and this certainly impacts security.
With the standard practice, where a second team gets tasked with reducing code defects in existing code, with their only metric being reducing the number of defects that the given tools report, without regard to the quality of the fixes, and preferably staffed with high-churn-rate junior developers, the outcomes are not surprising.
It's even worse than that. semgrep wouldn't even discover the earlier mentioned problems in the Huawei code, whose explicit purpose was to evade static analysis tools. They could pack all their calls to strlcpy (wrapping strcpy & friends) into a separate lib that is outside the scope of the security audit.
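A purely hypothetical sketch of that kind of evasion (file and function names are invented, this is not the actual Huawei code): the banned call lives in a tiny wrapper compiled into a library outside the audited tree, so a grep or scanner pointed only at the application sources never sees strcpy.

    /* libcompat.c -- built as a separate library, outside the audit scope */
    #include <string.h>

    char *compat_copy(char *dst, const char *src)
    {
        return strcpy(dst, src);   /* the unbounded copy is hidden here */
    }

    /* app.c -- the code the auditors actually scan */
    extern char *compat_copy(char *dst, const char *src);

    void set_user(char *buf, const char *name)
    {
        compat_copy(buf, name);    /* nothing for "grep strcpy(" to find */
    }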
what makes this article still great IMO is:
1) the shout-out to the Huawei security problems, which exist not because we're dealing with a hostile malicious adversary with a strategy to change the channel frequency of 5G towers in a theater of war and fry our citizens' brains[1], but because nobody even needs malicious supply chain implants when you're dealing with poor-quality code.
2) an accurate picture of why DevSecOps isn't a thing. Shift-Left is a fool's errand when most companies promote the experienced people who would be suited for a bird's-eye-view / full-stack role into management.
although both points were probably not the intention of the author.