Unbelievable. Yet again, we have a post on finding the x86 alternative that's most FOSS-friendly. Yet again, the author is unaware of or ignores the only architecture that's open, has GPL cores, and has an ecosystem. That's SPARC. Oracle's T1 and T2 cores are open-source to study. More relevantly, Cobham Gaisler's Leon3 hardware is dual-licensed under the GPL and a commercial license. The Leon4 is quad-core. The SPARC ISA is open. Open Firmware exists.
So, why is SPARC left off in all these analyses? It's right there, ready to pick up and deploy. More open, easier to acquire, and more trustworthy (as far as licensing goes) than a POWER chip, although slower for sure.
As far as I know, Oracle has not made any SPARC intellectual property available since the acquisition of Sun. The T1 and T2 lines were emphatically Sun-era products.
That's true. I'm also against buying Oracle's I.P. because they're too scheming and sue-happy. I'm listing those to show the SPARC ISA has a series of implementations competitive with x86 on the high end. It's actively developed and badass rather than dead. That's all.
No, that's not how open-source licensing usually works.
Assuming Oracle owns all the IP rights (having purchased them from Sun), they aren't bound by the terms of the GPL. The GPL grants certain permissions to others if they comply with its terms, but the person who offers the license doesn't lose any rights they already have. They have no obligation to keep successive generations of derivative products open source.
True, that's even spelled out in the GPL itself (that's from GPL 2, but GPL 3 has similar content in section 9):
> 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License [...]
The copyright owner obviously doesn't need the license to have permission to modify the code, so they're not bound by it.
If all of the copyright holders agreed, I think they could relicense under any licence. This would be especially straightforward for any revision for which Oracle is the sole copyright holder.
There could be a perception problem there. SPARC has been open for a long time, with multiple vendors making SPARC chips. There are no restrictions except, IIRC, a $99 fee to use the trademark for a SPARC-compatible chip.
But the SPARCs you mention have their drawbacks. LEON is not that competitive in the high end (in-order, single-issue, low clock frequency) and T1/T2 are only cores (i.e. without interesting "uncore" stuff) and not that good as general-purpose, "desktop-like" CPUs.
I have much higher hopes for RISC-V; the community is really booming and the architecture is better than SPARC.
I say this as a former Gaisler employee and SPARC proponent :-)
"But the SPARCs you mention have their drawbacks. LEON is not that competetive in the high end (in order single issue, low clock freq) and T1/T2 are only cores (i.e. without interesting "uncore" stuff) and not that good as general purpose "desktop like" CPU."
There are definitely drawbacks. I've just not seen interest in the embedded sector of FOSS for SPARC, even with open cores. I wouldn't argue stuff like Leon4 in its current form is suitable for replacing a Core Duo or anything. Yet the fact that it's suitable for many apps but ignored for all apps by FOSS, in favor of proprietary MCUs/CPUs, might reveal a problem on their side.
As far as RISC-V goes, the community is booming and I have high hopes for them. Maybe they'll make something. My recommendation was to create a Pi-like board with a RISC-V SoC by licensing Leon3 or Leon4, replacing the SPARC components with RISC-V, and getting the rest without effort. I think we could do that anyway, given it's designed for easy configuration/modification. In parallel, continue developing clean-slate replacements. That gives us a rich, interim product to use, with full FOSS down the line. What do you think of that idea?
I agree that LEON is overlooked in the MCU market.
Wrapping one or more of the RISC-V cores in GRLIB is something I think would benefit both Gaisler and the RISC-V community, and something I have thought of doing myself, if only I had the time!
That's about half the peer review I need on that, given your background. Next I need a SPARC opponent with a HW/SoC background giving the same recommendation, haha. Might send it to some of the academics.
Another part of my plan was to get academics to build and release into the public domain the source/Verilog/whatever, so we can benefit from their cheap EDA licensing and shuttle runs. Pick the bare minimum I.P. we need, like DDR or PCI, to get SoCs working. Slowly crank them out at many universities to eventually arrive at a platform with ASIC-proven components. Then startups can just do integrations with whatever little part is custom for them. Much cheaper. Also, I think analog academics doing open cell libraries would be a good idea at 350, 180, 90, 45, and 28nm. As money comes in, we can just shrink from one tech to another using pre-existing I.P. or cells. People could probably use the Qflow OSS ASIC flow at 350nm (maybe 180nm) with little or no commercial tooling.
Always looking for HW people's review on these things. What do you think?
Academics should absolutely open source their stuff to a larger degree and contribute it to a common open-source community. I don't think LEON/GRLIB will be that community, but it may be a part of it. OpenCores did not succeed, but I have hopes for what's cooking around the FOSSi Foundation/LibreCores.
Most academic shuttle runs are still at 90nm, or bigger. There are a few reasons for this: cost of runs, cost of tooling (hundreds of k), and the fact that your assumptions about transistor action and modeling are exponentially more complicated at advanced process nodes.
Academics also sign NDAs about the processes they use, and can only make certain things available; the most open is probably MOSIS, but that's absolutely no good for advanced nodes.
I'd say throw low power and advanced anything out the window, demonstrate a working chip, then look for funding to advance it.
I'd agree that most runs are at 90nm or above. Yet, the rest is confusing given I have quite a few papers with competitive stuff done at 45-65nm with some at 28 or 32nm.
So why do you say to forget about it, or MOSIS, below 90nm if academics are getting working chips done that low?
They sign NDAs for the process and get hundred-thousand-dollar layout packages at academic prices.
If you have a few spare million dollars, you still can't necessarily release a lot of data due to the NDAs - usually they give you models for the processes that are proprietary (and they invested a lot in developing, and so will consider any breach an act of war).
Nick, the larger context of this whole issue is defense. So on one side there are the Five Eyes governments wanting it this way. On the other side (and probably very interested in 100% security), you might have various countries supporting terrorist organizations, terrorist organizations themselves, crime syndicates, Russia, China, etc.
Doesn't this context hint to us that 100% security would be much harder than creating some design and manufacturing it using standard fabs?
There won't be 100% security because the underlying physics fights you and our field is too new. The best we can hope for is making attacks hard and physical. There's great work in secure HW/SW architectures that should knock out about all the SW stuff with effort. Details are published in all kinds of CompSci publications. HW, too, as far as implementing it correctly with some security properties. The rest, especially tamper-resistance, is still in its infancy as far as having stuff that actually works.
Now, what we're talking about in this thread is having an ISA, chip implementation, firmware, and SW stack that is not a black box and is under your control. Preferably without built-in, convenient spyware. Mainstream FOSS users are currently so far away from this that it's a reasonable, interim goal. So, I had to bring up SPARC as an addition to the list that has side benefit of reducing legal risks.
OK, maybe that could work. But what about legal risks? Extra-legal risks (like vanishing in the dead of night)? Soft risks: how would the wife of someone who is just a customer respond when guys in black suits come to her home?
Or if your method works so well, are you sure TSMC/Samsung will even accept you as a customer?
Because it doesn't seem like something that could scale without the legal/political side, and that's really much harder than the tech (which is hard, no doubt).
Many big players have vested interest in hardware platforms that are not tampered with out-of-the-box, or open to easy tampering, by their adversaries.
The Chinese have an interest in having a hardware platform that doesn't have NSA code baked into it; the US government and major US corporations likewise want hardware that doesn't phone home to Unit 61398. The Russians don't want either but probably have their own ambitions. Etc.
I think that in the next few decades it will become quite accepted that you choose your platform based on who your perceived "adversary" is. If you're concerned about the NSA, you buy a system that's Chinese from soup to nuts. If you're concerned about the PLA, you buy from a vendor with the US Government seal of approval.
It remains to be seen -- and in truth, I am somewhat pessimistic -- about the availability of a hardware/software ecosystem that doesn't require compromise. Hardware fabrication is a capital intensive industry, and capital intensive industries are pretty vulnerable to coercion by the governments in which all their capital equipment sits. ("That's a real nice chip fab you have there. It'd be a shame if something...happened...to it. Maybe you want to reconsider your offer to help us out?")
An open architecture that you could get from any number of vendors, and perhaps use to keep the vendors honest, would be a huge step in the right direction, though. But the underlying problem is extremely hard.
> Hardware fabrication is a capital intensive industry, and capital intensive industries are pretty vulnerable to coercion by the governments in which all their capital equipment sits.
If the spec is open then it should be possible for a fancy lab to verify that the hardware is manufactured to spec, right? So if you have it manufactured in Taiwan but then have random samples verified by labs in the US, Japan and Europe, defectors could be detected. Then the manufacturer would have to risk destroying their business by getting caught inserting a backdoor.
All existing SPARC hardware is very old at this point and has horrible energy efficiency and poor performance compared to the other options, including POWER8 and ARM.
This is possible. I wonder how much of that is its design/I.P. and how much is what process node it's currently on? A port of proven I.P. with existing ecosystem to 28-65nm that ARM and RISC-V are using might fix a lot of that.
Those are pretty neat. Didn't know about the Apple many-core. Far as OpenFirmware, I think it should be mandatory along the lines of something like First Sale doctrine. If we bought a device, we should be able to control its use by law. We can't do that with software due to copyright. That implies an open, mandatory firmware available that lets us load our own software in.
I knew you'd like that. The temlib is impressive because of its completeness, though it seems like a hobbyist thing, maybe useful for retrocomputing and software archaeology ;) But this comment http://temlib.org/site/?p=567#comment-210 makes me realize that the design doesn't even fully utilize the almost EOLd Spartan-6 (a low end one, even in its heyday). Now imagine what could be done in something new? Combined with the Utleon3 implementation of the Microgrid concept from http://svp-home.org
On something like this, for example: http://www.achronix.com/products.html ?
AFAIUI this would smell like Soft Machines VISC, only better, because FREE!
"makes me realize that the design doesn't even fully utilize the almost EOLd Spartan-6 (a low end one, even in its heyday)"
Yeah, it's impressively efficient. Adds more evidence to our argument that SPARC implementations can be technologically competitive in efficiency with ARM, etc.
"AFAIUI this would smell like Soft Machines VISC, only better, because FREE!"
It could happen. Achronix's FPGAs are badass, too, hitting up to 1.5GHz. Their dev boards are actually cheaper than Oracle's SPARC servers, too, with the added benefit of being able to put custom logic for accelerators in there with the SPARC I.P. I haven't studied much of VISC, though, so I have little comment there.
I'll comment on those other links later tonight as I'm off to do some more paying work. :)
And then there's Oracle, their badass chips, and their evil ass lawyers. We can stay away from all that. SPARC is better and safer than Oracle but very importantly SPARC != Oracle.
I'm pretty sure it's the cathedral model. I'm not even sure that they're doing open source from Leon4 onward, as pretty much nothing happened with the GPL'd Leon3 and GRLIB. Comp Sci people and companies doing rad-hard space apps are still getting it and building on it.
Best way to deal with them is to straight-up license their tech for a Pi- or router-style board. Then fab, assemble, and sell that joker. That gets the ecosystem going. CompSci people doing CPU's or RISC-V work can keep building reusable components both can use. Then we just pay for the integrations.
I've done school-size CPUs with http://www.clash-lang.org/ --- it would be fun to convert someone's "real-world" design into Haskell with it. 'Twould really show off that order of magnitude code size reduction :).
I suppose cathedral vs bazaar doesn't affect that at all, but experience has ingrained in me "source tarball ==> won't easily build" biases.
What is more unbelievable is the Management Engine itself, not a nitpick about a platform being left out of the list of alternatives. It did not seem like he was creating a comprehensive list, just a first attempt.
The Management Engine is the result of a steady stream of changes, enhancements, proposals, etc. going back probably a decade. There was demand from the business and government sectors for easier repair/management and better security, and for lock-in from the software/media segments. Consumers largely were apathetic and stayed out of those discussions, as usual. However, there was demand among some of them for cheaper repairs and better malware protection. The Management Engine was one of the results of all that.
One could see it coming years in advance. Matter of fact, I fought solutions like that in favor of instrumented, robust coprocessors that did the same job. They could cost $20-30 more. They could even be on an embedded PCI card that also did I/O offloading, firewalls, and security monitoring with a secure RTOS. Many extra benefits to justify the extra $20-200 depending on form factor. Yet people wanted a dirt-cheap, integrated solution.
I'm not sure I know what you mean by ABI here. ABI in this case, to me, would mean Application Binary Interface, i.e. the C ABI that's defined by the platform and not the processor.
A number of architectures have published standard ABIs. ARM, PowerPC, MIPS, Itanium are all in this category. In some cases these are explicitly embedded ABIs (sometimes EABI).
For ARM, all major OSes I'm aware of use the ARM EABI2. (Note both the Linux kernel and gcc support other ABIs, so there is a real practical choice here.)
For PowerPC, at least all the little-endian 64-bit work for POWER8 has been done targeting the standard ABI. (I have no memory of whether big-endian ABIs for PowerPC follow the standard.)
"Oracle's T1 and T2 cores are open-source to study."
If you had to pick an Oracle (Sun?) T2-based system to purchase off of eBay, with the intent of using it as a "more free, more open" system, what would you buy?
I wouldn't. I'd use Gaisler's immediately because it's fully open and already FPGA qualified. I'd then buy a good FPGA board. Then I'd run it on there. It would probably run like a multi-core version of my old Pentium II. Yet, I programmed, hacked, gamed, and so on with it. Later, I'd put it on an eASIC Nextreme or actual ASIC if money came in for better performance, power, and unit pricing.
"I'd use Gaisler's immediately because it's fully open and already FPGA qualified. I'd then buy a good FPGA board. Then I'd run it on there. It would probably run like a multi-core version of my old Pentium II."
Sorry, let me clarify ...
Pretend you have three kids. But at the same time you'd like to tinker with a fully open system from loader on up.
Is there an old Sun SPARC that would make rms happy that I could buy on eBay?
I think the last generation of SPARC-based workstations in wide production were the Ultra 45s. They were made until 2008, according to Wikipedia [1]. They sell for surprisingly high prices [2], for an almost-decade-old computer, on eBay.
You could probably get an old Apple PowerPC-based system for considerably less than that, and a LibreBoot-compatible x86 system for even less, but they do exist if you wanted to play around with the architecture.
[2]: See eBay item 121411279863, which is an Ultra 45 with one 1.6 GHz SPARC, 2GB RAM, and a 250GB HDD for almost $2k, asking price. Not sure if that's a realistic ask, but it's what they want for it.
There is nothing open about Ultra 45 workstations in the context of this thread (it uses Open Firmware, but that's about it).
Note that Ultra 45 workstations are extremely slow, much slower than you expect. They were very slow even when they were new. Think Pentium 2 performance.
There used to be a lot of competition between several types of RISC machine and several x86 vendors. I know about the consolidations on the x86 side. I'm not sure why SPARC lost favor versus the others, though. I wasn't able to afford RISC workstations back when all that was happening. It would be nice for one of the older folks to chime in on what made SPARC unpopular back then.
It was the cost/performance ratio, not of the CPU itself, but of the entire thing, including software. Sun sold highly performing, but extremely overpriced hardware that rode in on the hype it built around the architecture but failed to deliver on flexibility and bang for the buck.
I was involved in launching an ISP where the whole shebang ran on Sun boxes, and which was over-dimensioned to the point where I once stepped into the data center and found a waist-high box full of E250/E450... Feet. The purple plastic ones you had to remove to rack mount the things.
Two years later we'd shrunk down most of the whole thing (except the storage bits, where SPARC still had good performance) to a few racks of Compaq and Dell boxes that were vastly cheaper to maintain (both because they were cheaper, period, and because we didn't need to wrestle with Solaris and the compilers of the time to get stuff working on them).
This was back in 1999 or so, and I never saw a SPARC system in production after 2005 (until a few months back when I visited a telco customer who still swears by them for a very specific purpose).
I still have one of those plastic feet on my desk at home, as a reminder of the folly of buying single-vendor solutions. It sucks as a paperweight. :)
Thanks for the enlightening comment. That makes plenty of sense. It's part of the reason I didn't try to buy one: price/performance numbers just didn't make sense.
Then you are missing the point he is making. LEON is a GPL implementation of SPARCv8 which you can download and use in an FPGA, tape out your own ASIC or buy one of the existing SoCs built with it (might not be that easy...). In other words, more open than the alternatives listed.
Exactly. There's a ton of implementations, ranging from free cores for FPGAs, to S-ASICs from eASIC, to embedded CPUs from Gaisler, to cloud servers from Oracle, to mainframes from Fujitsu & Russia. One can also clean-slate a SPARC chip without legal fears. Unlike POWER, ARM, and MIPS. The ISA, its docs, a firmware standard... all of that is already open.
So, why is it not on the table for... anything in FOSS? Doesn't seem rational. Even a bit hypocritical given vendors like Gaisler and nonprofits like SPARC International have met FOSS halfway or almost wholly. Unlike the others that sue FOSS developers.
This makes me wonder why we don't see sparc chips in things like routers or other hardware that doesn't require lots of binary compatibility from 3rd party software.
Because the chip-makers and chip buyers are using ARM and MIPS in power-efficient SOC's. They could do the same with SPARC. They just didn't for whatever reasons.
As far as power efficiency goes, kristoffer might be able to chime in, as it's not in the data sheets for Gaisler. That's suspicious: either the numbers are bad or they leave it off given it's meant for customization. Anyway, the Leon4 uses 30,000 gates per core. Same ballpark as ARM and MIPS. Power use should be similar, or at least acceptable, if comparing ARM, MIPS, and Leon on the same ASIC process. They often do rad-hard versions given that it makes the chip resistant to SEU errors. That takes plenty of extra circuitry. The numbers I have for those, the high end, are 15mW per MHz for the rad-hard Leon3, and for the rad-hard quad-core Leon4 the max was 6 watts per one slideshow.
I'll take 6 watts of consumption in a router in exchange for a quad-core, IOMMU-enabled, fault-tolerant, open CPU. What about you? Would 6 watts kill it for you?
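For anyone wanting to sanity-check how those two numbers hang together, here's the rough arithmetic I'm doing (a back-of-the-envelope sketch; the 100MHz clock is my own assumption, not a datasheet figure):

    # Rough power estimate from the figures above (sketch only, not from Gaisler docs)
    mw_per_mhz = 15        # ~15 mW per MHz for a rad-hard LEON3 core
    clock_mhz = 100        # assumed clock for a rad-hard part
    cores = 4              # quad-core Leon4
    watts = mw_per_mhz * clock_mhz * cores / 1000.0
    print(watts)           # -> 6.0, in line with the ~6 W slideshow figure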
Power consumption is of course very much dependent on the chosen fabrication process and SoC configuration. A LEON3/4 core is comparable to something like an ARM Cortex-M7, and it is not the ISA (when comparing ARMv7 vs SPARCv8) but the implementation that will affect power most. LEON is quite small and power efficient.
The node selected for fabrication does not equalize power consumption. You could go to the same node with another, inherently more power-efficient architecture and gain even more oomph per watt. Power efficiency stems from the architecture itself; the manufacturing process is a red herring (and a costly one). What you're saying is pretty much like "a seasoned bodybuilder would kick a white-belt karate practitioner's ass, so it's clear that bodybuilding is better than karate."
Compare things that are alike. If you take a manufacturer (say: TSMC), pick its node (say: 16nm FF+) and you decide on a package (physical manifestation of RTL primitives in the silicon) you get better performance per watt on one architecture over some other. ARM and MIPS are inherently very power efficient. You can't just take SPARC and make it more power efficient than these two. It doesn't work like that.
It's also not true that the ISA doesn't matter. The ISA impacts bandwidth requirements heavily. This in turn impacts latency and latency hiding, cache requirements, and many other things. In fact, data transfer is typically as costly as (if not more expensive than) computation. Getting data to all the right places on schedule eats power like crazy. This is exactly why ARM has Thumb. It's not like the core internally does different things than it would with a wide ISA. It's just that stuff's more densely packed, which helps tremendously.
Which brings me to my last point. There's an open architecture that's quite nice. It's SuperH (or SH2 in its open-source form), which in turn is what ARM's Thumb is based on. It's not perfect, but it's pretty solid. Omitting it in the OP makes me think the author isn't very thorough with his research. But everything has to start somewhere. ;)
I said the fabrication process AND the RTL architecture matter more than ARMv7 vs SPARCv8. They are both quite nice RISC architectures. Nothing in either one is especially power hungry.
Sure, you have Thumb; that saves a little on memory bandwidth, which is good. But nobody uses it anyway, and you might run slower, so you can't get to sleep as fast.
I like the J2 (the open-source SH) project as well, but it doesn't have an MMU, which rules it out for anything but simpler embedded projects.
Remember, I mentioned the SOC vendors as well. They currently license MIPS and ARM. Many choose MIPS due to cheap license. ARM's license, royalties, and restrictions are ridiculously expensive. SPARC has a cost advantage over it. So, once again, it's neither energy nor costs that are reasons they chose MIPS and ARM over SPARC.
I'm not so sure: because ARM processors are currently sold in large volume (if not for a technical reason, then by momentum), they can be made cheaper by economy of scale. That doesn't mean it has to stay that way in the future, but currently this seems to be the case.
Now there's a good argument. Here's the real value, though, straight from ARM themselves: the ecosystem. They've built a whole ecosystem of boards, firmware, software, everything around ARM you get when you license their tech. It might be cheaper with economy of scale as well although licensing and royalties have to factor in. I'd default on MIPS there since they can be up to 10x cheaper than ARM. But yeah, a Freescale iMX ARM was like $4 per 100 units last time I looked it up.
So, it's mainly the ecosystem with companies and FOSS people wanting to benefit from what's already there instead of improve FOSS HW ecosystems. There's currently, but not indefinitely as you said, a cost advantage for the mass market SOC's as well for PPC, ARM, MIPS, and possibly SuperH.
Basically, the things mentioned in the text have made free-software BIOSes and firmware impossible; some of the free-software projects that exist now are mostly "binary blob loaders", with more binary blob than free-software code running.
There is some good analysis on why even Intel couldn't fix this if they wanted to, unless they stopped shipping some features entirely: their Intel ME system relies on a couple of pieces of proprietary third-party code whose contracts with Intel explicitly prohibit Intel from ever letting anyone see the source, or the keys needed to sign it.
Also, the Intel ME can't really be trusted. The code is not really reverse-engineerable, and it works as a full second OS of sorts; it even has its own JVM running. If someone somehow decides to inject spy software into it, you will never know. I also assume that the first destructive virus to latch onto that stuff will take the world truly by surprise, depending on when it triggers (for example, if it spreads silently but triggers the destructive payload on a specific date).
Also, these features can be abused to manipulate the market itself, for example by intentionally making the hardware underperform and then selling "superior" hardware whose only difference is some software.
I think you're supposed to be able to distinguish the message from the medium. Certain styles can definitely make that harder, but ultimately if you can't examine an issue by the facts presented, the failure falls on you, as do the consequences.
To be clear, I also think the referenced bit is childish and detracts from the message. I just don't think that should affect your belief in whether it's important.
If a piece is written with a confusingly inappropriate tone for the subject matter, you can't solely blame the reader for being confused since it was the expressed intent of the author to instill that state.
Well, it's a wiki, so "author" is very loose (and when I checked at the time of my original comment, the change to add some of that verbiage was the most recent change, if still quite old). Ultimately, much of the information on the internet is presented without reference, so tone is the least of our problems. We need to be able to read what is being presented, and decide whether it's important enough to use that we should verify it. In this case, the tone shifts, but the message is along the same lines (the ME is your adversary), if very crudely done.
I do think you have a point though. It's not entirely up to the reader, there is a minimum threshold of clearly communicating facts that needs to be met by the author. But I don't think it's safe to say something that's unclear in tone means it was the expressed intent of the author to cause confusion. Humor can add quite a bit to an argument if done right, as humor often has the ability to cut through some of our preconceptions. Humor done wrong might be confusing, but that could very well be unintentional.
It seems there are just some basic beginning steps on the site. It's far from a fully disassembled ME. So it seems we still don't even know what's inside the ME.
Sounds like rich, fertile ground for the NSA, KGB, and other state agencies. They could be deploying such code right now and I'm not sure we would know it.
Not entirely---it's still around in Belarus. And Transnistria and South Ossetia, too, though those may be more of imitators than actual remnants of the original.
That would be far too provable and suspicious, since it'd have to be done 24/7 and would rule out the heisenbug route.
If you were to do such a thing, you'd do it by making the machine overclock itself too much occasionally (after the 'artificial use-by date' has passed), thereby incurring physical damage at random intervals. Although really, it would be easy to do without a blob, too, if you're doing the design of the physical chip.
I'd think it would be easier and more profitable (more convenient for "customers" than buying new hardware) to sell the "speed unlock" solution, much like cryptolocker does with your personal data.
"The ME firmware is compressed and consists of modules that are listed in the manifest along with secure cryptographic hashes of their contents. One module is the operating system kernel, which is based on a proprietary real-time operating system (RTOS) kernel called "ThreadX". The developer, Express Logic, sells licenses and source code for ThreadX. Customers such as Intel are forbidden from disclosing or sublicensing the ThreadX source code. Another module is the Dynamic Application Loader (DAL), which consists of a Java virtual machine and set of preinstalled Java classes for cryptography, secure storage, etc. The DAL module can load and execute additional ME modules from the PC's HDD or SSD. The ME firmware also includes a number of native application modules within its flash memory space, including Intel Active Management Technology (AMT), an implementation of a Trusted Platform Module (TPM), Intel Boot Guard, and audio and video DRM systems."
Java has precedent for running on low-performing devices; see J2ME[0], which ran on a whole bunch of old mobile phones, and Java Card[1], which ran on smart cards.
I doubt it. The PowerPC to Intel switch was really painful because the desktop platform has the perpetual ball and chain of backward compatibility. I doubt Apple would try to beat Intel at their own high-performance game anyway.
I don't think that's necessarily the case - I think that if Apple switched architectures now, there would be a lot fewer issues than there were with the Intel switch.
Over the past few years, Apple's done a lot of work in making the same system APIs available across multiple processor architectures; at a base level, iOS and OS X have very similar cores. You can see this with the ease of the transition from ARMv7 to ARMv8, which in most cases just required a new compilation.
As general-purpose applications have been migrated to higher-level APIs, the difficulty of porting those applications to a new processor architecture decreases; if an application is Cocoa-based and compiled for x64, then if those Cocoa APIs are available on an ARMv8 platform, they can be compiled natively for that platform.
Apple has started requiring Mac App Store apps to be submitted in intermediate representation form, allowing Apple to recompile. If that's not a glaring hint at working towards ARM, what would be?
Having previously gone through the 68k-PowerPC switch, the PowerPC-x86 switch seemed to me like a non-event. The desktop platform most certainly does not have a perpetual backward compatibility obligation; Apple has always been far more willing than Microsoft to break old stuff after a few years if it happens to conflict with their new stuff.
I would expect that they have already been building OS X for ARM internally for several years, and that they'd prefer to avoid switching again but would certainly do it if they ever felt like the use of Intel's architecture was creating a problem for their business.
> Apple has always been far more willing than Microsoft to break old stuff after a few years if it happens to conflict with their new stuff.
Sidetracking this conversation a little, but I'm more and more wondering whether MS actually has a retrocompatibility track record that is that good, or if it is just a nice story. Granted, they communicate a lot on how hard they work on that subject, and they even have a guy who blogs about it and about how great he is because he injects patches into third-party programs to let them work on new OS versions, but the end result is just... random. -- Well, maybe that guy should work on compat between MS products before those of others...
First, no medium/big company would think of upgrading the OS without months or even years of studies and trials -- they likely could do, and already are doing, the same with OS X. Then, MS actually actively deprecates a lot of stuff all the time (even whole sub-architectures, like Win16 not being available on Win64 installs), and they also have so many tech and product birth failures it is not even funny anymore. And finally, even when they don't mean to, their very own products in more or less the same line are often broken by next versions that are supposed to install side-by-side, or even just by patches (example: Windows SDK 7.1, which is upset if you try to install it with anything other than the .NET 4 RTM preinstalled, and then is very upset again during builds if you upgrade your .NET to 4.6 -- or, on a completely different subject, compat of recent Word with old .doc files, which is not stellar).
And finally, on the technical design side, some choices are just plain complete crap and stupid. Why would you, I don't know, leverage UMDF (which is especially well suited for USB drivers, for example) to allow 32-bit drivers to run on 64-bit Windows, when you can just not give a fuck, force people to use their old consumer or dedicated pro hardware with their old computer, and let them throw everything in the trash when it eventually fails? I mean, during the 16 -> 32 bit transition they actually made far more insane things work (at least kind of work), while here everything would be neatly isolated, yet they manage to... not even attempt to do it.
I'll not even begin to talk about the .dll story, which just gets more complicated each time they try to fix it, because you still have to support the old methods, sometimes by some kind of virtualisation. And then, like I said, they just decide to change their mind and use the old replacement method again (e.g. the .NET 4 => 4.5/4.6 mess explained before), which breaks again because they are still not THAT good at backward compat. (In a cringeful way: has anybody heard about symbol versioning?)
So maybe Apple is doing worse (I don't know much about them), but a Linux system you can actually administer carefully, if you are skilled enough, to make any random old application crap REALLY work on a modern install (you might need to duplicate a complete userspace to do that, but not all the time thanks to symbol versioning, and it is not necessarily huge when you do, and at least you can).
At one point AppKit and FoundationKit supported 4 architectures (68k, x86, HP, and SPARC); most Cocoa-based applications were just a recompile away.
The main issue with the PowerPC to x86 transition was Carbon, which was never designed to be a cross-platform toolkit in the same way that Cocoa was. Given that Carbon 64 never got off the starting blocks and Carbon 32 was deprecated way back in 10.8, switching architectures will be less painful this time around.
The MacBook was never supposed to be a "power" machine. The whole purpose was a very portable, long-battery-life laptop. And it does that very, very well.
I would not call that giving up on performance...they just designed something for a purpose and it fits that purpose well.
Dude it has a 5 watt processor and a mobo the size of a raspi. I don't think their featherweight offering is appropriate to enter in the performance fight.
ARM architectures also suffer from this. You'll be hard pressed to find a board that doesn't require a proprietary board support package somewhere in the stack.
Ironically, it is usually the bootloader that is/requires a blob or it is the DTB.
I remember being in middle school and reading Stallman's articles on the dangers of a TPM-oriented push by manufacturers. As cliche as it is, Stallman was right.
The push for platform security is also a push for platform ownership. Tinkering/hacking/your ability as a hardware owner is at odds with corporate security needs, and that is a shame.
Most Allwinner chips can run an entirely open-source stack and there's quite a few hobbyist-oriented boards out there based on them. (Technically the ROM bootloader is closed source, but all it does is load your choice of bootloader into RAM and execute it. After that you have full control, including the ability to run code in TrustZone mode and on the supervisor CPU core if one exists.)
IIRC Allwinner does not have public datasheets, and honestly I prefer to have a half-documented, half-accessible x86 platform rather than a theoretically fully-accessible ARM one that in practice is 1/10 publicly usable (the CPU alone is not enough to have fun with a platform... especially in the ARM SoC world -- and existing drivers have never replaced good documentation of the hardware they drive).
While TrustZone can implement DRM, it is not a closed management engine. If you control the board, you can load your own OS there (but, conversely, if you cannot load your own OS there, you do not control the board).
The boot ROMs on these things are proprietary, though. The vendors I've talked with have been extremely coy about what's in them (got an overview from Marvell once, spent a day looking at code on a projector screen and getting a walkthrough. They could have hidden much).
The boot ROM typically gets out of your way pretty quickly, though. At worst it means you have to deal with some firmware-signing nonsense before chaining into a Linux kernel (or U-Boot); it isn't active in a running system.
I should add that the code we saw may not have been the code that was actually run. It had reset vectors and whatnot, but that's no guarantee there were no hidden ROMs that ran code before the code we reviewed got run. And it's certainly no guarantee there are no hidden hardware-level state machines that unlock . . . things. Things like ignoring X bits in pages, or being able to do some low-bandwidth computation with code embedded at the steganographic level.
I don't really do GPU-intensive stuff on it. It renders webpages & YouTube fine, which is probably the most graphically intensive stuff. I'm currently using it mostly for porting MuseScore to ARM, mainly using Qt Creator for development.
The external HDMI doesn't really work properly... I remember not being able to run an external monitor in full screen, or at the same time as the LCD screen.
I just installed and ran glxgears. It gets 250 FPS at the default window size. But I do get a command-line error, "libGL error: unable to load driver: rockchip_dri.so", so I don't think I'm getting the GPU.
so jealous... I've tried to replicate that setup on my C201 but kept hitting issues. I might give it another shot some day soon. Does Libreboot make it easier to boot from SD/USB?
Lucky. I have to hunt around for DTBs based on modified Rockchip and Freescale SoCs to update my devices.
I am unwilling to distribute said binaries to make it easier for people with the same hardware as I to update their 3 year old software. That, too, is a shame.
You don't even need a secondary tool. The standard devicetree compiler, dtc, can convert a dtb back into a dts no problem. The dtb format is a pretty basic serialisation of the dts file, so fairly little information is lost (mainly pre-processor stuff like includes and some macros for constants) and you can easily round-trip.
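For example, the round trip looks something like this (the file names are just placeholders):

    dtc -I dtb -O dts -o recovered.dts board.dtb    # decompile the blob back to source
    dtc -I dts -O dtb -o rebuilt.dtb recovered.dts  # recompile the recovered source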
Intel ME checks to see if a certain portion of the BIOS flash memory is writable before it allows the main OS to boot.
What x86 Chromebooks do is they allow that region to be writeable but then zero that region on every boot. If your ME was backdoored, it was shipped that way from the factory.
It's so disappointing that Intel undermined the entire trusted computing stack for some unproven ideas around ME revenue-generating opportunities.
There was an excellent talk related to this that Joanna Rutkowska gave at the 32c3 conference (she talked quite a bit about Intel's ME too, I was completely unaware of its existence up to that point):
https://media.ccc.de/v/32c3-7352-towards_reasonably_trustwor...
Given that chip development has been hitting diminishing returns for a few years it might be time for Open Source to eat the world of processors as well.
It feels like the sort of opportune market that server operating systems, databases and web servers occupied: less of a visual aesthetic and more of a better-design-wins market.
It's not going to be easy - I'd guess that it would take at least 10 years for a project to get any sort of traction outside of a very small niche group.
Yep. But I stick to my guess that it'll take a decade for real change to happen. Obviously the goalposts are a bit fuzzy, but I feel like you have to give the hardware a chance to make it through three generations (assuming three-year lifespan on devices) before someone launches something that is within the ballpark of devices shipping at the same time.
I was at FOSDEM this year (2016) and there was a talk from the leader of LibreBoot.
Honestly, his talk on the state of the project was very bitter. He literally said that there is absolutely no hope that LibreBoot will ever be able to cope with the ME, and that the fight has been over since 2008.
As much as I would absolutely love to be able to run a free firmware, unless there is a major change/outsider in the hardware manufacturer world, it seems very unlikely that it will be possible on current x86 architectures.
I've been using a Raspberry Pi 3 for the past week, and have been pleasantly surprised by the performance. It's no speed demon, to be sure, but it's good enough for all my basic tasks. I wish there were a general "open computing" branch of the Raspberry Pi Foundation that would produce a $50-$100 "pro" version with more RAM and faster bus+peripherals.
The Raspberry Pi still relies on a closed-source blob running on a CPU core whose instruction set isn't publicly documented to even boot, but I suppose at least it's possible to reverse-engineer that unlike Intel ME.
Broadcom released a considerable amount regarding the Videocore IV a couple years ago. Nobody's finished writing an RTOS for it quite yet, but the ISA is now documented.
There are more "pro"-oriented boards than the Pi series. ODROID, for example. It indeed has faster I/O and a better CPU. You lose out on the scale benefits of the Pi, though.
Few if any of them have "enterprise" grade quality IMO. The ones that strive for this (HP Moonshot, e.g.) are significantly more expensive than $50-100.
Note that the topic at hand is binary blobs and trust, and the Pi and other ARM SoCs fall short there.
Hmm. OK, I have two questions - maybe somebody here has answers:
1) "...these proprietary blobs could easily contain code to exfiltrate encryption keys, remotely activate microphones and cameras..."
This seems basically impossible to actually achieve in reality, though, because there will still be associated network traffic that can be sniffed, and will have been by now, right? I mean, is it plausible that somehow we all just failed to notice that our computers are sending video traffic to the NSA?
I can imagine this happening on phones, where the baseband chip is much harder to actually sniff. But through my LAN? I doubt that.
2) Let's imagine that this post is entirely true. Why do Intel and AMD do this? If it's not part of a grand conspiracy, then why? Clearly there are far easier and cheaper ways to achieve what they view as security that don't require such a crippling approach. What's the upside to them?
I agree that if these things were by default constantly exfiltrating, say, webcam data, someone would have noticed. But their use is likely much more insidious.
Any keyboard event generates some kind of interrupt on x86, right? Suppose this "ME" happens to record the last 64k keystrokes into a rolling buffer. Furthermore, suppose there is a very, very particular sequence of instructions that can be sent to retrieve this rolling buffer? Boom. No more encryption (at least none that requires typing a key in from the keyboard).
Oh, you use binary keys? Cool, Intel has special x86 instructions for doing AES. I'm sure they wouldn't do anything like copy the, oh... KEYS, into the hypothetical rolling buffer, would they? No, that would be "dishonest", and we all know everyone who makes computer hardware and software would never think of doing something so egregiously deceitful, don't we?
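If it helps make the hypothetical concrete, the "rolling buffer" above is just a fixed-size ring buffer. A toy sketch of the idea (mine, nothing to do with the actual ME):

    from collections import deque

    # Toy model of the hypothetical keystroke log: keep only the most recent
    # 64K keystrokes, silently discarding the oldest as new ones arrive.
    KEYLOG_CAPACITY = 64 * 1024
    keylog = deque(maxlen=KEYLOG_CAPACITY)

    def on_keyboard_interrupt(scancode):
        """Called on every key event; the oldest entries fall off the front."""
        keylog.append(scancode)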
The fact that the NSA has implants (per the various Snowden files) that do exactly this and exfiltrate over a network should tell us that this is not so far-fetched. The bulk of the data volume is only used when the capability is being exploited. It would not be so hard to send out a marker of exploitability over innocuous traffic (say, tweaking an HTTP header) meant to be picked up by sniffing / MotS.
These are called covert channels. It could be done by flipping some unused/ignored bits in IPv4/TCP headers in a stream of traffic that goes past a collection point.
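To make that concrete, here's a toy sketch (purely illustrative, my own) of smuggling one bit per packet in the IPv4 reserved flag; the addresses are documentation placeholders and the checksum is left out:

    import struct

    def ipv4_header_with_covert_bit(bit):
        """Build a bare 20-byte IPv4 header whose reserved flag (top bit of the
        flags/fragment-offset field) carries one covert bit."""
        version_ihl = (4 << 4) | 5           # IPv4, 5 x 32-bit words, no options
        flags_frag = (bit & 1) << 15         # bit 15 = reserved flag
        return struct.pack(
            "!BBHHHBBH4s4s",
            version_ihl, 0, 20,              # TOS, total length (header only)
            0x1234,                          # identification (arbitrary)
            flags_frag,
            64, 6, 0,                        # TTL, protocol = TCP, checksum = 0
            bytes([192, 0, 2, 1]),           # source (TEST-NET-1 placeholder)
            bytes([192, 0, 2, 2]))           # destination placeholder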
How would Wireshark reveal this kind of attack? If the management chip has direct hardware access, it can hide data in innocuous-looking packets that the host machine never sees. You would have to monitor both the packets that the OS thinks it's sending, and the packets actually received by the switch, and constantly compare them for mismatches. Given the performance cost, I find it hard to believe that anyone except the most paranoid organizations would actually do this.
And of course, if you block the obvious exfiltration methods, all you do is force the attacker to do something more creative. Like modulating inter-packet timings, or even sending data to a nearby radio receiver by using the system bus as an antenna.
> How would Wireshark reveal this kind of attack? If the management chip has direct hardware access, it can hide data in innocuous-looking packets that the host machine never sees.
Lots of organizations use various forms of intrusion detection. A network intrusion detection system (NIDS) would be an off-device system which monitors network traffic for suspicious or obviously malicious packets.
It's certainly no guarantee, but somewhere along the line someone probably would have noticed something if these systems were exfiltrating data via the network using something like IPv4 headers. Specifically, a quick look suggests that Snort (an open-source NIDS) may actually be distributed with rules to alert on IPv4 reserved bits being set.
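I'm not claiming this exact rule ships in the default ruleset, but a rule catching that particular trick would look something like this, using Snort's fragbits keyword:

    alert ip any any -> any any (msg:"IPv4 reserved bit set"; fragbits:R; sid:1000001; rev:1;)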
You keep saying that "someone should have noticed something", but as the old adage goes, absence of evidence is not evidence of absence.
What you seem to keep missing is that we know from the Snowden leaks that the capability already exists, and NSA has successfully used implants to do data exfil in the past.
There are ways of doing it invisibly: change timestamps in very subtle ways, embed data in lossy media formats, etc.
If the code says "phone home if anywhere on the screen you see one of the following email addresses", then it won't show up in a normal security audit, unless you email one of those people during the audit. All the NSA has to do is make the phoning home rare enough that it's probabilistically unlikely to be observed.
He's not saying that all machines are actively doing any of that, or even that Intel/AMD/anybody have already developed code to do so. He's just pointing out that this chip exists, it has the capability to do what he's described, and there's nothing that we can currently do to stop it if we're using an affected machine, as we don't have any control over the code being run. As to why the companies would do this... it wouldn't necessarily be them, if they lost control of the signing keys. And you can't ignore the fact that the FBI just tried to get Apple to do something pretty damn similar.
> And you can't ignore the fact that the FBI just tried to get Apple to do something pretty damn similar.
I was very against the FBI's reasoning in the Apple case (and in fact, I'm against their existence generally).
But I don't think that bypassing an unlock retry limit is "pretty damn similar" morally, legally, or technologically to a solution that can arbitrarily execute code remotely, on demand, and with root privileges on nearly any PC and game console in the world.
Have to quite strongly disagree with you there. The FBI wanted Apple to create and sign software that they could forcibly push onto the phone in order to get it to do what they wanted it to do. In the recent case it was about bruteforcing a passcode, but the concept is identical regardless of the payload. It's exactly the same scenario Intel or AMD could be faced with. The entire Apple situation hinged on the fact that it was possible for Apple to comply, without that there would be no situation.
To your 1): there would indeed be network traffic, but how many people have a machine they can truly trust capturing and analysing enough of the traffic going in and out of their LAN? Unfortunately there are few, if any, machines we can truly trust.
Re. 1 - it depends on what the data and the receiver look like and what your end goal is. If you want to "phone home" over the normal network, then sure, it's going to be obvious. But if you want to preserve all the generated private keys and just send them over RF on bootup, using the chip itself as an antenna? That's in the easy territory.
And it's getting worse: SGX [1] allows third-party encrypted binary blobs to run on your CPU without being inspectable.
It's sold as a way to protect your secrets from malware. But it will more likely be used to run DRM code on the user's computer while treating the user as a hostile entity.
SGX has the potential to be amazing though. With it you can build "trusted" applications. For example, a Bitcoin mixer that's provably secure. (Well as secure as trusting Intel and users not to be able to break the chip.)
It's really a question of who you trust. There are lots of scenarios where you might trust the developer of a particular piece of software more than you trust the entire software stack running on your PC. This is especially true for a nontechnical / casual / grandma user, who has no hope of ever auditing or even having more than a vague idea of what's running on their computer at a given time, and probably is running (or at least needs to be assumed to be running) six different kinds of malware all the time. To someone like that, the PC itself is a hostile environment which they don't want to share certain information (e.g. their banking details, crypto keys, etc.) with. SGX allows you to ensure that.
If you take as a premise that the PC is not safe and under your control, but is instead hostile and compromised, basically an outpost of the Internet in your house, then SGX and similar start to make sense. For many people, their computer is always going to be hostile; it was never "theirs" to begin with, so SGX doesn't really cost them anything, and the ability to let a single application basically force its way down to the hardware and elbow everything else in the stack out of the way is an improvement over having to trust the OS, browser, etc.
In a way it represents an abject failure on the part of the dominant OS developer (Microsoft) to produce a consumer computing platform that the average user can trust, as well as the failure of most other alternatives (e.g. DoD-style smartcards) to take off in the consumer market.
Last I spoke to Intel representatives, SGX enclaves couldn't be taken out of debug mode without having a contract and signing key from Intel.
In other words, those amazing applications appear to require Intel to approve the software author. Their keying mechanism allows revocation too.
I hope this changes or that the information I received was in error, but if not then SGX is mostly only useful for DRM. A shame because there really are a lot of productive applications.
What's their justification? I've heard that too, but it sounds too stupid to be true. "Here's an amazing feature built in to all our CPUs. Except you can't use it."
SGX is a major point and one I thought the linked post would deal with from its title.
From a user-owner point of view, I agree with your assessment of SGX. I imagine that, once it becomes used for things like media DRM and games copy protection, users will start turning it off in their BIOS, or managing the signing key whitelist manually. And I wouldn't blame them.
But from a user-not-owner point of view (ie, cloud computing), SGX offers the user more security, and a degree of protection against some cloud computing risks.
If you don't trust your cloud provider, I'm not sure whether SGX is the solution. Consider all those side-channel attacks.
It might provide an additional defense barrier, but you'd still want to run on trusted hardware. And if you have trusted hardware then it should be ok to use user-provided signing keys, just as you can do with secure boot configurations (at least the acceptable kind).
So as long as you're the exclusive user of a machine it should be sufficient to also hand your public key to the cloud provider so they can put it in the BIOS.
The only reason for SGX to not support that is DRM&Co.
The way to blow this wide open is to catch Intel's "management engine" doing something really bad and publicize it. It could do for Intel what John German did for Volkswagen AG.[1]
One approach would be to build some honeypots likely to attract attention. Give them a job that's not too traffic-intensive but is suspicious, such as encrypted IRC. Record all traffic in and out of the box using external hardware. Feed them fake encrypted traffic from suspicious sources (Tor, strange sites in suspicious countries, etc.). Wait for strange packets to show up that are not meaningful to the host software but cause something to happen on the target.
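Once you have the captures, the interesting part is diffing what the host thinks it sent against what the external tap actually saw. A rough sketch of that comparison (file names and the use of scapy are my assumptions, purely for illustration):

    from scapy.all import rdpcap

    def raw_frames(path):
        """Return the set of raw frame bytes found in a capture file."""
        return {bytes(pkt) for pkt in rdpcap(path)}

    host_view = raw_frames("host_capture.pcap")   # what the OS believes it sent
    wire_view = raw_frames("tap_capture.pcap")    # what the external tap recorded

    # Frames on the wire that the host never saw, or that were altered in flight.
    for frame in sorted(wire_view - host_view):
        print(len(frame), frame[:40].hex())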
There is also an additional possibility: recycle old computers. An Intel laptop from 2008 performs OK with a modern GNU/Linux and an efficient desktop (for example XFCE4). This also helps avoid CO2 emissions and saves rare earths and energy. And it is a statement against an unsustainable throwaway society.
The problem with such old devices is that, while some of them can be impressively reliable, at an age of ~8 years one has to worry about the device starting to fail. If you want to keep the device going once something breaks, getting replacement parts can become interesting. Not impossible per se, but depending on how popular a particular device was in its day, finding spare parts can be very time-consuming.
Still, I agree. I have a 2008 netbook at home I still use regularly, and I hardly throw away a computer that still basically works.
The fight is increasingly political, so advocate and donate where you can.
We lose when we give up, I suppose. I know what the Libreboot guy said before on his blog, alluded to here, but this is why, as crusty as some might find him, we generally support Stallman's politics.
I wish the FSF was a bit more nuanced. For example, if DRM causes ordinary computers to come with proprietary code that is impossible to remove, then that is bad indeed. Then you no longer control your own computer. The same computer that you might use for political activities, for example.
On the other hand, if entertainment computers, such as blu-ray players or gaming consoles are locked-down and full of DRM, then I don't see a big problem. Sure, the government could potentially ban some movies in the future, and require the manufacturers to update the firmware on your machine so that it will no longer play those movies. But movies and games are expensive to produce, and without DRM most of them probably wouldn't get produced in the first place. In any case, movies and games aren't really that important, compared to say books and articles.
The FSF seems to be against DRM EVERYWHERE! They don't seem to realize that DRM might actually be a good thing for some things. Are there any organizations out there that I could donate to, that fight/work for open hardware for general purpose computers, without trying to prevent locked-down entertainment computers?
Disclosure: I became a member days ago, and did not want to mention it. Reading this article made me super proud to have chosen to fork the cash over before reading this crap as it gets worse all the time.
At the end of the day, I want a hardliner in this space pushing that line because I know, practically, he cannot win. But if consensus is drawn between him and the other extremes I find distasteful, I want him to pull the resolutions and positions as far left as possible, even if that is slightly left of center.
I worry, as current events show, that anything less means the counter-forces to the free software movement will wear you down with abject greed, slowly going right of center and taking as much time as it takes to pull the balance back toward their proprietary interests after the initial battle has gone to free software advocates. And that is how I see it. Very few of my friends understand the value of highly technical manuals, and that is what open source is about. My brother recently came around to my side, with automotive hacking and experimentation countermeasures on the rise, as reported today on HN. But when I tell non-technical people that these manufacturers hide secrets in their faulty designs and make you pay for their ineptitude, even if you want to fix it yourself on your individual unit without harm or influence on them, they don't get the argument and ask why I think I know better than the company. They only get the argument when they are locked out of a system they need for their very personal context.
Oh well. This is a very personal choice. I love GPL, I love MIT, and I smile when I think how all these hippies made a world for me in the 60s and 70s I could not live without today.
I totally agree with you that many corporations seem rather greedy. I wish those corporations were more nuanced, too. For example, I don't mind if Intel and AMD make locked down CPU and GPUs for entertainment computers, but it would have been nice if they also made some open CPUs and GPUs.
I wonder if this greed will be profitable for them in the long run? Obviously, most people don't care whether their hardware is open or not. However, a tiny minority of (very) computer literate users do care (a lot). Will it have any impact if this tiny minority abandons the x86 platform?
To the best of my knowledge, VIA CPUs have no secure boot, management engine, or any other proprietary secondary hardware.
Coreboot supports many VIA CPUs and motherboards[1], though it's unclear if it uses any binary blobs. The FSF seems alright with VIA Technologies, and apparently they're cooperative with open-source BIOS work[2].
It's great that these guys pushing POWER8 at least have a workable situation, but at least for me, throwing $3,700 at a motherboard (Alone!) just isn't feasible. I would love to be free of proprietary firmware, but it would seem that's only for people better off than myself.
Consider the news of the Model 3 this week. There is no reason POWER8 cannot follow a similar trajectory, start with the Roadster equivalent $4k luxury workstation, move down to the performance desktop around $1500, and then release the mass market mobile / integrated board at $300 that can still go head to head with x86.
It can, but IBM doesn't care about those markets. OpenPOWER members could make their own stuff and aim it at consumers, it just takes a long time given OpenPOWER itself is only a couple years old.
You can buy a Libreboot-compatible Thinkpad X60 for $50. It is absolutely not the case that fully free firmware is only available by paying lots of money.
Yes, there is even a company shipping T400 and X200 ThinkPads pre-installed with Libreboot, and I see they now have an option for a server: https://minifree.org/
You're absolutely right; most advances do eventually trickle down. But most advances in FOSS have benefited users who are less well off, making computing, both libre and in general, more available to them. It's frustrating that in order to get a truly free computer one has to pay about the equivalent of 3 months' wages, which is a no-go for most people I know.
While I agree with you, the average buyer of computer components is used to spending an order of magnitude less on a motherboard. A person can go to their usual source of computer parts and pick up a motherboard for a couple hundred USD at the high end. A few thousand USD for motherboard that does not offer an order of magnitude improvement in raw speed or expandability is going to be a very hard sell.
The question is rather: Is there a large overlap between the people who can afford to spend thousands of dollars for a free (as freedom) POWER8 workstation and the open source idealists who would love to buy such a free device?
I'd personally like to see the FOSS community try to embrace the POWER architecture: Ubuntu/Canonical are major members of the OpenPOWER foundation [1], so at least an entity sympathetic with our philosophy has an influence on the architecture.
Red Hat has supported POWER for a long time. Debian does. Even Mint had a PPC release. The big BSDs do. Amigas are still on PPC haha. I think it's not a question of FOSS support by OS developers. It's the users and apps that don't commit to x86 alternatives.
One issue with PPC and POWER is that they're generally big-endian and everything assumes little-endian these days thanks to x86. Even JavaScript is little-endian now.
The POWER line, and all PowerPCs except the G5, are actually bi-endian, and quite a few operating systems run in little-endian mode (including all vaguely recent Linux releases I know). The G5's lack of a bi-endian mode is why VirtualPC didn't ship on it for a long time (if at all; I don't even remember).
And most PPC distributions are building for little-endian mode as well. OpenSUSE Leap is only available for ppc64le, Fedora only builds for ppc64le now as well.
That's one of those sad realities of Worse is Better in action. Definitely a disadvantage. Far as why big-endian was The Right Thing, drfuchs had this to say:
"Because big-endian matches how most humans have done it for most of history ("five hundred twenty one" is written "521" or "DXXI", not "125" or "IXXD"). Because the left-most bit in a byte is the high-order bit, so the left-most byte in a word should be the high-order byte. Because ordering two 8-character ascii strings can be done with a single 8-byte integer compare instruction (with the obvious generalizations). Because looking for 0x12345678 in a hex dump (visually or with an automatic tool) isn't a maddening task. Because manipulating 1-bit-per-pixel image data and frame buffers (shifting left and right, particularly) doesn't lead to despair. Because that's how any right-thinking person's brain works."
A compromise for IBM might be to release old iterations of the ISA under liberal licensing terms like RISC-V or SPARC. Throw in open release of blueprints 5 years after first printout and I'd be fine with that as our collective target for free and open computing going forward.
This sounds similar to basebands on cellular devices: Subsystems controlled by the vendor, not accessible from the 'user' system, remotely updatable and with access to everything.
Except modern baseband processors usually don't have direct access to main memory or peripherals - they are usually linked to the rest of the phone via a serial bus.
ME is very, very different - it transparently has access to everything.
> Except modern baseband processors usually don't have direct access to main memory or peripherals - they are usually linked to the rest of the phone via a serial bus.
Do you know where I can read more about that? A good, technical, authoritative resource? In my little bit of research, details are sparse and authoritative technical details even more sparse.
Paranoid Android used to have a nice breakdown of which phones had isolated memory for the baseband and which used shared memory. I cannot find it now, and their site seems to have taken a very wrong turn in the design department.
fyi: As far as I know, Paranoid Android development stopped last summer, after OnePlus hired away key developers in February 2015. Here's an article with much more detail, including its prospects going forward:
Hmm, actually it isn't as good as I thought. I guess I was remembering IRC discussions. IIRC, the Replicant folks found Qualcomm based devices pretty commonly have this issue.
I wonder if Apple might do something about this. They don't care so much for the FOSS side of things, obviously, but I wonder if they might demand chips from Intel without the management engine, because it's a potential attack vector they can't control.
I suspect at some point they will simply drop Intel for their own (ARM) platform. I think moving will be easy once all app store submissions are in bitcode.
I strongly believe you are correct. They have been mentioning that their ARM processors are desktop worthy. I also believe Apple are displeased with Intel's current inability to consistently get their new chips to market. All of this has to make one think Apple will take matters into their own hands soon. Likely within the next 2 years.
Important bit to note here: the two year timeline is probably only feasible for low end devices, like the MacBook and MBA.
Towards the higher end, ARM can't hope to field anything in that timeline to compete with even today's i5s or i7s (or the corresponding Xeons). Some people do use this kind of CPU power.
I don't have a guess at what Apple is actually going to do, but the Retina rollout is a plausible model. Even 5+ years after the first Retina product, it's still not available across the lineup.
I don't think the switch would present a significant problem in marketing or for developers, so it would purely be a question of having the chips that fit the products. The Macbook, as you point out, is basically already there.
It sounds almost unbelievable, but it could happen. I mean, Apple, unlike every other computer company, has successfully transitioned processor architecture twice before (68k to PowerPC, PowerPC to Intel). They could pull the same tricks they pulled for PPC to have a smooth transition: x86 emulation on ARM, “Universal” (fat) binaries, and making it easy for developers to port their apps.
Bitcode is still architecture dependent; bitcode generated for x86 won't run on ARM. The only reason they are requiring bitcode is so CPU-specific optimizations can be done later, not to allow for portability between architectures.
Even if the ME was opened, the chips themselves are complex enough that nearly anything could be hidden. State machines that enable backdoors from instruction sequences can be pretty small (triggering these from a preferred vector, such as a web browser, seems hard-ish though).
On the CPU: extract some bits from sequentially allocated memory, a la steganography. When those bits match a certain predicate, run a decryption and feed them into a subroutine.
Now release a bunch of png, jpg, and mp4 media on the internet with the low order bits set to match.
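For a sense of how little logic that trigger needs, here's a toy model in Python; a real implementation would be a handful of gates in hardware, and the 64-bit magic constant and the payload handling below are made-up placeholders.

```python
MAGIC = 0xDEADBEEFCAFEF00D  # hypothetical hard-wired 64-bit trigger pattern

def lsb_stream(window: bytes) -> int:
    """Pack the low-order bit of each byte into one integer, first byte first."""
    bits = 0
    for byte in window:
        bits = (bits << 1) | (byte & 1)
    return bits

def scan(buffer: bytes) -> None:
    # Slide a 64-byte window over the buffer; its 64 LSBs form one candidate value.
    for offset in range(len(buffer) - 63):
        if lsb_stream(buffer[offset:offset + 64]) == MAGIC:
            payload = buffer[offset + 64:offset + 128]  # bytes following the trigger
            print(f"trigger at offset {offset}; would feed {len(payload)} bytes "
                  "to the hidden subroutine")

# Build innocuous-looking bytes whose LSBs spell out MAGIC, then scan them.
trigger = bytes(0xA0 | ((MAGIC >> (63 - i)) & 1) for i in range(64))
scan(b"ordinary media data " + trigger + b"encrypted payload would sit here")
```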
> triggering these from a preferred vector, such as a web browser, seems hard-ish though
Easier than it used to be, though, especially given modern Javascript JIT compilers. (And that's to say nothing of more direct methods like Chrome Native Client!)
It seems superficial to concentrate on a few kilobytes of binary blobs as a security issue when millions of logic gates are also hidden from user scrutiny by design in most computers. That the number of people you have to trust now includes firmware developers in addition to hardware designers is a small movement in the scheme of things, though it may be a movement in an undesirable direction.
Dependence on a few companies to design and make processors will not work in the long term. Open-source processor designs that can be manufactured by anyone are the way out of this problem. Even if this never happens, the attempt to get there is enough to make the large companies involved with CPUs pay attention and serve their users.
Theoretically one way to correct it is to have an external device that blocks network activity going in or out.
Yes, I realize you could get around this. The superblob could be a) looking for patterns in JPGs for input, and b) steganographically encoding output into... anything the user is doing.
I think that, given a large enough group of people willing to make a mass-purchase of CPUs, Intel would be likely to listen to requests for a batch with an open-sourced Management Engine component, or some shim akin to the one RHEL uses to boot UEFI in Secure-Boot mode. (mentioned it on /r/ReverseEngineering a few months back.)
I don't know who to reach out to at Intel on that suggestion though.
The possibility I see is their semi-custom business. A cloud provider or someone else with the money can have them make one that strips out all the spyware or DRM stuff. Leaves everything else. Optionally, strips out some other baggage from backward compatibility that FOSS OS's don't even need. Preferably, though, smallest possible changes to the chip like straight up removing the wires connecting ME.
While I would love a contemporary performance computer that can be trusted, no such device is even remotely possible in the manufacturing and fabrication ecosystems of today. Consider for just a moment ALL the chips inside the box. All the microcode, all the ROM, all the places something could be intentionally hidden. The idea that you could buy some parts on the internet at retail price that could satisfy the truly paranoid (ie defense & espionage communities) is ridiculous.
On the other hand, it still is probably possible to prevent a computer's unrestricted access to the internet. For now at least.
When I saw RISC-V mentioned as an alternative, I had to check the date twice to make sure it wasn't an April Fools'. I understand the concerns and all, but I wish the alternatives were a little better picked out.
Most people already mentioned SPARC and ARM as alternatives, so I won't delve into those arguments other than point out that there will _always_ be commercial interests at stake here - hardware, unlike software, requires considerable material resources to create* and distribute (and is still harder - and therefore rarer - to create for its own sake), so there won't be a wide variety of viable options out there, and new CPU architectures don't grow on trees.
Better to lobby for open specs on the "offending" bits of hardware, really.
* - yes, software creation can also require material resources (and a whole lot of time, which can be expensive). Let's not belabor that point...
> POWER is the only architecture currently competitive with Intel in terms of raw performance, and boots using a fully FOSS firmware with no DRM antifeatures embedded.
That's pretty cool. This combined with some benchmarks I saw for server workload on POWER8 will hopefully revive some interest in the platform.
Opterons from 2011-2012 are still available and seem to be the best option to me for this purpose. They're reasonably performant (16 cores...), affordable and there are plenty of mainboard options. Software support is excellent of course. I'm just not sure how valid the "pre-2013 AMD is safe" claim is, since vendors have been known to include some remote management technology like Intel's ME in earlier versions before making it a standard feature.
> 1) requires FOSS users to purchase a license from Microsoft to boot FOSS on affected machines that lack an appropriate Secure Boot override.
What "appropriate" Secure Boot overrides are available?
> 2) the end user is unable to modify the signed software without a license from Microsoft, even though they have the source code available to them under the GPL.
Other parts of the posting imply that we have no idea what the software does, but the statement above says we have the source code. What am I misunderstanding?
1b) nuke the platform signing key and replace it with your own (iff the vendor lets you)
2) You're mixing things up. "We have no idea what the software does" refers to the hardware management code, which can run a full OS stack. But that quote refers to the tivoization "feature" of Secure Boot: you can recompile your software, but not run it on the hardware, because you lack the signing keys to make the machine trust your code. But, see 1)
This is a one-sided view. It can be, and also is, used to implement theft protection, thanks to which the police tracked the thief, he got convicted, and I got my expensive laptop back. Yes, the guy reinstalled the OS, but the tracking software survived precisely thanks to these technologies.
Absolute LoJack, with Windows. It installs itself as a Windows driver before/during boot. It sends location once a day, or more often if you flag the device as stolen/missing. It can also remotely "brick" the device (yes, it can be undone by the owner) if the data is of concern.
I deliberately did not set BIOS password so that the laptop remained usable to whomever got their hands on it.
Requires purchase of a certificate from one of the authorities Microsoft recognises (Verisign/Digicert/...) and then Microsoft's signature on the compiled bootloader code. Either way, you have to pay and you have to get Microsoft's permission.
It certainly does not require FOSS users to purchase a license. There is already a shim loader signed by a MS-recognized authority, which ships with a signed copy of MokManager, which lets you register a "machine owner key" of your own choosing. You can then use that key to sign kernels for your own machine, or for anyone else who wants to go through the on-screen enrollment step to trust your key.
No additional money has to change hands between anyone, and no additional permission needs to be granted from Microsoft to anyone. (You have to get the permission of someone with physical access to the machine during boot, but if your goal here was FOSS users controlling their own computing, it's a good thing that that permission is required.)
IFF you want to support the default set of keys installed on computers that ship with Windows. Secure Boot does not prevent you from installing your own keys, in fact most linux distributions do this already and just use a shim loader signed by Microsoft, the rest of the chain is signed by custom keys (the keys are silently and automatically installed for you).
IIRC, Secure Boot spec said there must be multiple trust anchors, i.e. it's not like "user's own or Microsoft", but there can be any combination of trusted CAs (and I bet there's NSAKEY somewhere, huh).
I'm not sure about the implementations and the real-world situation, but as far as I get it, with X.509 (which Secure Boot generally uses) one should be able to put the exact card vendor's certificate (not the MS root CA) into the trust database to trust the extension card. (Sadly, I think there's no way to trust one specific signature.) I guess that's probably very non-trivial in practice.
At worst, one should be able to add their own CA (to sign their own software) and be forced to also add the MS CA to trust third-party software. But - if the UEFI implementation allows user-defined CAs - it should be possible to run your own code without asking Microsoft's permission.
To be fair, I think this is only for tablet & mobile.
On the desktops and laptops I've seen, there was a way for the end user to upload their own trusted certificates and use those instead of Microsoft's. I think that when it's done like this (when, whatever the defaults are, the end user can take control), Secure Boot is a good idea - even though the implementations are not.
I guess there must be some ignorant (or malicious) desktop/laptop vendors that don't provide key management options, but I hope there aren't many.
Okay, so we get a pile of FUD (Secure Boot and Intel ME are DRM features now? 'kay), no acknowledgement of the actual security threats that compel Intel, AMD, Microsoft and the OEMs to adopt these measures, and an appeal to dump x86 for ARM (um), MIPS (uhhhhhhh), POWER8 (wat), and RISC-V (how?). What is the point of this, exactly?
Because if there is a closed-source binary blob in each x86 CPU with a privilege level higher than the kernel, we cannot make secure software (because the trust chain is broken right at the processor level).
If a secure boot chain makes you feel nice and fuzzy inside, then perhaps you might be interested in setting up your own. Without the ability to do so, you are boned if the trusted entity becomes untrustworthy (such as if the manufacturer were to be acquired).
If you alone are the trustworthy entity, things work better.
> If you alone are the trustworthy entity, things work better.
That really, really depends on how trustworthy you are, doesn't it? I would argue that most computer users don't and shouldn't trust themselves to secure against low-level threats, and some of the people who do trust themselves really shouldn't.
Yup. I run Debian instead of Gentoo because, for various reasons, I trust the Debian project to be better at things (like triaging, backporting, compiling, and testing security updates promptly and correctly) than I trust myself. I think this is a common decision.
I later extended this logic and bought a Chromebook—a decision I don't take lightly, as a free-software advocate, but I was not convinced that there was an alternative that effectively let me retain more control over my computing. One of the things the Chromebook does that basically nobody else does (systemd vaguely wants to do this, my previous employer wanted to do this for our customers, etc., but I don't think anyone actually does) is it enforces a secure-boot-style thing for the entire OS, and makes it hard for anyone who doesn't have the signing key to take control of my computing away from me. In an ideal world, someone other than Google would have the signing key. But per the logic above, I definitely don't want it to be me.