Volkswagen’s Diesel Fraud Makes Critic of Secret Code a Prophet (nytimes.com)
474 points by nkurz on Sept 24, 2015 | 227 comments


This would be a good time to point out that a similar auto software story found its way to HN recently: https://news.ycombinator.com/item?id=9643204 Once Toyota's ECU code was reviewed by domain experts, they found extraordinary lapses in basic software quality practices. This and the VW fiasco clearly show that hiding ECU code is the wrong way to go. We can directly measure the negative consequences, and these are only two such incidents discovered recently. Who knows how many more issues are still hiding. We won't really know how at risk we are unless there is some kind of 3rd-party review process, required by law. Clearly, the auto industry will prioritize profits and liability avoidance over actual quality, in much the same way that banks will never voluntarily limit their risk at the sacrifice of profits. Self-regulation is not working.


History shows us that "3rd party review process"es turn into paperwork games, with all of the effort going into making sure some boxes are ticked and no effort going into actually thinking about the code. Especially for large code bases operating in complicated domains where the effort required to really understand both the code and its context is at about the same order of magnitude as writing it in the first place.

A 3rd party review can reveal horrid practices, but it's hard-pressed to make any sort of guarantee, no matter how soft.

We know how to do better than "process". Software verification techniques are approaching the point of being economical. If a piece of safety-critical software is going to put lives at stake on every road in America, it's reasonable to ask the creator of the software for a set of formal specs concise enough that it can actually be reviewed by experts, together with a proof that their software meets that formal spec.
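Concretely, a reviewable spec could be as small as a contract on each safety-relevant routine. Here's a minimal sketch using ACSL-style annotations (the notation checked by tools such as Frama-C); the function, the limit, and the contract are invented for illustration, not taken from any real ECU:

  #include <stdint.h>

  #define TORQUE_LIMIT_NM 400  /* hypothetical safety limit */

  /*@ ensures 0 <= \result && \result <= TORQUE_LIMIT_NM;
    @ ensures (0 <= requested && requested <= TORQUE_LIMIT_NM) ==> \result == requested;
    @ assigns \nothing;
    @*/
  int32_t clamp_torque(int32_t requested)
  {
    if (requested < 0)
      return 0;
    if (requested > TORQUE_LIMIT_NM)
      return TORQUE_LIMIT_NM;
    return requested;
  }

The contract is a few lines an expert can read in isolation; the obligation that the body actually satisfies it is what the prover discharges.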


Current history is showing that if the source code isn't revealed, the formal specs will be rigged. Concise formal specs and proofs are a great addition to the source code, but they are not a substitute. Access to source code is even more critical when lives are at stake.

> We know how to do better than "process".

Are you saying that revealing source will lead to more bureaucracy that formal specs will somehow avoid? I don't see how that follows.


Open sourcing the code is an orthogonal issue to whether we should consider the use of formal methods a best practice.

> Are you saying that revealing source will lead to more bureaucracy that formal specs will somehow avoid?

I'm arguing that process is insufficient (which, crucially, doesn't imply formal methods are by themselves sufficient, and also doesn't imply process or access to source code isn't necessary).

Merely revealing the source code isn't enough if you need to know how the car behaves and have to recover the meaning of a bunch of complicated equations in order to make any sense of the code. Really reading and trusting the code would require more effort than re-writing it.

> Current history is showing that if the source code isn't revealed, the formal specs will be rigged

Hm. That's very surprising to me. Can you give an example of a company providing a rigged formal spec?

Also, the possibility of rigged formal specs is why the formal specs should be reviewed by a third-party expert.

And if/when fraud does happen (e.g., by giving a bunk formal spec or cheating on the proof), formal methods still have two advantages over process:

* The debate over whether enough work went into QC becomes trivial, and skimping on QC becomes willful deceit (something that's incredibly hard to demonstrate in the status quo).

* Unlike process, there are technical solutions to the problem of rigging proofs. And, the specs themselves are much easier to review than the code. So the review process is pretty likely to catch cheating.


Agreed. Why add process when financial and criminal liability would suffice? Add sufficient liability and players will improve their internal processes appropriately.


I think the Bookout v. Toyota case is a pretty good example of how culpability alone is an insufficient motivator and how an added process (e.g., an external audit) could have prevented a tragedy.

Michael Barr's review of Toyota's ECU code (http://www.safetyresearch.net/Library/BarrSlides_FINAL_SCRUB...) showed numerous compliance issues with established industry best practices (80,000 violations of MISRA-C) and failure to even follow Toyota's much laxer internal coding standards (32% rule violation). Toyota shipped uncertified versions of their code and the design and behavior of that code prevented defect detection.
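As a made-up illustration of the general class of defect such reports describe (this is not code from the Toyota ECU): a counter shared between an interrupt handler and a task, with nothing but programmer discipline enforcing atomic access.

  #include <stdint.h>

  /* Placeholder critical-section macros; a real port would use something
     like __disable_irq()/__enable_irq() on the target. */
  #define DISABLE_INTERRUPTS()
  #define ENABLE_INTERRUPTS()

  static volatile uint16_t sensor_counts;  /* shared between ISR and task */

  void sensor_isr(void)
  {
    sensor_counts++;                       /* written from interrupt context */
  }

  uint16_t task_read_and_clear(void)
  {
    uint16_t n;
    DISABLE_INTERRUPTS();                  /* without this guard, an interrupt  */
    n = sensor_counts;                     /* arriving between these two lines  */
    sensor_counts = 0;                     /* silently loses counts             */
    ENABLE_INTERRUPTS();
    return n;
  }

Whether a team ships the guarded or the unguarded version is exactly the kind of thing only a code-level review will ever see.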


External audits meant nothing with Enron. Criminal liability is the key.


I agree. The US legal system has historically been way more lenient on financial companies than automobile companies. In the case of Enron, they had a lot of financial incentive for their actions, with only the prospect of a few fines as the expected downside. I don't see that being the case for GM, Toyota, etc.


Because they're slightly different cases in how they begin. The GM and Toyota cases are that they screwed up: they failed to design a safe system. Meanwhile, this VW case and those financial cases are that they intentionally designed the system to screw other people over. It's like accidental homicide vs. murder.


You need more than this. GM just got off scot-free after killing and wounding hundreds with the ignition-switch malfeasance. The company pleaded down to doing some PR and paying a teensy fine. Meanwhile, the murderers among them were not charged.


Depends on how it is managed.


Definitely. Formal methods don't replace process carte blanche, but rather make the tasks of which the review process is composed tractable.


External audits are already required for finances. Safety critical systems are even more important and should be subject to even more scrutiny as a result.


It'll take a lot of resources to develop institutions to do this. It'll take a long time to debug these institutions.

We're still arguing over "goto considered harmful", "the billion dollar mistake" and how toolchains influence code-safety-behavior.

And there exists a class of CASE tools from the '90s that, while being somewhat clunky, addressed some of the transparency issues. But those are hardly even relevant any more.

And to make it worse - the final configuration of the system matters just as much as whether the code bases for the individual nodes have passed an audit. This is especially true of CAN networks.


The thing to bear in mind about banks is that nowadays they are really computerized service companies that happen to specialize in finance. Competent tech isn't just a competitive edge, it's fundamental to being able to operate at all.


I've worked in two of the biggest US banks for years, and I can confidently say that in both, the important departments were far from completely understood by most people working there, let alone by outsiders doing an audit.

Most of the complexity is in the messages passed between systems. In a code audit it can look fine to stop system A from sending tag X, but then when you hit prod you find out that system B used tag X, and now that it's missing it causes a problem in system C. And that would be a simplified example.
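A toy sketch of that failure mode (all names invented): system A's cleanup stops populating a field, and system B, in a separate codebase, silently falls back to a default instead of rejecting the message, so the damage only surfaces downstream in system C.

  struct trade_msg {
    int    has_tag_x;  /* system A's audit removed the code that sets this... */
    double tag_x;      /* ...and nothing in A's own tests notices             */
  };

  /* System B: quietly substitutes a default rather than rejecting the message,
     so the bad value propagates into whatever system C computes from it. */
  double system_b_notional(const struct trade_msg *m)
  {
    return m->has_tag_x ? m->tag_x : 0.0;
  }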

I'd be disturbed by any outsider claiming they understand the dynamics of multiple systems with millions of lines of code which often have feedback loops.


Honest question, have you ever worked for a bank? Many still run on AS/400 mainframes in the backends. Knowing what I know, that comment actually couldn't be further from the truth. Banks are insanely risk averse, and rightfully so. If it works perfectly, they absolutely will not change it, even if the newer shinier tech has real benefits.

Source: I don't work for a bank, but have worked in electronic trading for the past 8 years and work directly with the (in)competent tech teams from other banks to clear and reconcile trading bits.


Using AS/400 systems would make them a computer service company. I actually work at a bank (developing apps) and we use a massive COBOL system for a lot of data. However, we also have a massive Hadoop cluster (much larger than the COBOL systems), and firewalls everywhere.

Banks might not be the most competent at web programming, but they hire a ridiculous number of awesome security folks. I would still put them behind Apple or Google in their given domains, but many banks have pretty robust technical systems for their domain.


Every bank I've ever been with has had a pathetic limit on its online banking passwords. One of them (NAB) was limited to something like 10 characters and only numbers and letters.

Maybe that's changed recently but still, firewalls won't help you when somebody brute forces your users' passwords in 10 minutes.
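Rough keyspace arithmetic for a limit like that, assuming 10 characters drawn from 36 case-insensitive letters and digits (the actual policy may have differed):

  $$36^{10} \approx 3.7 \times 10^{15}, \qquad \log_2\!\left(36^{10}\right) = 10\,\log_2 36 \approx 51.7 \text{ bits}$$

Even if ten minutes is optimistic for rate-limited online guessing, roughly 52 bits is uncomfortably small against offline cracking of a leaked hash database.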


> Many still run on AS/400 mainframes in the backends.

Are you implying that using old technology makes it somehow incompetent? I don't know about you, but I absolutely do not want my financial institutions to be running their risk analysis software in Node just because someone wants to try it out.


Nope. I'm implying that they don't use the latest and greatest or "best tech". I'm agreeing with your opinion and having worked with the tech teams of banks am confirming it as truth via firsthand experience.


Like in any industry, there are good teams and bad teams. Working in finance versus working in, say, social does not automatically make you a bad developer.

Culturally, there are two things that make finance different. One is that it is hideously conservative and risk averse. AS/400s are used because they are well understood, very reliable, and supported by someone other than an attention-deficit teenager in a bedroom.

Second is that mainstream finance doesn't see itself as a technology industry. There are areas like quant investment and high frequency trading that are, but most finance companies still look at technology as a line item on the budget rather than the foundation that their business is based on. This is changing slowly (see point one) but will most likely require an external disruption to push change through any quicker.


Entirely agreed on all points. I work in HFT and am only in this industry due to the fantastic bleeding-edge tech.


So which part of simonh's comment were you saying couldn't be further from the truth? I didn't see anything in his comment about the latest and greatest or "best tech".


I have seen systems that ran on hardware as old as DEC VAXs as late as 2008, but I'm not sure the reason behind not transitioning was risk aversion.

Regardless, there aren't formal controls in place. Otherwise issues like Knight Capital [1] wouldn't have happened.

[1] https://en.wikipedia.org/wiki/Knight_Capital_Group#2012_stoc...


> about banks ... competent tech isn't just a competitive edge, it's fundamental to being able to operate at all.

Automobiles are going along that curve too. Maybe this whole VW scandal is an indication of this change?

edit and the Toyota firmware scandal in 2013 has similar implications: http://www.latimes.com/business/autos/la-fi-hy-toyota-damage...


That may be true of the largest banks, but there are hundreds of others that are running on fairly incompetent tech. Either way, external audits have been a part of them for much longer than the computerization has been.


Absolutely. But finances are pretty standardized, software is vastly more complex. Audits are a good idea, but it's an incredibly hard problem.


That's true, and it isn't an easy problem. But note that financial audits are also a hard problem. Auditing teams don't go through and reconcile every transaction. They conduct spot checks of sample transactions, scrutinize controls, and aggressively follow up when any failure of controls is observed. I think a lot of those concepts could be applied to code audits.


I think a better approach would be requiring that developers (and their managers and testers etc.) working on software that could kill or injure people if it malfunctioned have some sort of a professional license, that would be granted and revoked similarly to how medical and engineering licenses are granted and revoked.


I'm not opposing this idea, but I'm not sure it would have helped in the VW case. There were some people (engineers? Managers?) who were cheating and they knew that what they were doing was wrong. I don't believe a license would have changed that.


Other people have raised the question of how well the prospect of losing a license would act as a deterrent.

One other aspect which might be even stronger would be if the professional organization had a role not unlike a union in protecting its members’ professional decisions. Imagine if you worked at VW and your boss told you to make a change which affected safety, emissions, etc. – how different might your reaction be if you know that if you refused or reported it to the appropriate regulators and there were repercussions the Bitpackers Guild could provide legal representation and expert witnesses for you, stage a strike where no licensed engineer would work for an irresponsible company, or simply ensure a lot of publicity? Suddenly it's not “go lean on Sally until she gives the engineering sign-off. She can't afford to quit until her kid's out of college” but “do we want a team of professional engineers to hold a press conference saying we're cutting corners over our experts' judgement?”

There are certainly potential downsides but … anyone who drives a car, uses medical equipment, etc. might reasonably conclude they're worth it, particularly if the system was structured to focus on transparency and due process rather than the pathology some unions are prone to where members are always defended even when they're in the wrong.


If a developer is asked to do something obviously wrong they might not feel they can refuse, because they can be replaced with someone willing to do it.

If an architect is asked to design a bridge that isn't safe they can refuse, secure in the knowledge they can't be replaced with someone willing to do it, as no licensed architect will knowingly design an unsafe bridge.

Of course, a licensing scheme would probably have a bunch of disadvantages.


Perhaps the threat of having their license pulled, thereby nullifying potential future employment, might have caused them to think twice about wilfully cheating emissions controls?


While the FDA may not be a great regulatory group, if someone at a pharmaceutical company were found to cheat like this, they could potentially be barred from working in the industry again. This works in some cases, at least in theory.


Perhaps the angle is that this would constitute ethical turpitude sure to cause loss of license and ejection from one's specialty.


While I don't agree with the license requirement, the least we could require for safety-critical code is publication of the source code and validation by an industry body made up of subject matter experts.


> some sort of a professional license

Sure, as per the construction industry.

Or perhaps simply the threat of being prosecuted for manslaughter or bodily harm, etc?


Maybe the parent meant the software in finance, because that also requires external audits to some extent.


Even the possibility of an external audit of source code, revision history, requirements, etc. would change working practices dramatically ... particularly if there were legal penalties against developers found responsible for introducing bugs.


There's no way in Hell that I will consent to be held responsible for the output if I do not have full control over the inputs.

If I am an employee of the company, and someone else is telling me what to do for my job, and particularly if they are telling me how to do my job, they must necessarily share responsibility for anything that I do pursuant to obeying those instructions.

And the threat of retribution leads to stupid practices:

  public void CoverYourAss()
  {
    try
    {
      int x = 0;
    }
    catch
    {
      throw;
    }
  }
This is a simplified example of a real-world coding standard. At one of my former workplaces, everything had to be wrapped in a try-catch block, including statements that would only ever generate run-time exceptions, like out-of-memory exceptions. It didn't matter if you re-threw the exception you just caught. You just had to make sure the try-catch was there. In every function. Or you're fired. I am not making this up. If the software ever crashed to desktop for any reason, including a bad memory module in the computer running it, or someone nuking parts of the filesystem while it was running, or even a bullet striking the motherboard, someone was getting blamed for it on the development team, and fired. As it would be a witch hunt anyway, the inquisition squad would obviously look at the code written by those most threatening to them, or least popular, or both, before anyone else, and seize upon any irregularity to lay blame.

You'd better believe I was sending out resumes the day I found out about that.

I can only imagine how bad it would be if the penalty was to be fired plus arrested and/or sued.


But if there were a standard set of industry-specific tests that the program had to comply with, it's not like it would just be on you.


You really have to remove the incentive to cheat from the software group before the tests happen.

A defeat device does not get installed accidentally. It's not like a mutation propagating through evolution of living things. Someone decided to put it there, and someone got paid to do it. There was an additional requirement added, one that had no official test coverage: increase fuel economy (and produce more pollution) whenever no one was paying attention to the emissions.

As far as the developers were concerned, they did everything right. They built the code their employers asked them to build. It passed the official tests. This was a triumph; I'm making a note here: "huge success!"

The developers worked for the automakers, not the testers or the public. They did what VW wanted, which was to game the system to make more money. You're not ever going to do more than start an arms race as long as the developer is taking orders from (and getting hired or fired by) the guy who just wants to sell more cars.
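For what it's worth, public reporting describes the detection as keying off signals that only line up on an emissions dynamometer. A made-up sketch of the general shape, not the actual code; every name and threshold here is invented:

  #include <stdbool.h>

  static bool looks_like_dyno_test(double steering_deg, double speed_kph,
                                   double secs_matching_test_trace)
  {
    /* Wheels turning, steering wheel never moving, speed tracking a
       standardized test profile for a long stretch: rare on a real road. */
    return steering_deg == 0.0
        && speed_kph > 0.0
        && secs_matching_test_trace > 60.0;
  }

  const char *select_calibration(double steering_deg, double speed_kph,
                                 double secs_matching_test_trace)
  {
    return looks_like_dyno_test(steering_deg, speed_kph, secs_matching_test_trace)
        ? "full_exhaust_treatment_map"   /* clean during the official test    */
        : "fuel_economy_map";            /* treatment dialed back on the road */
  }

Nothing in the official test suite ever exercises the second branch, so the code "passes" while doing exactly what it was asked to do.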


The irony is that instead of incentivizing auditing of these systems, hackers and security researchers put themselves at huge risk whenever they look for, find, and report vulnerabilities. The companies that have bounties are doing it right.


I agree completely, but on the other hand I'm not convinced that tighter government regulation of ECU code would be better. Can a bunch of government bureaucrats come up with a set of standards and regulations that would actually be beneficial? Given the track record with similar projects, it looks doubtful.

Really I'd say that part of the problem here is that academia has been letting us down. CS programs are universally of fairly low quality, in my opinion, and proper software engineering programs are very rare. There has been insufficient pure research into software development practices, software design patterns and features, and so on in regards to what is required and what is beneficial when it comes to creating control software and firmware. Industry too has been letting us down with their lack of pure research in general, but that's been obvious for a while.

We're starting to reap what we've been sowing for the last several decades in software engineering. We got out of the first "software crisis" where many software projects didn't even deliver anything worthwhile or functional, but now we are in another perhaps even more severe software crisis. One where shipping software that "works" isn't a problem, but where making sure that it does the "right thing" and is sufficiently secure, robust, etc. for the intended use is becoming a huge issue. And not just a financial one, but one that can (and will, and has) result in injury, death, and destruction. We very much need to wake up to the seriousness of this problem, it's not going to get better without concerted efforts to fix it.


I develop safety critical software for railway applications. We have to follow some ISO norms that contain some sensible rules. For example, code reviews are mandatory, we need to have 100% test coverage, the person who writes the tests must be different from the person who writes the code etc. This leads to reasonably good code.

It also makes some things a lot more difficult. For example the compiler must be certified by a government authority. This means we're stuck with a compiler nobody ever heard of that contains known (and unknown) bugs that can't be fixed because that would mean losing the certification.

I assume the car industry has a similar set of rules and the problem is not a lack of rules, but a lack of enforcement.


> We have to follow some ISO norms that contain some sensible rules. For example, code reviews are mandatory, we need to have 100% test coverage, the person who writes the tests must be different from the person who writes the code etc.

The exact same thing happens in the car industry.

> I assume the car industry has a similar set of rules and the problem is not a lack of rules, but a lack of enforcement.

Bingo! Right now I'm staring at some ECU code (not safety relevant, thankfully) that looks like it's been written by a monkey, but I'm a new addition to the team, have no authority here yet, and we have to ship it like yesterday.

Guess what will happen.

Truth be told, for safety relevant applications, I've seen the code and it's quite good. And the issue in this case is not that the software was badly built, it's that it was built with deceit in mind.


>some ECU code(not safety relevant, thankfully) //

What parts of the running of an automobile engine aren't safety relevant?

Sounds like "oh we made the stock for that shotgun from cheap, brittle plastic as the stock isn't safety relevant; how were we to know that it would crack and embed itself in someones shoulder?".

You're right that the primary issue here is deceit, but the issue with closed source code in such systems is how it made that deceit possible [edit: should probably say "facilitated that deceit", as the deceit would still be possible with open source, just harder and more discoverable]; and that leads to questions of safety, because if companies will screw over the environment in defiance of democratic legislation, then they're unlikely to be mindful of other morally negative consequences.


Infotainment, air conditioning, etc. There are many many more ECUs in a car than just the one in the engine.


When you have an organizational culture that places meeting deadlines without sufficient planning or resourcing above quality and safety ... the result is inevitable.


I work in a different industry. We have to follow some sensible rules: code reviews, 80% minimum coverage. What happens in practice is that the tests verify that the result is not null (and nothing else) and the code reviews pass... God knows how. I have seen methods with a cyclomatic complexity of 65 and methods a few hundred lines long. Oh, and this is in Python - the Java code is worse.

[I was also told by my team leader "no, you can't fix that code, it belongs to X from the main office and he will get angry and not help us anymore".]


> This means we're stuck with a compiler nobody ever heard of that contains known (and unknown) bugs that can't be fixed because that would mean losing the certification.

This is why regulators should embrace formal methods as an alternative to process-heavy regulation. They're actually measuring ground truth, and today are not that much more expensive after accounting for all the costs associated with certification processes.


... or highly intensive 8-years-of-in-service-operation-equivalent testing at the system level ...


System-level testing doesn't always suffice; see the Toyota UA case.

Or, more topically, see the VW case for examples of why testing "in-service-operation-equivalent" requires a certain level of trust that's not ideal in a regulatory relationship.


Governments just need to regulate one thing on the ECU: access to the code. They don't have to make any specific laws. Of course, to be able to prove that this is the code actually in use, one would need to be able to compile it and upload it to the ECU.

Which raises another issue: people breaking the law by changing their ECU map to emit more pollutants.

But I suppose this is no more of an issue than it was/is with WiFi access points: not a lot of people do it, and you don't want to brick your car :D


I'd say there are some successful efforts to regulate software in safety-critical areas. The FAA comes to mind. I worked in the avionics industry for a while, and there are strict standards to which flight management and avionics display software must adhere. The DO-178 family of documents defines these standards/guidance/whatever. As a young engineer at the time, I remember two of the things we were not allowed to do under DO-178B... pointer math of any kind, and dynamic memory allocation.

These standards have been around a long time too.

https://en.wikipedia.org/wiki/DO-178B
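For a sense of the coding style those two rules push you toward: every buffer statically sized at build time and reached through bounds-checked indexing, never malloc'd or walked with pointer arithmetic. Illustrative only, not from any certified code base:

  #include <stdint.h>

  #define MAX_TRACKS 32u

  typedef struct {
    int32_t altitude_ft;
    int32_t speed_kts;
  } track_t;

  static track_t tracks[MAX_TRACKS];   /* fixed pool, no heap allocation */

  int32_t track_altitude(uint32_t index)
  {
    if (index >= MAX_TRACKS)
      return INT32_MIN;                /* explicit out-of-range sentinel  */
    return tracks[index].altitude_ft;  /* array indexing, no pointer math */
  }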


I find your comment about CS programs a bit misplaced. Are you aware that a research team from WVU actually uncovered this emissions problem in the first place?

The "VW diesel-gate" aside, I do share your feelings about the quality of CS programs in general. There is nowhere near enough education about real-time systems, high-reliability systems, and formal verification methods. All of these topics are completely appropriate academic material, in addition to being fundamentally useful for business needs. I'm not sure any of these topics are covered in the usual undergraduate curriculum.


Auto makers have crash tests and even pay private independent companies for it, AFAIR. Let's create a software certification entity that gives stars and stuff that auto makers can display in their ads.


What about mandatory "preferred form for modification" source code releases and mandatory bug bounties?

That is, if anyone finds a bug impacting safety in the ECU code, the manufacturer has to pay $1 million to them.

If any employee shows that the company released obfuscated source code, or that the binary is not compiled from that source code, they get a $100 million reward paid by the company and criminal charges are filed against those responsible.


Self-driving cars are also starting to be commercialized, which raises another level of questions.


Perhaps the hackers trying to release an open source version of the John Deere on board computer can extend this drive to all computer driven vehicles.


I think the Volkswagen episode shows that we need better emission testing mechanisms.

Demanding that all code be open-source unfairly targets the software industry (companies that sell software products for money). Why not ask Google to make their search engine code public? If all software code must be publicly available, why not demand that Coca Cola also make their formula and their entire manufacturing process public? Why not insist that no product must have any proprietary/secret information about how the product was created?

Ultimately, the proof is in the pudding. I think we need a better emission testing system, not a government review of software code.

IMO asking for free software code is an ideological demand that is better promoted through the free market. Free software ideologues should refuse to buy products that have proprietary components, but all of us should only ask that the government fix its emission testing process.


> Demanding that all code be open-source

Open source means a license which grants others permission to modify and share the code in a non-discriminatory way. This is not that.

> asking for free software code...

Free software also adds patent grants and the right to run the software for any purpose. While anti-DRM measures have been called for in other articles, code inspection doesn't need to be done by volunteer researchers and can simply be done by government inspectors. This leads me to...

> why not demand that Coca Cola also make their formula and their entire manufacturing process public

I am sure Coca Cola allow health inspections into their manufacturing process. They are also required by law to provide an ingredient list in order to be sold for human consumption, including a nutrient list which further details what Coca Cola is made from. If they had sneaked in some illegal substances, say rat poison, we would be here asking why current regulations were not enough.

> we need a better emission testing system

We can always claim that the testing systems need to be better, but it is not enough to just do black box testing when the product itself can identify and change behavior during testing. If society keeps the current system, we would need to at least add criminal liability for deceiving during testing.


>> I am sure Coca Cola allow health inspections into their manufacturing process. They are also required by law to provide an ingredient list in order to be sold for human consumption

Check your coke can. Look for the word "flavor". That's not an ingredient. That's their secret recipe stuff. They don't have to disclose it on the label. They can make a "generally recognized as safe" (GRAS) determination in-house and exempt themselves from disclosure of a trade secret. These are often submitted to the FDA for approval, but not always. If the substance can be determined to be GRAS based on publicly available information, they need not submit to the FDA. It's all about allowing companies to keep their trade secrets ... which is very similar to the situation with vehicle electronics.


If a car manufacturer wants to add flavor to their software which follows the very strict regulations of GRAS, then why not? If someone wants to add different lights on their hubcaps, or maybe a voice-controlled assistant inside the car, then they are free to follow all the regulatory hoops in order for that isolated unit of additive to be declared safe.

The legal fallout if Coca Cola claimed rat poison as GRAS would be nothing less than a major incident. The expert panel that they are required to hire would likely go to jail for fraud. The documents sent to the FDA would be proof of intentional deceit, basically guaranteeing that whoever submitted them would get the harshest punishment available to the court. All of it would be enough of a deterrent that companies only do this for the most critical company secrets which they are 100% sure are safe, and everything else can be inspected by the FDA. A pragmatic and reasonable method to make life-critical products as safe as possible.


There have been plenty of attempts to reverse engineer Coca Cola - and some came pretty close.

However, the major part of Coca Cola (and its flavour) is in the recipe - the technical process that yields the final product, not the chemical composition of the ingredients.

I don't have a reference handy, but I am going to present you with an analogy about french fries. I hope you can understand my point from that case (which is even more illustrative of the phenomenon I am trying to highlight).

http://aht.seriouseats.com/archives/2010/05/the-burger-lab-h...

There are plenty of fields where the Car Companies can compete - and software quality should absolutely be one of them. Therefore killing DRM does everybody a favor.


Are we really talking about open sourcing here?

As far as I know the discussion is against the DMCA, which prevents even looking at the code on the car. The DMCA makes no sense whatsoever.

https://www.eff.org/deeplinks/2015/09/researchers-could-have...


Devil's advocate:

The law is trying to avoid there being a distinction between code that lives on hardware you have in your possession and code that lives on a company server.

For instance you might be indignant that you can't look at the code on your proprietary device, but you aren't indignant that you can't look at the code that runs Google's search engine.

So without laws to "level the playing field", in a sense, server-side service companies get a distinct advantage as they're able to have trade secrets, while companies that distribute code are kinda screwed.

Even with the current legal environment companies STILL try to push processing into their own server controlled space (see Siri, Cortana, most cloud services)


Honestly, I believe if you're given a physical device in your hands and pay for it, you should have the right to figure out how it works.

The problem of everyone uploading code to the cloud to prevent this (which is not applicable to car firmware anyway) is different, and a valid way to protect your IP. I don't particularly care that this makes code move into the cloud. But the packets sent to your device, which may affect your device, your health, and your well-being directly, you should be able to read and modify -- same as the local code. By forcing black boxes upon us, we don't have even minimal control, understanding, or prediction of what our own devices can do to us. It's insane.


So you have zero sympathy for the fact that this creates an unequal playing field?

That Google gets to keep all their code private but a car company can't (due to the rather arbitrary fact that it's impractical for the car to ask a server how to change the engine parameters)

How about if I want to buy a device and agree to not see the code? (b/c for instance - I personally - have no interest in it) Should I not be allowed to enter such a contract with a company?

If yes, why is a company not allowed to sell devices exclusively with such licenses?

If not, why are you limiting the consumer's rights? (this rabbit hole simply leads to companies only leasing consumer's devices, b/c selling them suddenly involves giving away all your trade secrets)


> So you have zero sympathy for the fact that this creates an unequal playing field?

I have, it creates an unequal playing field between you as a company and me as a user. It reduces my ability to adapt a product to my needs. It reduces my ability to learn from it. And yes, it also reduces my ability to reverse-engineer it, improve it and then become your competitor. It stifles progress.


You've completely ignored all my arguments and made a counter argument... =S

To address it, I think that when you say "creates an unequal playing field between you as a company and me as a user" that's a bit of an apples to oranges comparison. When I say playing field - I'm comparing two potentially competing software companies (one just happens to have a mechanism to physically separate you from the code you in-effect use)

You have no ability to learn from Google even though you use their search engine - you can't reverse engineer it or improve it. Does this upset you? (again, you will probably ignore my question)


> You have no ability to learn from Google even though you use their search engine - you can't reverse engineer it or improve it. Does this upset you?

I do, in so far as it interacts with my machine. I can look at the APIs, at the traffic patterns, at whatever code is shipped to my browser. But yeah, the secret sauce stays on their servers.

But then again, comparing Google to VW is apples-to-oranges too. Google is a software company, VW is a car manufacturer that incidentally has to write code for their engines. They are not competing with each other, they are not playing on the same field. So VW having no way to use SAAS for ECUs (thank God) is as much unfair to them as their DMCA restrictions are unfair to me (the user).


Well actually they compete for the same workforce

So if you run a SAAS company you have the ability to create a giant monopoly make billions and buy up all the talent, and if you run a device/native-code-based company you should be relegated to fighting on the margins making returns in the window between your code-release and your competitors' reverse-engineering?

I'm not even sure what to say other than.. don't you think that's .. silly?

I think you can see the effect of this on VCs, where most funded startups are SAAS-ish, and you'll never get funding to make a product that's native.

This distorts the whole industry

A simple example is face identification. Facebook has some secret sauce for doing an amazing job. You upload a photo and it almost always knows which one of your friends it is. I'd love to have a native app that does that for all the photos on my phone. Am I ever going to see it? No, b/c anyone who is halfway decent at that stuff will go work for a SAAS company and make a lot of money by keeping his/her innovations as "secret sauce"


If you want an apples-to-apples comparison, why not look at the actual physical devices that Google is selling? To the extent that Google and VW are "competing for the same workforce", the folks writing code for the Nest, OnHub, or heck even Android are the ones you want to be thinking about anyway...

...And more importantly, the implications for the folks buying and installing these products are a hell of a lot closer to those pertaining to VW's ECUs.

If you're gonna argue for a level playing field, it probably helps to start with teams that are playing the same game, or at least one that's reasonably similar. Right now, you're arguing that Google's bobsled team would unfairly compete with VW's baseball players, ignoring that Google fields a cricket team as well...


> ...you should be relegated to fighting on the margins making returns in the window between your code-release and your competitors' reverse-engineering?

One's ability to profit from reverse-engineering a competitor's hard/soft/firmware is not unlimited. IIRC, if the purpose of the product of the RE isn't to be compatible [0] with the RE'd 'ware, then you absolutely have to do a dirty room RE and documentation followed by a completely independent clean-room reimplementation. This raises the cost of the endeavour by at least 2x.

See, the original holder of the copyright on that 'ware still potentially has a real claim on the product of a solely dirty-room reverse engineering endeavour.

Additionally, AIUI, many software licensing agreements make the situation a little more difficult and uncertain for our would-be reverse engineers.

And, if the thing you're RE'ing makes use of patented material, I suspect that things get rather expensive rather fast.

[0] That is, to work alongside or enhance the 'ware.


Regarding an equal playing field, if all actors in the same space have the same constraints, I don't see how it distorts anything. There are already plenty of standards for aeronautics, healthcare, and automotive (MISRA, DO-178, ...) that don't apply to other fields. Does that create an unequal playing field?


Well I'd argue that in general companies that have been able to shift their business into remote servers are doing much better.

In fact I can't think of something that runs native on a device and is better than all its competitors that run "in the cloud"

B/c the biggest difference is that by having the secret sauce physically in the hands of competitors, you aren't able to create a vast difference in the quality of your product.

Following standards creates horrible barriers to entry and to innovation, but at the end of the day you aren't spilling the beans on how you do all your tricks. You can't make some fancy new valve control system no one will ever figure out.

Then you look at someone like Google, and it's 100% driven by secret recipes no one knows about

Maybe there should be indignation about trade secrets, but I feel like the physical distance to the code you are running is irrelevant.

When I search for a word or phrase on Google, I'm running code remotely on their server. Yet no one feels like I should have the right to see that code. Why?


> When I search for a word or phrase on Google, I'm running code remotely on their server. Yet no one feels like I should have the right to see that code. Why?

Because I have not paid money to Google purchasing a tangible thing. If I buy any tangible thing, it should honor the doctrine of first sale, and I should be able to modify my tangible thing as I see fit. It is my property.

Google is akin to a remote service provider; I send in a thing, modifications to my thing are made, and then I get the thing back. I don't ask my warranty repair firm or any other entity that provides a remote service how they did it. I simply make a request at an agreed-upon price (or agreed upon terms: you'll provide ads in exchange for the service) and if the terms are not satisfied I will not hold up my end.


For small businesses it does. There's a huge cost to be compliant with the regulations in medical and aviation space, so you don't see many successful startups and not much progress as a result. We are still flying WWII rockets and use crude antibiotics.


> There's a huge cost to be compliant with the regulations in ... aviation space, so you don't see many successful startups and not much progress as a result. We are still flying WWII rockets...

SpaceX would like to have a word with you. IIRC, they've designed and built -from the ground up- their own rockets that get payloads to orbit at 1/3 the cost of anyone else.

The other players are still flying rockets either largely designed in or actually from the 1960's [0] because not many of the folks who need things in space understand how to build rockets. This means that -much like the telecommunications "market" in the US- incumbent players can put next-to-nothing into R&D and new tech and just sit around and watch the fat paychecks come in.

It's an industry that's in dire need of a shakeup.

[0] WWII was from 1939 to 1945. ICBMs [1] are substantially more sophisticated than any rocket from WWII.

[1] Which are effectively what was transporting the astronauts in the Apollo program.


Regulations are not what is responsible for us flying WWII rockets and using crude antibiotics; they're responsible for those rockets not exploding too often or those antibiotics not killing us in most painful ways imaginable. Rocketry and medicine are hard.


Some fields are more capital intensive than others. It is much easier to launch a mobile app than an oil refinery. It does not distort the competition between companies who have to follow the same rule book.

I understand where you are coming from but you need to have a minimum threshold for software that is going to be a matter of life and death.


Google isn't responsible for software that could glitch out and apply your accelerator directly into a concrete wall.

Like the GP said, physical devices that have a use that is in some way responsible for a human's safety or some element of real risk (cars, medical equipment, fridges, ovens etc.) should have inspectable source code. It need not be open source, just able to be viewed.

That said, if companies are willing to cheat emission schemes they'll probably obfuscate their code to "protect IP" or something.

I'm not saying that it's impossible for cloud software to harm a human, but every example I can think of is fairly contrived and would probably result in somebody reading the code and going "Why does this x-ray query a server for this person's safe dose?"


Dear shill, if you don't want to see the code for your device, don't look at it.

The rights of other, less optimistic consumers and of government agencies are more important than the desires of your device maker.

Additionally, if companies lease devices they are no less accountable for the code on them.


Well, the company could rent a device instead of selling it. Just like google provides services, instead of selling its code.


Why not ask Google to make their search engine code public?

That's a great question - why not? For many consumers, Google essentially is the internet, and by meddling with search results (which they already do to try to personalise my results) they could make or break companies or politicians by presenting one-sided or even false material. That's a lot of power - why shouldn't we know how it works to make sure they're not abusing it?


Why do you need to see Google's source code when you already know that Google is personalizing the search results for you? Hypothetically, what kind of parameters discernible in Google's search code would make the difference between "OK meddling" versus "Evil meddling"?

The bigger question is: why do you think that there is some kind of pure, objective ranking of websites that Google is now tampering with? Do you think that someone whose IP is located in Utah should get the same result for "good hamburgers late night" as someone in New York? If someone from Vietnam searches for "Vietnam War"...should they be sent to the entry at vi.wikipedia.org because it's in Vietnamese and perhaps more relevant to the Vietnam user? Or to en.wikipedia.org's version, because besides having a shitton more inbound links, it's likely to have also many, many more eyes and editors on it? These are complicated things that we don't need a review of source code to debate. But your suspicions toward them seem to be based on a worldview in which there is One True Web for everyone, which seems as problematic as the mindset that surely Google can do no evil.

Given that Google uses personal data to shape the web its users see...isn't it even more relevant to see the data they collect? AFAIK, Google still allows us to download virtually every bit of data we've explicitly generated on its services, including all the emails and search requests we've conducted.


You're reading a lot into what I posted. For the record, I was mostly playing devil's advocate, but I don't think the question is as ridiculous as the post I replied to seemed to think it is.

There is One True Web, of course, which is the collection of all documents on the web at any given point in time. But the interface to that web is nearly always a search engine which are all obviously presenting a list of suggestions according to some criteria. Google used to use PageRank, which was a relatively "objective" criteria in that at least it applied equally to all pages. Google now uses tons of data to make decisions about what to show me when I search, but neither you nor I have any idea what they are. Maybe they do actually use some algorithm which applies equally to all pages. Maybe they use some deep neural net, and not even they know how it actually works. Maybe they have a list of heuristics that they're evilly using to manipulate all the world's information. We'll never know without seeing the source code.

Again, I'm not tabling it as a serious proposal, or claiming that Google are doing anything evil. But given that they have the potential to essentially manipulate the sum total of the world's knowledge as seen by 99.999% of the population it doesn't seem ridiculous to consider whether we might like more insight into what they're doing.


> Google now uses tons of data to make decisions about what to show me when I search, but neither you nor I have any idea what they are.

To be fair, a lot of people at Google probably don't know either. Peter Norvig had a great talk [1] in which he discussed searching Google's code base for "naive Bayes" and found some code in 2006 with a funny comment:

> “And it was fun looking at the comments, because you’d see things like ‘well, I’m throwing in this naive Bayes now, but I’m gonna come back and fix it it up and come up with something better later.’ And the comment would be from 2006. And I think what that says is, when you have enough data, sometimes, you don’t have to be too clever about coming up with the best algorithm.”

[1] https://youtu.be/ql623nyCdKE?t=5m25s

I don't want to sound like an anti-algorithm-think-of-the-humans advocate...I love algorithms...but I think your statement:

> Google used to use PageRank, which was a relatively "objective" criteria in that at least it applied equally to all pages.

...besides being too oversimplified even for a simplification -- I don't think the evolution of Google could remotely be reduced to: first there was purity of math, then came the money -- but I also think that you overlook the bigger issue... No one disputes that algorithms are statements based in mathematical truths. It's the decision to use an algorithm -- including the weighting of its tradeoffs -- that is decidedly biased and opinionated.

How is "a page should be evaluated by the number of pages (plus the authority of those pages) link to it" not an opinionated statement of the way information should be organized, versus the pre-Google algorithms of "If a page has a lot of mentions of a word, it must be particularly relevant to that word"?

The fact that PageRank had to be changed and modified as soon as people figured out how to create networks of backlinks should in itself be evidence that an algorithm -- and the decision to use it -- is not just objective truth in the way that 1 + 1 = 2 is.


> If someone from Vietnam searches for "Vietnam War"...should they be sent to the entry at vi.wikipedia.org because it's in Vietnamese and perhaps more relevant to the Vietnam user? Or to en.wikipedia.org's version, because besides having a shitton more inbound links, it's likely to have also many, many more eyes and editors on it? These are complicated things

I don't find this one so complicated; they're better sent to en.wikipedia.org, because that page, like their query, is written in English.


Sorry if this is already evident to you...but the differences between Wikipedia sites are not just about the language. vi.wikipedia.org is edited and maintained independently from en.wikipedia.org. The Vietnamese Wikipedia page [1] is not as lengthy as the English one. And I'm not as good at reading Vietnamese as I should be, but the Vietnamese page doesn't seem to have the same content for "Media and censorship" that the English one does...among other differences. So I don't think it's as simple as "Send them to whatever language they used" (though I guess this is one case where seeing Google's source code would be enlightening, so...touché?)

[1] https://vi.wikipedia.org/wiki/Việt_Nam


I have no doubt that the English page is higher quality than the Vietnamese one. That won't matter at all unless the person searching can read English. When looking for information, there are no concerns greater than "even if I find it, can I understand it?"

Even if the Vietnamese page were better, there's no excuse for replying to an English query with a Vietnamese result.


Additionally, it could depend on whether they are using google.com or google.vn.


> Do you think that someone whose IP is located in Utah should get the same result for "good hamburgers late night"

Yes, I absolutely do. What if they want to find out what "people in general, according to Google" think and write about that? That question isn't even possible anymore, if you will.

> If someone from Vietnam searches for "Vietnam War"...should they be sent to the entry at vi.wikipedia.org because it's in Vietnamese and perhaps more relevant to the Vietnam user?

No, because "Vietnam War" is an English phrase. I doubt it's spelled that way in Vietnamese.

> of a worldview in which there is One True Web for everyone, which seems as problematic

How so? When I watch a movie, I see what everybody sees. If it's full of advertisement or laced with propaganda, I know it's that way for everybody. I think that has value in itself. Not every query is about getting what I want, sometimes I want to see "how the world is", and for that purpose Google, Facebook and others aren't helping with their being overly helpful. Others may feel different, but FWIW, I never heard anyone say that typing "ordering pizza in X" is just too much work compared to just typing "ordering pizza". Thinking such things are neat were, as far as I can tell, unilateral decisions made by these corporations. Nobody asked for it, nobody would miss it, and I think it's harmful just for destroying the idea of a "consensus reality" on the web even being possible.


Google's mission statement is to "organize the world’s information and make it universally accessible and useful." Currently, their opinion is that when a user searches for "late night hamburger place", the information most useful to that user is a late night hamburger place near the user, if there is such a place nearby with a statistically significant amount of noteworthiness, i.e. not just a random blog post from another user near me in Palo Alto that mentions how they loved eating Shake Shack in New York.

Your opinion is that what is usable to people around the world is knowing which page ranks highest according to a metric such as PageRank when it comes to "late night hamburger place". Because numerical metrics is "how the world is". There's no way you could be a casual habitant of Earth and think that that's how many other people evaluate things in this world.

But here's an example: What if a user types, "late night hamburger place near me"? How should Google interpret that? At what point does "pure" PageRank shut off and "Google as the new Yellow Pages" kick in? That in itself is an opinion not just about how information should be organized and what is useful, but about linguistics: whether or not that query is interpreted as a question or a user typing in the words he remembers from a song he likes and hoping that this blind query finds it.


> Your opinion is that what is usable to people around the world is knowing which page ranks highest according to a metric such as PageRank when it comes to "late night hamburger place". Because numerical metrics is "how the world is". There's no way you could be a casual habitant of Earth and think that that's how many other people evaluate things in this world.

Yes, and? The question was what I find better. I answered that. That others find something else better is not relevant to what I find better. I'm more than just a casual habitant of Earth and I notice people generally gobble up anything just because it's there and someone is pushing it. The success of corp X or Y means as much as the success of Hitler to me, I still have to make up my OWN mind, people are mostly useless even as individuals and doubly so in aggregate. What they like or don't like doesn't matter unless they can present a good argument for it. Yes, I'm arrogant. And I do not expect Google to agree, they "have a business to run". But that doesn't change my opinion.

Also, I said "how the world is according to Google".

> But here's an example: What if a user types, "late night hamburger place near me"? How should Google interpret that?

Where to buy product X or eat Y is below even trivial, I find it nauseating but not surprising that such stuff is constantly brought up. And yet that's the "killer application" I have to find a replacement for, with "why would anyone even think about that much less spend one second programming anything related to it" not being a valid answer, and my concerns about people seeing what others see not even worth being mentioned, much less addressed.

Don't interpret it at all, and people will have to be more specific, try out various phrases, which they will learn to do quickly. Adding "in [name of city]" to a query shouldn't be that hard.

Also, I'm tired of "information" being held hostage by what is merely about products and those who sell them. Yes, technically that's information too, but it's a very high-minded word for the lowliest of human aspirations in my books. The ones that matter when I am (indirectly) asked what I like or don't like.


Personalization is not a problem. Systematic bias is a severe problem, even if it is not intentional.

The 'one true web' idea is a red herring. We absolutely need to know how google biases the results.


> That's a lot of power - why shouldn't we know how it works to make sure they're not abusing it?

That's a double-edged sword here. The current problem with search, and one of the reasons the exact algorithms stay undisclosed, is not Google. It's your average Joe the small business owner, who sees no problem with poisoning the Internet with spam and fake websites, just to gain a competitive edge. There's even a whole industry (SEO/SEM) serving such Joes, that specializes in extreme "meddling with search results". As long as Google maintains some control over it, it can tweak and twist their algorithms to counteract the manipulation, or even explicitly penalize biggest offenders. Take that control away and the Internet will go to shit.

Search is a common resource, and it needs to be managed. I'm far more willing to trust Google than a mob of hundreds of millions of "entrepreneurs" trying to one-up one another at the cost of everyone else.


This sounds similar to security through obscurity.


Yes, but you don't have much other choice. Security through obscurity can be, and often is, practical, and in this case a full, open solution is an AI-complete problem (i.e. equivalent to creating a superhuman artificial intelligence), because it would need to understand the motivations behind actions, and not just the actions themselves.


It is a valid proposal. Google is not just selling you a search algorithm with their google.com platform, but also the server infrastructure that maintains it, the brand name (a vast swathe of less tech-savvy Internet users say "google X" instead of "go to X's website"), and the uptime. Yes, the algorithm constitutes a portion of their value add, but if they open sourced google.com on its own, they would not see their business destroyed for it.

Would Bing suddenly become a lot better? Certainly. Would people start using Bing over Google if it just became as good as Google search? Certainly not. The only candidates that might take market share are DuckDuckGo and StartPage, which preach privacy, and thus a lot of users (including myself) prefer them despite their poorer search results. If they could get Google's algorithms, they would produce better results and probably steal market share.

How much market share? 1%? Do privacy savvy searchers even constitute 1% of search traffic now?

Yes, it is absolutely a bad business decision for them to publish their source. They are much more likely to lose money than to see third-party contributions to their code make their search so much better that it offsets the cost of everyone having access to the same quality of search. But I still postulate it would not cost much revenue and market share, and that there are tremendous applications for extremely sophisticated search algorithms like Google's that could spread economic prosperity to other disciplines; I imagine their work might be applied in completely unrelated industries to other search-domain problems. Their algorithm is ultra-specific to web page indexing, but I'm sure in that Goliath beast of code there is some novel ingenuity that we are missing due to its closed nature.


Maybe because thousands of people have spent millions of hours making the best search engine in the world and they should be rewarded for their work by making money off of it instead of giving it away for free?


Publishing source code doesn't preclude making money off it. There are many natural experiments in this category, where source code of proprietary products was leaked with little ill effect. It's happened, e.g., to Windows, many games, VMware... This pretty closely models the scenario of making source code public but not giving anyone a license to use it for purposes other than study.


Linux is also made of millions of man-hours of work and yet it is free and Free and it has become a great gift to Humanity as a whole.


I think patents exist for this very reason.


But software patents are clearly a scam, too.


They have been rewarded for their work. Perhaps there should be limits on these rewards?


This is a serious problem with all digital products and comes down to fixed vs. variable costs: development costs are fixed costs (they are independent of the number of units sold), while production costs are variable costs (they have to be paid for every unit sold).

As digital copies in most cases have no or negligible variable costs (support may be regarded as variable), the earnings for certain products can be way out of any sensible relation to the costs. But who should say "enough is enough"? As long as there is no sensible answer to this question, there can only be one answer: let the market decide...
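A toy calculation with hypothetical numbers shows why per-unit cost for a digital product collapses as volume grows, which is what makes the earnings so unbounded:

```python
fixed_cost = 1_000_000      # development cost (hypothetical), paid once
variable_cost = 0.01        # per-copy cost (hypothetical): bandwidth, support, ...

for units_sold in (1_000, 100_000, 10_000_000):
    per_unit = fixed_cost / units_sold + variable_cost
    print(f"{units_sold:>10} units -> {per_unit:10.2f} per unit")
```

Past some volume, nearly every additional sale is pure margin, which is why "how much reward is enough" has no natural ceiling.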


> let the market decide...

Are you talking about a market with or without a regulated strong copyright law?


This is a great question. What would the world look like if no individual could earn more than a million dollars a year, or if companies were taxed at exceedingly high rates once they reached a valuation of 100 million dollars?

I realize it would be tricky to implement, but considering the societal benefit of more even wealth distribution the idea itself is worth exploring.


As determined by who?


Well clearly if the rewards included all the money on the planet, that would be egregious. It seems fairly likely that there would be limits that are lower than that (all the money in a country) and possibly below that....


Bruce Schneier made a similar proposal in Data and Goliath, his latest book. I forget exactly how similar, and of course the details matter; but the purpose behind it was much as you say.


> For many consumers, Google essentially is the internet,

That's their problem.


> IMO asking for free software code is an ideological demand that is better promoted through the free market. Free software ideologues should refuse to buy products that have proprietary components, but all of us should only ask that the government fix its emission testing process.

The problem is that the number of devices out there built from non-proprietary components is close to nil. So there's a clear lack of choice; it's almost impossible to get a computer without a non-free blob inside as well. The balance is clearly skewed towards proprietary-everything.


I'm probably one of the few willing to buy a SPARC computer that is completely free, even at the likely 500% markup or more required to build it.

But I also imagine it would be impossible to build due to how few of us even care.

You would need chips, which are pretty much not being manufactured anymore. Good luck buying fab time at TSMC and having enough volume to justify it. You also need a design; thankfully the SPARC chips have open designs, but they are really old, targeting old fab tech. Good luck getting the engineers to update it.

You need mainboards, assembled somewhere (expensive) and fabbed (again, expensive). You could design your own or reuse an ancient published design, if any exist.

You would need openly designed DRAM, and good luck trying to support modern iterations like DDR3 or DDR4 on old SPARC boards. I'm not aware of any DRAM being sold with open blueprints, but I also haven't looked much into it.

You would need open storage devices. No SSD or HDD qualifies today, since they all have proprietary microcode controllers. No open designs or firmware to be found there.

You would need a graphics accelerator. While there are some purely free software GPU drivers (Broadcom's, for the Raspberry Pi), none of the designs are open. Again, good luck.

All combined, the cost of just getting these five core components into a firmware-free / open-design state is insane. It would be on the order of tens or hundreds of millions USD, probably a billion or more, just to get a few thousand units. At those kinds of costs, let's say we are optimistic and sell 10,000 fully open computers and it costs only $100M to build them: they would be $10,000 each for something probably as powerful as proprietary hardware you could pick up today for $50-100.

One day I'm absolutely certain there will be a sustainable market for such a computer, because with proprietary hardware and software you are guaranteed to eventually be screwed over. It's only a matter of how deep into the rabbit hole you want to get before you wake up and realize how bad an idea it is. Eventually, I am certain, all hardware and software design will be open; it's just a matter of how long we want to slow progress and risk our individual prosperity on proprietary tools until enough people wake up to the problem.


>Free software ideologues should refuse to buy products that have proprietary components

This doesn't work when you are a small minority of the market. It just means you can't have anything. The economies of scale required to produce processors, etc do not enable niche markets to thrive.


> Demanding that all code be open-source

That's a fundamental misunderstanding of the requirement for 3rd party review, and it seems to be a common oversimplification here on HN. There is some precedent for auditing important code without any intention of open-sourcing it:

http://uk.businessinsider.com/microsoft-opens-transparency-c...

http://uk.businessinsider.com/apple-china-security-audits-ns...


>I think the Volkswagen episode shows that we need better emission testing mechanisms.

Code is not the problem - in this case code is just one possible loophole out of many possible loopholes.

The fix here is to make sure that corporate officers who break the law and cause significant harm are jailed and/or stripped of any profits they accumulate as a result of their actions.

This will not happen because we've set up a culture where the higher up the pyramid you are, the more you gain personally and the less you need to worry about being punished for being a bad actor. Convictions do happen occasionally, but they're exceptional, not inevitable.

You're not going to fix that political/economic problem - which is a direct outcome of free market ideology - with code review.


https://en.wikipedia.org/wiki/Open-source_software#Open-sour...

You can make it source-available only. China would probably just use it as if it were free, I'll give you that, but they use it anyway.


You can have a trusted third party, either a government entity or a certified vendor, inspect the code under NDA and verify there is no foul play or egregious or sloppy practice.

It would be no worse than letting contractors look at your code.


>Why not ask Google to make their search engine code public?

You could ask for the code to be open in applications where it can directly kill the public, eg. car and aircraft controls, and not elsewhere.


Society should have better testing infrastructure all around. Cars, food, pills, materials, clothes. IMVHO states (govt) should be mostly that: fast and lean test suites.


> Free software ideologues should refuse to buy products that have proprietary components

Which minivan can I buy that runs on free software?


Hopefully, the Volkswagen events will make people think rather more about what is going on in their voting machines.


I am up-voting you so hard right now. I really, really want it to be true.

But I doubt that anyone even remotely likely to connect those two dots will be allowed anywhere near the camera lenses of any company that broadcasts television.

The source code to one of Diebold Election Systems' (later dba Premier Election Systems) voting machines was leaked in 2003, which summoned blackboxvoting.org into existence.

As far as I know, every voting machine system that has been examined since then has been found to have potential attack vectors large enough to fly an intercontinental airliner through. When the public records are examined, the audit trail is often nonexistent, destroyed, or falsified.

In the 12 years since then, the only meaningful result has been to end the practice of exit polling in the U.S. Yep, rather than explain the discrepancies between statistical sampling methods and the official results, we chose to stop sampling and trust the results.

So some academics apply statistical methods to the official results, to show convincing evidence that one major party or the other, or both, are blatantly cheating using specific voting machine systems.

You can hear those crickets chirping over the din of the absolute silence.

~But hey, let's all get worked up over homosexual marriage and Syrian refugees, mmkay? The 2016 national elections will be perfectly fair and honest, and not cracked and defrauded at the central tabulators at all. And since there's no cheating going on there, there certainly wouldn't be any at the primary level, with lower levels of scrutiny.~

Open source is democratic, but democracy certainly isn't open source.


Why did we end exit polling? Seems like an obvious counter-measure/check.

I don't trust for-profit media to conduct them. This is something that we all need to chip in to accomplish, if we care about democracy. If we can't trust election results, we really can't trust anything connected to government, and as the state continues to grow, that's very little.


There are two reasons.

Firstly, they were diverging from official results. There are two possible reasons. Reason one would be that the exit polls were not accurately reflecting the will of the people. Reason two would be that the official results were not accurately reflecting the will of the people. It could be both.

Since reason two would have been too uncomfortable to consider, reason one was chosen. And the exit polls dried up.

It is not even widely known that prior to ending the polling completely, the pollsters were already applying statistically-calculated adjustment factors before reporting the poll results on air, in order to force the polling numbers to predict the official results more accurately.

I don't even need to put my foil hat on to say that I trust a statistical prediction made by blindly sampling from a random pool of volunteer respondents over a secret-source black box that pretends to be counting votes, where there is significant incentive to cheat.

The great thing about exit polling is that anyone could do them, and publish the results along with their margin of error. And some people still do. They just don't get reported on television during elections night coverage any more.

The second reason is that news programs are now reporting official results, as they are tallied and reported by those black box computer systems. The computers can count electronic ballots quickly enough to report initial results shortly after voting booths close. Exit polls lost the advantage of speed. The television news programs like to be able to project the likely winners as soon as possible, and preferably before any competitors. By the time the hand-counted paper ballot areas finish reporting, the winners have already been known for hours, if not days.


Seems like this trust issue could be solved with a bit of cryptography. A search for "blockchain elections" turned up some interesting proposals. http://www.bitcongress.org/ seems like it's being served over a 28k modem though.


My company, like most companies, relies on an enormous amount of open source code in order to construct a closed source product.

We have also contributed to open source, and have plans to contribute more. There are certain things that help everyone without jeopardizing our business, and other things that are too valuable to share.

We cannot force a company to open source code without an extraordinarily good reason. I believe in open source and support it, but at the application level it is a trickier issue.


> We cannot force a company to open source code without an extraordinarily good reason. I believe in open source and support it, but at the application level it is a trickier issue.

The fact that human lives depend on the correct functioning of the code is an extraordinarily good reason. IMO, any code that can affect driving safety should be required to be released (note: this does not mean that the companies must relinquish any copyright claims, just that the code has to be publicly available for inspection).

Right now this is hard to do because all the auto manufacturers like to have everything connected, so the result would be that everything, including the in-vehicle entertainment system, would have to be open-sourced. But that's the point - such a requirement would overnight force the manufacturers to implement a proper separation between critical software and non-critical, so that they could keep the non-critical stuff closed.


I'm curious how other companies did not know about this. Did engineers at GM/Chevy not inspect the VW product and figure out that it was cheating, or at least suspect that their competitor couldn't really have some sort of magic engineering snake oil to pass emissions?


Great question. I've seen companies with entire teams dedicated to dissecting the performance claims of the competition, and then running their own tests to verify and/or debunk the same. Keeps everyone honest, and it prods along the in-house engineering effort. Why didn't the other automakers see this?


Maybe other car companies don't want to know because VW is not the only guilty company? What are the odds that only one car company cheated on only one car model?


I understand and sympathize with this point of view, but I have no idea how it would work out practically. Software patents are rightly criticized and I don't think they should be the avenue to recapture investment. But if you also remove trade secrets, the dynamics behind expensive R&D would certainly be a lot different.

I used to work for a company that produced diesel engines. Trying to pass ever more stringent emissions standards meant massive capital investment and millions of dollars in research and testing. The "secret sauce" ends up being a few thousand lines of code and lots of constants. It is literally the make-or-break factor in whether a manufacturer can sell a product or not. If all results had to be immediately shared, what happens to the competitive zeal?

My most optimistic perspective is that critical pieces of software just become common, with everyone latching onto the best option available. There is still plenty of room to differentiate, especially in something like a car, and that the sharing would be just generally beneficial.

But will the real breakthroughs still occur if there isn't the potential to be substantially better (and substantially rewarded) at some fundamental factor, like fuel efficiency?


Hopefully any system of "code inspection" could be done by authorities, and not require the code to be public.

As an example: I'm not sure, but I believe the Nevada Gaming Control Board inspects the source code of gambling machines in Las Vegas casinos. This doesn't mean the casinos can't have secrets in the source code, it just means that the authority sees all the secrets.

I do think however that "code inspection" for engine control units is a dead end. Writing underhanded code that appears to do X but instead does Y is too easy (see e.g. http://www.underhanded-c.org/). Inspecting a 100k-line C program for illegal behaviour won't be doable until the next year's model is out, at which point it's time to start again.

The solution is to treat the code as a black box, and just test it as a black box by casting a wide net. The problem is the reliance on simple lab tests that are easily fooled.
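A rough sketch of what "casting a wide net" could mean in practice (the measurement function and the numbers here are hypothetical placeholders): randomize the drive cycles so the ECU can't pattern-match a fixed certification cycle, and flag cars whose on-road numbers diverge wildly from their certified figure.

```python
import random

def measure_nox(speed_profile):
    # Placeholder for a real portable emissions measurement system (PEMS).
    # Here we just fake a g/km reading so the sketch runs end to end.
    return 0.1 + 0.02 * (sum(speed_profile) / len(speed_profile)) / 50

def random_drive_cycle(samples=1800):
    # A randomized mix of idling, city and highway speeds (km/h), so the
    # ECU cannot recognize a fixed certification cycle.
    return [random.choice([0, 30, 50, 80, 110, 130]) for _ in range(samples)]

def audit(lab_nox_g_per_km, cycles=20, tolerance=2.0):
    # Flag the car if any randomized on-road cycle grossly exceeds the lab figure.
    worst = max(measure_nox(random_drive_cycle()) for _ in range(cycles))
    return worst <= tolerance * lab_nox_g_per_km

print("passes wide-net audit" if audit(0.08) else "flag for investigation")
```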


Those magical constants would likely be specific to the engine though. I also think you're overstating how important these trade secrets are. If VW really had magical algorithms to reduce emissions, they wouldn't have had to cheat.


I'm surprised such a small amount of code hasn't been reverse engineered and the constants copied for millions of dollars of profit.


Somewhere, this software sits in a source code repository. Someone planned it, wrote it, reviewed it and approved it. Where are these people, and why are they not speaking up? Why have there not been any whistleblowing on this, it's huge?!


> Why have there not been any whistleblowing on this, it's huge?!

If whistleblowers were treated as the heroes they are instead of traitors, that might inspire other people to do the same...


What I have seen missing from most reports is that VW does not make the ECUs. Consumer Reports has reported [1] that Bosch was the company that produced these ECUs. So Bosch should have the repository and should have a request from VW to implement this particular code. I wonder if Bosch will be forced to throw VW under the bus.

[1] http://www.consumerreports.org/cro/cars/volkswagen-emissions...


If this is provided by a third party, there is a requirement written somewhere for the supplier to implement two different modes of operation, one for test and one for road. It is likely worded to sound innocent, but there is definitely a DOORS history that would give the date and the names of the people who wrote the requirements.

However, there might be a good reason to have a test mode, for engineering tests, for example. The culprit would be the one who decided to ship it and activate it in production.


This is definitely a question I have, especially since in my experience there were sometimes trade-offs due to other restrictions: every functional item needed to have value, and new functionality might require removing, changing, or disabling other functionality.


It's common for a larger OEM like VW to write large parts of the application software themselves and send it back to the Tier-1 ECU supplier only as object files. (The industry is competitive, worried about IP protection, and anything clever in the powertrain ECU application software could easily provide a competitive advantage).

Bosch would have ultimately integrated it and built the final ECU software, but I very much doubt they ever saw any source code from VW.


One of the previous articles had a comment from a Bosch representative. It said roughly "yes we supply the ECU, and no we don't perform integration or fine-tuning". Which makes sense, in the same sense that SSD vendors typically source their controllers from a supplier and then adapt the firmware to match their hardware layout/performance objectives.


The maker of the ECU is not necessarily the one who does all (or even most) of the software development for that ECU. AFAIK the code running on the car model in question is largely from IAV, and some is probably code developed by VW.


VW has already admitted to intentionally installing this software to cheat on the emissions test.


Suffice it to say that every auto maker who has to pass Euro emission tests is doing this to one degree or another, and has been for years. Mechanics know it, as do the regulators who resist attempts to reform the emissions legislation.

All of which is to say that this is bigger news than it seems, mostly because the US EPA is actually following up on this.


"A group of automobile manufacturers said that opening the code to scrutiny could create 'serious threats to safety and security.' And two months ago, the E.P.A. said it, too, opposed such a move because people might try to reprogram their cars to beat emission rules."

Oh, the irony.


I think that the code that operates dangerous machines should be publicly available. I think that applying the DMCA to keep it hidden is a bad application of that law.

The DMCA was intended to protect creative works like movies or songs, where the whole value is in the encoded file.

Code that runs a car is the opposite: it is worthless by itself. Keeping it hidden from the public just enables bad behavior, like Toyota's terrible code or VW's deception.


Agreed.

The predictable retort that there is no need for a public review of source code "because indecipherable/underhanded code can still be written" is nonsense. If the code is so poorly written and tested that it is possible to be underhanded without the risk of anyone noticing, that code should not be on the road!

We wouldn't be happy if the car's wiring and hydraulic systems were a fragile spaghetti mess that was impossible to inspect, so why should we allow it for embedded software?


The submissions to the underhanded C contest don't look very poorly written to me.


> Code that runs a car is the opposite: it is worthless by itself

What about for self-driving/automation features? There is a strong argument that such code especially should be available for scrutiny, but it is far from worthless as a standalone.


Not to mention that as far as the physical portion of the car is concerned, competing manufacturers can simply buy the car and tear it apart.

That said, there's probably a strong argument for protecting the source behind things like integrated entertainment/navigation systems, provided they're sufficiently decoupled from the car's critical systems. A really slick UI is a competitive advantage.


Yeah, but what are you going to gain from looking at the source when you can see the screens and output directly? Is there really that much innovation in entertainment systems that can't be discerned from the UX itself, and which requires deep analysis of the code? The only way that makes sense is if the code itself gets cloned, but that's a minimal development-time savings in an otherwise huge release of software + the car itself.

DMCA in these cases seems to be a protection against third-party aftermarket for upgrades, and protection against remote hacking rather than protection of IP for competitive purposes.


Admittedly you're right; there's little to gain competitively from source access there. However, it wouldn't stop car companies from screaming bloody murder if they were required to disclose the source for those components—let alone the rest of the vehicle.

I imagine it would also create a licensing nightmare since navigation and entertainment components often utilize licensed, closed source, third party software. Much more so than the core functions of a vehicle (e.g. engine, steering, braking).

If anything, affording manufacturers the right to keep non-critical vehicle systems closed source would further the goal of opening up the critical portions.

Also, DMCA as a means of protection against remote hacking is tantamount to security through obscurity—but worse. While public source code access isn't some magic solution to security, it seems likely that it would vastly improve automotive security by allowing security researchers to do their job far more efficiently.


Good point on the DMCA not being effective as a form of security through obscurity--the horse is already out of the barn before the DMCA would ever kick in.


The problem with making it public is that it would start a process of regulation. That opens the way for regulations to be used in ways that greatly restrict software creativity and growth.


Yes but it gives it to competitors as well, which is a problem.


But your competitors can't really steal it, because they are subject to the same rules and have to make their code available too. So any theft of code will be apparent.

Manufacturers will probably check each other's code to make sure none of their own code has been stolen and file lawsuits if it has.


There's still China you know. They don't play by the rules and copy everything relentlessly.


Before cars had software in them, competitors could learn everything by just buying a car, taking it apart, and measuring all the pieces. Yet that was the golden age of the auto industry, with no lack of competition and market growth. So I think it's not a problem for competitors to gain access.


Why? Competitors are the best placed entities, in terms of both resources and motivation, to perform a review.

Also, do you want officials to have to review and test one hundred different implementations of a safety critical piece of code, or would you prefer an evolution to a single standard shared code base, shared between manufacturers, for which all testing efforts are directed at new patches?


This is too vague to judge.

Does the AI code in a self-driving car apply? Or the control software in a drone?

And in the future, the ECU may not be a standalone device anymore.


Yes. Any code which has a safety responsibility should be publicly available.

You could divide the car's systems up into safety-critical and non-safety-critical components. There is no reason why the software running the car's media center needs to be publicly available, but there should be a well-defined interface between the media center and safety-critical code (or no interface at all).


So you want software regulations on how code ought to be written? Forcing the separation of components based on safety... what even determines whether a component falls under these regulations? Such a dangerous slope. Just think of a time when regulation is at its peak and people want to innovate but can't due to legal restrictions and regulations, trapped by standards produced by people who know nothing about software.


No mention of RMS as an earlier prophet?


Eben Moglen has had a long history with the Free Software Foundation. He co-drafted version 3 of the GPL and still serves as their legal counsel. It's not necessary to mention RMS every time other leaders of the Free Software movement make a comment on something, unless it's something specific RMS has said on the subject.


VW could have given the EPA the complete source code for its system, and it would likely have made no difference.

The main thing preventing inspect-ability isn't lack of access to the code. It's the incredible complexity of the software. Even barring deliberate attempts at code obfuscation, it would be prohibitively time-consuming and expensive for the EPA to gain any sort of understanding of a codebase of this size.

Comparing a complex software system to an elevator is absurd.


"The main thing preventing inspect-ability isn't lack of access to the code."

No, I'm pretty sure you can't inspect code that you don't have access to.

I get what you are saying about complexity, but I don't think that's an argument for prohibiting inspection altogether. This story directly illustrates that, at least for this particular car, there was a group of people willing and able to inspect at least a particular part of the code, if they had access to it.


>This story directly illustrates that, at least for this particular car, there was a group of people willing and able to inspect at least a particular part of the code, if they had access to it.

How does it illustrate that?

They discovered the scam by doing better emissions testing not by reverse engineering the code.


That's true, but then there followed a long time (18 months?) in which VW denied any wrongdoing, and nobody could really prove them wrong. If the source code were available, the investigators could have quickly gone from "this looks wrong" to "hey, check out this really sketchy source code!" It might also prove that it was deliberate, rather than a complicated accident.

Also, I have a hunch that they wouldn't have done it in the first place. If you know your code is going to be public that's an incentive not to do bad things, even if there's a chance nobody will ever read it.


>Also, I have a hunch that they wouldn't have done it in the first place.

That's a good point. But, I see no practical way to enforce that the source code that has been published is actually the same as that running on the car. Not without prohibitively slowing the pace of development.

> the investigators could have quickly gone from "this looks wrong" to "hey, check out this really sketchy source code!"

I really don't think this is true. Not quickly. The investigators would have to be a big team of expensive code audit experts to achieve this. And even then, if the authors wanted to, they could easily obfuscate the code to the point of making it effectively unfindable.

Apple can't even guarantee that the apps it's auditing for inclusion on the app store are malware free. And that's Apple. And apps are relatively simple compared with the codebase for a modern automobile.

The real way to fix this is just to have better and harder-to-defeat testing procedures.


>>I see no practical way to enforce that the source code that has been published is actually the same as that running on the car.

I was going to say that you could dump the code from a random vehicle and compare the hashes. But I suppose that would require having the entire toolchain to go from source to shipped code.

>>Not quickly.

Perhaps I should've said "more quickly." You've got a point that it's difficult to verify that code doesn't do anything bad. However, it's relatively easy to start from the notion that it's doing one particular bad thing and then find the code that does it, which would've been the case here.


What I mean is that changing that variable would not have much effect.


> Comparing a complex software system to an elevator is absurd.

No it's not, it's a metaphor, a simplification for the purpose of illustrating a point. It's true that many software systems are complex, but it's still important that they be inspectable.


And here I thought it was going to be an article about Richard Stallman. :-) I like the approach that proprietary software is an 'unsafe building material' (or component). I think that is something non-technical folks can understand a bit more easily than "software rights". I hope Stallman adds this approach to his arsenal.


Recording of Eben Moglen's speech, if you'd like to listen instead of reading the transcript:

http://www.softwarefreedom.org/podcast/2010/jul/20/episode-0...


The amount of logic that went into this program just so Volkswagen wouldn't have to lessen their emissions seems...illogical. Did it really cost less to build, QA and deploy this software (and keep it under wraps for this long) than it would have to just fix the damn engines?!


The problem is that lower emissions leads to lower MPG.

Volkswagen wanted the best of both worlds, Low emissions and high MPG, so they changed their settings based on what was being tested.

This allowed them to sell their cars with much higher MPG ratings while still passing the emission standards test.


Just like "good, fast, cheap: pick two" there's a similar rule in automotive tuning: "Fast, efficient, clean: pick two"

VW decided to go for all three.


That's an interesting comparison. What would it mean, from a mechanical or chemical perspective, for an engine (or the whole car?) to be both very fast and very clean, but not efficient? Does not the inefficiency also lead to a lessening of cleanliness?

I know that would probably require defining "clean" as distinct from "efficient", I simply know very little about cars and so the fast+clean idea feels a little off, and I'd like to know where I'm wrong.


> Does not the inefficiency also lead to a lessening of cleanliness?

All depends on what we're measuring. In the case of VW, it's the NOx limits that they were cheating on. In order to reduce NOx, you can run a richer air/fuel mixture, which consumes more fuel and is thus inefficient.

If you're instead talking about CO2, then you're generally right. Efficiency and cleanliness are aligned.

And my original comment could probably be rephrased as, "fast, clean, efficient, reliable, cheap: pick any three". But that doesn't have zing to it :-)


> and very clean, but not efficient ... Does not the inefficiency also lead to a lessening of cleanliness

Not at all. You can burn tons of fuel, while making no pollution at all. It might not be efficient, but it will be clean.

In a car, they use some of the energy to clean the waste product of burning the fuel, so that all that comes out is water and CO2.

But it costs energy to do this cleaning, it's easier not to.

For an internal combustion engine, higher temperature means more efficiency, but higher temperature also makes more NOx (i.e., oxides of nitrogen formed when the nitrogen in the air burns). So you can lower the temperature, or try to clean the exhaust.


Looking at it holistically, it seems pretty obvious that something like this would happen eventually.

Look at the history. There have always been concerted efforts to game the standards as much as possible. In Europe where the standards are laxer there have been lots of dirty tricks employed for a long time. For example, you use a special test car for the emissions testing. One that has an engine lubricated by extraordinarily expensive, specialized lubricants. One that uses over inflated, slick tires. And so on. Taking every inch possible. Transitioning from that sort of behavior of playing chicken with the standards to outright subverting them is not nearly as drastic a change in behavior as going from purely lawfully abiding by the strict letter of the standards to deciding to smash them to pieces the next day.

Not to mention that the fact that automakers have been gaming the system by inches for so long and getting away with it almost certainly led to a widespread belief that automakers could get away with much, much more, as well as perhaps taking the standards bodies less seriously.

Once they began down that road, they probably didn't realize how severely the actual emissions would differ from the "fooling the test" mode to normal driving mode. And as they tuned the performance of the vehicle the divergence would simply grow to the limit of the system, which turned out to be enormous (up to 40x). I highly doubt that anyone at any stage was fully aware, until perhaps very late in the process, that the divergence was so large. But by then they were committed. Many poor decisions often result from such incremental processes.


When they test the efficiency of new cars, it's fairly obvious that it serves only as a comparison between different cars, and it does not translate to real-world experience. I really wish companies were required to perform the tests with a car exactly as it is sold to customers.


Physics is a cruel mistress.


I sincerely doubt that.

Please remember that the engine control software is already there, doing exactly the same thing in normal operation: adjusting the engine settings to produce optimal results for the current driving situation. For example, the mix should certainly be different when the car is stationary (sitting at an intersection), than when accelerating or going full throttle.

There is nothing underhanded about this, quite the contrary, it is a Good Thing™.

Once you have knobs to turn, the temptation to turn them just a little too far is always great, especially if you have conflicting goals. On the one hand, you are supposed to reduce CO2 emissions, which having great mileage does, on the other hand you are supposed to reduce NOX emissions. So you fiddle.

And car companies have always fiddled with any knob they could to get performance and always pushed it as far as they could. For example, this particular problem has been known for some time and was tolerated, see http://www.welt.de/politik/deutschland/article146711288/Die-... or http://blog.fefe.de/?ts=ab029b73 (German). Furthermore, manipulating the catalytic converters has also been long-standing industry practice; I found a Toyota forum post from 2003 complaining about the practice, also in German: http://www.toyota-forum.de/threads/kat-schaltet-unter-vollas... However, I remember those discussions from the late '80s and early '90s.

So if everyone is doing it (they are) and have been for a while (they have) why now and why Volkswagen? Your guess is as good as mine, but the EPA needed a win after some bad publicity, and going after a US manufacturer probably wouldn't have been quite as politic.

http://www.zerohedge.com/news/2015-09-22/dear-volkswagen-was...

Also remember that even one single Barbecue grill or wood-fired stove/fireplace (even the "green" pellet-based ones) emits more of the pollutants in question than a thousand of these cars. So maybe a little perspective may be in order...

UPDATE: https://news.ycombinator.com/item?id=10265534 has more links indicating how widespread this is (apparently all the french car manufacturers)


>Also remember that even one single Barbecue grill or wood-fired stove/fireplace (even the "green" pellet-based ones) emits more of the pollutants in question than a thousand of these cars. So maybe a little perspective may be in order...

If we're talking about the same study, the 1,000 cars figure refers only to particle pollutants, not CO2 and NOx.

Also that study had to tweak the numbers a bit to get to 1,000 cars.

They said that wood-burning stoves produce 150g of particle pollutants per 4 hours, while new cars produce only 0.15g per 30 kilometers driven. A car will normally drive much more than 30 kilometers in 4 hours.
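Running the stated figures, with a hypothetical assumption about how far a car actually drives in those 4 hours:

```python
stove_g = 150.0          # particle emissions per 4 hours of stove use (figure above)
car_g_per_30km = 0.15    # particle emissions per 30 km for a new car (figure above)

print(stove_g / car_g_per_30km)   # 1000.0 -- the "1,000 cars" headline comparison

# But a car typically covers more than 30 km in 4 hours of driving.
# Assume (hypothetically) 120 km driven in those 4 hours:
car_g_4h = car_g_per_30km * (120 / 30)
print(stove_g / car_g_4h)         # 250.0 -- the ratio shrinks accordingly
```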

>(even the "green" pellet-based ones)

A quick look here http://www2.epa.gov/residential-wood-heaters/fact-sheet-summ...

shows that as of this year all residential wood heaters have to emit no more than 4.5g of particle pollutants per hour. So either all new wood burning stoves are illegal or you can in fact buy "green" wood burning stoves that produce way less particle pollutants per hour than 1,000 cars.

Edit: This link http://www.burningissues.org/comp-emmis-part-sources.htm

shows that average pellet-burning (and even EPA-certified) wood-burning stoves release much, much less than 150g of particle pollutants per hour.


>A car will normally drive much more than 30 kilometers in 4 hours.

A car is usually stationary for those 4 hours.


That's true, but arbitrarily picking 30km for the car and 4 hours for the stove is useless. The only reason they did it is so they can say the stove pollutes as much as 1,000 cars.

You'd need to look at average km driven per day and avg time used per day for the stove.

You'll note that they are also comparing old, non EPA certified wood burning stoves with modern EPA certified cars.


You really believe it was only VW doing this and not other companies? And it was miraculously discovered and not part of automotive war?


The Swedish transport agency found that Volvo, BMW and VW emitted ten times as much NOx in road tests as in lab tests. So it might be automotive war.


I certainly hope Google will open-source their self-driving car code before it is let loose on the public roads.


That wouldn't matter. We don't know how to inspect a trained deep neural net. No one knows how the car drives.


A trained network should be considered compiled code. It's almost literally a circuit wiring diagram, the lowest level of all code representations.

Arguably the "source" in this case is the dataset used to train the model, along with the learning algorithm. If the data are released, so that you can inspect them and replicate the network weights yourself with an off-the-shelf learner, that's a reasonably good sign that nothing nefarious is going on inside the system. (though not strictly a guarantee: you could imagine someone searching very hard to find an innocuous-looking training set that somehow encodes malicious behavior)
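As a toy illustration of that last point (a tiny hypothetical perceptron, nothing like a real driving model): if the data, the training code, and the random seed are published, anyone can re-run training and check that the resulting weights match the shipped ones exactly.

```python
import numpy as np

def train(data, labels, seed=0, epochs=100, lr=0.1):
    rng = np.random.default_rng(seed)           # fixed seed => deterministic training
    w = rng.normal(size=data.shape[1])
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1.0 if x @ w > 0 else 0.0
            w += lr * (y - pred) * x             # perceptron update
    return w

# Published "training set" (hypothetical) and the vendor's shipped weights.
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])

shipped_weights = train(X, y)    # pretend these came with the product
reproduced = train(X, y)         # independent re-run from the same published inputs
assert np.array_equal(shipped_weights, reproduced)
print("reproduced weights match the shipped model")
```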


We have no indication (and I seriously doubt) that Google's self driving system is just a deep neural net. There are probably neural net components, especially in object recognition / tracking, but only as a component in a larger system.


This would be a great time to point out that the diesel engine was originally intended to be run on peanut oil. Any diesel automobile can run on "bio-diesel" with no additional modifications. You may be (un)surprised to learn that the inventor of the diesel engine died a rather mysterious death on a trans-Atlantic cruise.


Haven't read all the comments yet, so maybe others have commented on this already, but I find the premise (critic of secret code validated by VW scandal) weak.

According to the article:

“Proprietary software is an unsafe building material,” Mr. Moglen had said. “You can’t inspect it.”

Except that in this specific case, it was the builder who designed and implemented something "faulty" precisely because they wanted it that way. There is a difference between installing a third-party set of windows and finding out that they soon fall apart because the frame is defective, and installing windows after you have secretly bored small holes in the frame (because you have an interest in your tenants going over quota on their heating expenses).


Making it possible to download the firmware as a binary blob (even if it's encrypted), so that it can be hashed and compared to the published hash for the version of the software that's supposed to be in the car, while also publishing the source code of that version, doesn't in any way mean the user has a way to modify the code, compile it, and install it in the car.
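A minimal sketch of that verification step (the file name and published digest are hypothetical placeholders), assuming the manufacturer publishes a SHA-256 for each firmware version and the blob can be dumped from the ECU:

```python
import hashlib

PUBLISHED_SHA256 = "0123abcd..."   # digest published by the manufacturer (placeholder)

def sha256_of(path):
    # Hash the dumped firmware image in chunks so large blobs are handled fine.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

dumped = sha256_of("ecu_dump.bin")             # blob read out of the car
print("match" if dumped == PUBLISHED_SHA256 else "MISMATCH - firmware differs")
```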

Now maybe the people talking about this to the reporter don't know this; but I suspect people at the car manufacturers know they could do this, they simply don't want to, so they cherry-pick b.s. arguments about why disaster will strike if code is published.


I hear all this talk about how this software scandal by VW points to the end of diesel cars, but it also raises concerns about electric cars, as they rely heavily on software. We actually discuss this in this podcast: http://www.africantechroundup.com/volkswagen-up-in-smoke-as-... Even more concerning is when it comes to self-driving electric cars. Mmm...


I can't believe it - not a single comment about Richard Stallman (in the article or this thread). When I read the title, I thought the article would be about Richard Stallman.


Richard Stallman's lawyer isn't close enough for you? :-)


I want open firmware in my next laptop so it's inspectable.

If you mix open source with closed firmware, you are only half doing it. Half doing it means there may be malware lurking in some corner where no one looks.

Richard Stallman has been right the whole time.

Also, if the software is open, there is the possibility of maintaining the code long after the manufacturer moves on to the next model and loses interest in the old device.


I think it's a really weird situation where everybody has painted themselves into a corner:

* the emission tests are extremely tight

* the car industry is very competitive

* loosening emission standards would be a political issue

As far as I know things started badly with emission tests not being representative and car manufacturers doing cheap fixes to pass them. Now would be a good time to fix this mess.


In the article, they say the software detected the 'speed' of the vehicle among other things. I always thought the speed was simply calculated as a function of the wheel turning rate - unless the article really means 'acceleration', and there are such sensors in the car?


When the trajectory assistance (stability control) detects that the front wheels spin at 60mph while the back wheels are stationary, of course the code puts itself in a special condition. Same goes if the wheels meet no resistance to acceleration: the car is either surfing on butter or sitting in a garage.
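Purely as a hypothetical illustration (not the actual ECU logic), a defeat device only needs a couple of plausibility checks like that on signals the ECU already has:

```python
def looks_like_dyno(front_wheel_kmh, rear_wheel_kmh, steering_angle_deg):
    """Hypothetical heuristic: driven wheels spinning fast while the undriven
    wheels are stationary and the steering wheel never moves is consistent
    with a two-wheel dynamometer, not a real road."""
    return front_wheel_kmh > 30 and rear_wheel_kmh < 1 and abs(steering_angle_deg) < 2

# If the check fires, a cheating ECU could switch to a low-NOx calibration.
mode = "test_calibration" if looks_like_dyno(96, 0, 0.5) else "road_calibration"
print(mode)
```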


Interesting - didn't know a) that cars measured torque, or b) speed of the unpowered wheels.


For the likes of ABS to work the car needs to know the speed of all wheels.


Maybe GPS or whether the rear (non-powered) wheels are spinning.


>A group of automobile manufacturers said that opening the code to scrutiny could create “serious threats to safety and security.”

I think it is tremendously difficult for the layman to understand that obscurity is not security. In fact it's counter-intuitive at first.


Obscurity is not security. True.

Transparency is not security. Also true.


But transparency is a prerequisite for security. Free software isn't necessarily secure, but only if it is free software can we check, verify, or fix it and distribute modified versions.


That was true when the incentive was for vulnerabilities to be disclosed and fixed for the good of all, but sadly today, vulnerabilities are extremely valuable and so the incentive is for them to be sold to the powerful.


As a car guy, I fear this fraud is the symbolic fallen domino that will lead to the end of my hobby of tweaking the tunes on the ECU of my Subaru WRX STi.


How will Volkswagen's cheating, or Eben Moglen for that matter, make it more difficult to tinker with your car?

The Magnuson-Moss Act aside, auto makers seem to have failed to notice that DMCA-ing their software has not stopped tweakers and modders.

It's so bad that John Deere doesn't want you to own your own tractor, just "license" it from them [1] (ostensibly so that they can refuse to let you look at the ECU). Would they take this step if they were confident in the DMCA's protections?

Hollywood was the bellwether and tip of the iceberg with DVD-CSS many years ago.

[1] http://www.wired.com/2015/04/dmca-ownership-john-deere/


Missed in the media frenzy is that at least one company has offered a complete ECU upgrade for VW/Audi TDIs for a while: goapr.com ... plus homebrew tunes. And VW never sicced the DMCA on them.


Really? Can't I hack into the build chain and inject code while building? The code will look pure and clean, like an ice block for sculpting.


"Reflections on Trusting Trust To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software. "

https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...


If it was standard for the software on vehicles to be FOSS, this type of problem wouldn't exist.


Stallman must be happy now that someone is actually listening (albeit not to him).


Does external code review even matter? Leaving aside the difficulties of fighting with lobbyists, entrenched companies, unions, IP risk and financial obligations, (among many other points of friction) what is the upside?

OTA updates and continuous integration will only improve, likely at the rapid pace typical of other technological advancements. What if during your project you needed every line of code reviewed before it was deployed (you probably do), but then a third party had to review it before you could ship it? This would KILL turnaround time.

Entire teams at every company are devoted to writing and reviewing this software, and every continuous update. These updates can be rolled out daily, weekly or monthly. It takes internal testing and internal reviewing to push code. This is non-trivial and scales with the complexity of the design. Further, as difficult as internal reviews and testing may be, they can still be incorrect, even when the reviewers have intimate knowledge of the entire codebase and the granular changes, source-code diffs, and commented commit messages.

How many people do you reckon it would take to independently review every line of code in production? At what level of inspection would you achieve a confidence level >90% that the code does what the developers allege, in a safe and sane manner? Open source works because people can contribute code, fork code, and use the code they write and fork. Without these incentives, a decentralized community would be unlikely to contribute to, review, and maintain proprietary code, since they would be disincentivised from doing so by the IP being protected. The government or the auto companies would do the reviews, which, beyond the massive cost (passed on to consumers), raises issues I will leave you to work out.

Oh, thanks for this grim view. It is fairly accurate; I disagree with X, but on balance maybe it is more true than false. What is the solution?

We open source the testing and make that public, and we don't need the actual code to do this. Simple heuristics are developed for whatever we need to optimize for.

* Safety already works this way and is done really well: we smash some cars up, run some stress tests, and qualify them.

* Manufacturers are required to publicly announce when they have pushed an update and which specific systems changed, like a longer commit message with details of the system and changes.

* Third-party tests are continuously upgraded to whatever battery of tests the market demands.

This will create a market opportunity for someone to create value. They will provide data and can sell it, distribute it or publish it. People will be incentivised to write tests because they use vehicles, and companies will want to be auto-trends' verified car of the year. If we accept that it will be almost impossible to convince automakers to give up their IP, and that the code will regularly change, making review prohibitively expensive, this seems like the only option.

Create in-depth testing for emissions, safety, cruise control, privacy, data transmission, etc., and have that be governed as open-source projects. Open up bug bounties (either auto-trader sites/magazines/interested companies sponsor this, or automakers are incentivized to foot the bill). By bug bounty, I mean more of auditing the car and its requirements, or, if the companies allow it, having specific features' code reviewed, like the Jeep hacking experiment.

This will happen anyway, because people don't like their credit card data being stolen, and they don't like getting caught cheating on Ashley Madison. But what they really don't like? Having their car remotely turned off, remotely piloted using self-driving technology to the criminal's dark lair, or otherwise destroyed or used for crimes.

Review is coming, maybe even endorsed by automakers with some of their code provided, but we can't expect the government to do a great job reviewing the code, so let's just have an open-source testing model and develop better tests for things we care about.


I have mixed feelings about this. I mean, yeah, open source would definitely alleviate this issue. I don't know enough about the state of car industry software (if Honda is par for the course... I'm guessing, based on my new car... not that great?) but I imagine there would be unintended consequences that make OSS a difficult pill to swallow for carmakers. But sure, if it's feasible, make them do it.

...But my objection is that there seem to be many other ways that this problem could be solved... For example, perhaps the certification process should be slightly more rigorous than "self certification"? Obviously, testing is cumbersome if the EPA can only test "10 to 15 percent" of new cars... on the other hand, it sounds like WVU engineers were able to discover it accidentally without touching the software... which is how a lot of effective engineering testing manages to be done despite black box systems.

Yeah, making software transparent is almost always a good thing...but if the test has the kind of predictable characteristics such that programmers can seemingly hard code the parameters...Is it really easier to move an entire industry to the open source paradigm than to design a more robust test?

And not to throw too much pity on the regulators...but they were dealing with an exceptionally committed adversary. It's horrible how GM's culture allowed the ignition switch defect [1] to go unrecalled until a dozen people died in such senseless ways...but the ignition switch defect was a result of well-intended, if ultimately incorrect tradeoffs in the engineering process. And the deaths were far enough apart, and their cause not certain enough, for good-hearted engineers to have a "well, someone above my pay grade is dealing with this" mentality.

But VW's situation...it's hard to imagine how the bad code came about without active intent by everyone who touched and tested that code. It's one thing for a huge company to have the kind of bureaucratic stagnancy that leads to tragic inaction...it's something else for them to commit to breaking the law in such an obvious way.

[1] http://www.npr.org/2014/03/31/297312252/the-long-road-to-gms...

edit: It really just seems the OP is throwing a bunch of assertions together, with software being the most mysterious and therefore the most obvious scapegoat. The advocate/technologist most prominently featured doesn't make cogent arguments (though maybe he was misquoted?).

The following quote from Mr. Moglen is so nonsensical that it could only make sense to people whose understanding of computers seems very limited:

> “Software is in everything,” he said, citing airplanes, medical devices and cars, much of it proprietary and thus invisible. “We shouldn’t use it for purposes that could conceivably cause harm, like running personal computers, let alone should we use it for things like anti-lock brakes or throttle control in automobiles.”

Huh? If software shouldn't be allowed to run personal computers, because PCs can cause harm...then...huh?


I've seen a lot of comments on this topic discussing the issues manufacturers might have with releasing their code as "open source". I find this odd because it's orthogonal to the issue here. No one (not even the EFF) seems to want to force auto makers to open source their code - not even under a bastardized meaning of open source like Microsoft's "Shared Source". What most articles I've seen propose instead is that it not be a felony to copy, reverse engineer and then inspect the code. Some articles have mentioned letting regulators inspect code, but most stick to a simple request: remove the DMCA anti-circumvention protection for automobile software. It's disconcerting to see a relatively simple request be pushed into the straw-man territory of requiring all code to be open sourced.


Of the DMCA issue, I definitely agree. If that's the main obstacle preventing a more transparent way to scrutinize automakers' software (among the many other kinds of software the DMCA negatively impacts)...it's hard to see the downside of fixing the law.


I assumed the quote was with regard to closed source software and read it as "We shouldn't use [closed source software] for [anything]".



