jordskott's comments | Hacker News

The problem with Mir, Unity and all the convergence ideas was that they were poorly communicated.

They didn't really engage with the community and kept all the discussions and development behind closed doors, releasing code dumps every once in a while, whenever they reached a milestone they were satisfied with.

That's what Google does with Android, for example, and that would be okay... but only if they were actually able to deliver on such ambitious plans.

You can't expect the community to welcome you when you blindly follow your own ideas and can't even produce anything of quality. And by quality, I mean something that justifies going your own way, not just another "GNOME clone" without any added features.

I'm pretty sure they had amazing ideas in mind and that they had reasons for avoiding Wayland but... is anyone able to point out any public mailing list, or public blog discussion, where they discuss all of this? I really tried to follow the whole Ubuntu/Mir/Unity/Phone/Convergence project, but all the information I found was poor and outdated. Even finding working Ubuntu Phone images for the Nexus 5 was hard.


> You can't expect the community to welcome you when you blindly follow your own ideas and can't even produce anything of quality.

So you'd rather destroy a promising program just because you weren't asked about your opinion? Doesn't this effectively mean that we stopped caring about open source, the only thing that actually matters now is the perception of broad community support? It doesn't seem to be about users anymore either, it's all about the approval of gatekeepers now.

It used to be anyone, a guy in a garage, or even a huge company, could make something, share the source code, and we'd be happy for their contribution. Now that's apparently shifted to "don't start anything new, just fall in line with the existing stuff". What company or individual would want to publish anything new in this kind of environment?


> So you'd rather destroy a promising program just because you weren't asked about your opinion?

Canonical made blunt claims about deficiencies in Wayland that did not exist; they even claimed that others like KDE would happily adopt Mir, when those projects knew nothing about it and had no such plans. Add a CLA requirement for contributions to the mix, and it understandably provoked a very harsh response from the affected projects.

> What company or individual would want to publish anything new in this kind of environment?

Those who don't insult other people and their work, I would say. Then everybody will either be happy about your contribution, or at least won't care if it turns out to be practically unusable because it's incompatible with everything.


People get paid to develop Mir. Paid developers are a limited resource in open source - they're the ones who can drive projects, fix tricky bugs and take on big features. They also wind up being gatekeepers to contributions.

At the time Mir was announced what I heard and what a lot of others heard was "Ubuntu's paid developer resources with relevant expertise are being diverted from Wayland".

Replacing X is sorely needed in the Linux desktop space, and it's a huge project. So big that unless it's frightfully mismanaged, it's extremely dubious what a split in development effort is going to accomplish. And what did it accomplish? I have absolutely no idea what Mir actually shipped that was usable. I have no idea what they were doing substantially differently and why that was good (something about convergence by using bionic so it ran on Android too, I think?). And I do remember Canonical at the time arguing they would beat Wayland to something usable... But here we are, I'm still using X, my video playback still tears, and I can't for the life of me think how the Linux desktop has benefited.

I suppose Cinnamon got created, and I now run that on top of otherwise-vanilla Ubuntu?


Wayland is actually fairly usable at this point, though some programs are buggy. With aliases to run problematic programs in X, it'd probably work quite well.
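For GTK and Qt programs there are standard environment variables that force the X11 backend, so per-program aliases are straightforward. A sketch (the application names are just examples):

```shell
# Force specific apps through XWayland instead of their native Wayland backend.
# GDK_BACKEND=x11 covers GTK applications; QT_QPA_PLATFORM=xcb covers Qt ones.
alias gimp='GDK_BACKEND=x11 gimp'        # example GTK app
alias krita='QT_QPA_PLATFORM=xcb krita'  # example Qt app
```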


Nobody is destroying anything. But people can refuse to adopt or support your project/program


> Nobody is destroying anything. But people can refuse to adopt or support your project/program

There is a difference between ignoring a piece of software you don't like and actively campaigning against it.


It's not like Mir wasn't campaigning actively against Wayland.

No project exists in a vacuum. I like it that way.


That's what they said about systemd, but careful application of product-tying strategies rammed it through.


The same way you can build anything you want on your own, I also can say it's crap and I don't want it. And that's fine, you don't need my permission and I don't need to be your public target.

But it's a bit odd that once you realize that you can't build your own vision without asking for my opinion, you start throwing a tantrum saying I'm a negativist and that I don't embrace the community spirit.

No one forced Mark to shut down the project, he made the decision on his own.

Also, community and open source is all about contributions and public discussions, not _just_ code dumps.


> Also, community and open source is all about contributions and public discussions, not _just_ code dumps.

Personally, I think software should be the central point, not populism. Ideally, an open source project would surround itself with a support community of people who actually use the software instead of wasting so much time fighting.

If I build something that 10 people like, I might not care that 3 billion people choose to ignore it. But I would probably care a great deal if 5 people spend a lot of effort and air time campaigning against it.

> No one forced Mark to shut down the project, he made the decision on his own.

I'm disappointed in that as well, in case that wasn't clear. But as a vocal critic you also don't get to wash your hands completely if something you hate goes down the drain.

> The same way you can build anything you want on your own, I also can say it's crap and I don't want it.

Sure, I just don't like how much enthusiastic assent you can get out of the implication that the main reason for your dislike is simply that I built something on my own.


There's a difference between "destroying" (active) and "not supporting" (passive). If you want to garner support, you need to be open, communicative, and, yes, elicit opinions.


You know, being "validated" opens access to a lot of opportunities, and most underdogs want just that. Open source is a fancy way to refer to unpaid work nowadays; I can't see any revolution here, just something marketable once it's optimised for production.


> Doesn't this effectively mean that we stopped caring about open source, the only thing that actually matters now is the perception of broad community support?

Oh, it's been that way for... I dunno, 15 years now? Maybe more, to be honest. It's just gotten worse recently.

Not that the other side doesn't exist too of course, but acceptance is hard - even for better ideas - if the community doesn't buy in.


It's all about freedom.

As long as you do freedom the right way.


The problem with Mir, Unity and all the convergence ideas was that they were poorly communicated.

Another problem with Unity was that it was totally not like the thing that made many of us like Ubuntu in the first place.

I'm not saying it was bad but I didn't like it. I guess this holds true for a number of earlier Ubuntu users.


> Another problem with Unity was that it was totally not like the thing that made many of us like Ubuntu in the first place.

Well not so much a "problem" per se, but this was always something that amazed me: Unity was the last thing I was/am looking for.

I get that choices for the DE are great. Unity isn't my cup of tea, but having the option is nice. However, my main quibbles with Ubuntu lay - and still lie - entirely elsewhere.

For example, I have never done a dist-upgrade with Ubuntu that didn't break everything and force me to do a new installation. This confuses me a bit, since Ubuntu is a great go-to distro for newcomers to Linux. And from that perspective I get that some people might be upset by the "NIH" of some parts of Ubuntu while others still could use more improvements overall.

That said, I like what Canonical does very much. Ubuntu is - amongst other things - a great live-system. I always make sure to have an Ubuntu Live flash drive around in case I have to save a friend's files because their Windows system kicked the bucket. And I suspect Canonical wouldn't have ended up where they are now if they didn't have the guts to go their own way on some things.


If ARM's GPU drivers are not updated on your device, it's hardly ARM's fault. ARM doesn't ship the SoC or the device directly to you, it ships it to the OEM. And it's the OEM who sets the terms of the agreements.

If you want to blame someone, start by blaming the OEM.


ARM still writes the drivers for the chip, and then doesn't update them or open them up. There's not much OEMs can do about that.


But ARM cannot, and won't, officially support the OEM's devices and start releasing updates for them. The process is a bit more complex than you might be imagining.


It still has to start at the top, and if they did update their drivers at least open source projects could then update on said platforms instead of being stuck on a 5 year old kernel till the end of time.


ARM has a reference design which includes drivers. The OEM is responsible for the final design and validation. ARM cannot take responsibility for validating thousands of implementations that vary in both hardware and software configuration across billions of devices. Neither can the FOSS community. For the handful of boards popular enough to have sufficient traction for the FOSS community to take the wheel if an open source driver were available, the OEM has sufficient incentive to support their platform. For the other 4,999,999,990 LowHo NiHao Industries unbranded boards, it wouldn't matter in the first place.


Open sourcing commercial products like hardware architecture and algorithms to the public is not exactly an easy thing to do.

There are millions of registered patents and the chances that your clean-slate ideas were already invented and patented are really high.

Open sourcing means exposing patent infringements to the public (even if you are not really aware that you are infringing anything), which means that you need to invest in a strong legal team in order to go through all possible patents and to deal with all possible litigation you might face.

In other words, open source requires much more than ideals; it also requires buttloads of money.


Why has open source been so successful on the CPU, then? Branch prediction, say, is no less a patent minefield than GPU framebuffer tiling. Yet gcc and LLVM have no trouble shipping optimizers.

This is an excuse, basically. They just don't want to because they fear revenue lost to compatible implementations.


Those are compilers. What open source CPU is there?


Uh... the GPU drivers we're talking about in this subthread are precisely "compilers" for the shader architecture (and configuration generators for the texture units and framebuffer layouts, etc...). In fact for architectures other than NVIDIA's the hardware-facing part of the linux driver already is open source.

The only secrets left are the bits responsible for turning OpenGL (or Vulkan now, I guess) calls into programs to run on the GPU.
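That "turning API calls into programs to run on the GPU" step is, at its core, a compiler backend. A toy sketch of the idea, just for illustration (the AST shape and the pseudo-ISA here are invented, not any real driver's format):

```python
# Illustrative sketch: the "secret" part of a GPU driver is essentially a
# compiler that lowers API-level shader code into the GPU's native
# instruction set. This toy backend lowers a tiny expression AST into a
# made-up register-based ISA.

def lower(node, regs, code):
    """Recursively emit pseudo-ISA instructions; return the result register."""
    if isinstance(node, str):           # variable: already lives in a register
        return regs[node]
    op, lhs, rhs = node                 # ("mul", a, b)-style tuples
    r_lhs = lower(lhs, regs, code)
    r_rhs = lower(rhs, regs, code)
    dst = f"r{len(regs) + len(code)}"   # crude fresh-register allocation
    code.append((op.upper(), dst, r_lhs, r_rhs))
    return dst

# color = base * light + ambient  -- a fragment-shader-like expression
ast = ("add", ("mul", "base", "light"), "ambient")
regs = {"base": "r0", "light": "r1", "ambient": "r2"}
code = []
result = lower(ast, regs, code)
for instr in code:
    print(instr)   # ('MUL', 'r3', 'r0', 'r1') then ('ADD', 'r4', 'r3', 'r2')
```

A real shader compiler adds register allocation, scheduling and hardware-specific encodings on top, but the overall shape - AST in, ISA out - is the same.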


Wouldn't it be fair to say that GPU architectures (at least thus far) tend to change a helluva lot faster than CPU ones?

I can think of AMD adopting and then ditching VLIW for one.


OpenSPARC T1 and T2 are open source and don't have out-of-order execution.


That's completely different; there is interest in exposing your CPU architecture in order to help people write better compilers and better programs.

But on a GPU, the whole interaction between the GPU and the application is abstracted by APIs like OpenGL and Vulkan and you own the driver, you own the compiler, you own the implementation and you own the architecture. So companies tend to protect their "secrets" since they own the whole product.

If you are asking me why is it different... Ask these patent trolls instead:

- https://techcrunch.com/2014/09/04/nvidia-sues-samsung-and-qu...

- http://www.anandtech.com/show/11101/amd-files-patent-complai...

It's not an excuse, it's the sad reality.


And there is no interest in exposing your GPU architecture in order to help people write better compilers and programs?

Just as on a GPU, the whole interaction between the CPU and the application is abstracted by APIs[1] like C and Python.

[1] Yeah, we don't call them APIs. But then we don't call GLSL an "API" either. They're all languages.


I already made a comment about Apple's GPU here: https://news.ycombinator.com/item?id=14021814

Apple actually has its own mobile GPU, built from scratch.

Tile-Based Deferred Rendering is an advantage that comes from the GPU sharing the same memory as the CPU. On a normal desktop GPU you need to transfer huge amounts of data from main CPU RAM to video RAM, but on a mobile device the CPU and the GPU both sit on the same memory system. That allows you to architect the GPU differently.

Both ARM's Mali and Qualcomm's Adreno are also tile-based renderers (though the "deferred" hidden-surface-removal step is specific to PowerVR's design).
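The bandwidth argument can be sketched in a few lines: a tiler accumulates all the overdraw for one small tile in fast on-chip memory and writes each finished pixel out to DRAM exactly once. A toy illustration (all names and sizes are made up; this is not any real GPU's behavior):

```python
# Why tiling cuts external-memory traffic: instead of blending every
# triangle fragment straight into a big framebuffer in DRAM, a tiler
# processes one small tile entirely in on-chip memory and writes each
# finished tile out once.

WIDTH, HEIGHT, TILE = 8, 8, 4   # tiny framebuffer split into 4x4 tiles

def render_tiled(fragments):
    """fragments: list of (x, y, color); returns framebuffer and DRAM writes."""
    framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]
    dram_writes = 0
    for ty in range(0, HEIGHT, TILE):
        for tx in range(0, WIDTH, TILE):
            # On-chip tile buffer: all overdraw for this tile stays local.
            tile = [[0] * TILE for _ in range(TILE)]
            for x, y, color in fragments:
                if tx <= x < tx + TILE and ty <= y < ty + TILE:
                    tile[y - ty][x - tx] = color   # blend in fast SRAM
            # Resolve: one DRAM write per pixel, regardless of overdraw.
            for dy in range(TILE):
                for dx in range(TILE):
                    framebuffer[ty + dy][tx + dx] = tile[dy][dx]
                    dram_writes += 1
    return framebuffer, dram_writes

# Two overlapping fragments at (1, 1): an immediate-mode renderer would hit
# DRAM twice for that pixel; the tiler still writes each pixel exactly once.
fb, writes = render_tiled([(1, 1, 5), (1, 1, 9)])
print(writes)   # 64 = WIDTH * HEIGHT, independent of overdraw
print(fb[1][1]) # 9, the last fragment wins
```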


Both Nvidia and AMD have already switched, or are moving, towards tile-based rendering to reduce bandwidth (and hence energy) costs.


Source? That's a huge architectural change for them.



So that's tile based rasterization not tile based rendering. There's a huge difference. This is basically just a cache before the ROPs write into DRAM.

EDIT: Also, I don't think his test is proving what he thinks it is.


No, I can say with certainty that they are working on their own mobile GPU.

They have been hiring a lot of graphics people and putting a team together.

And another thing most people are not really aware of: Apple had a lot of say in the architectural and design decisions of Imagination's GPUs that ended up in their iPhones. A good part of the development actually happened at Apple's offices, with Imagination people flying over.

So they know what they are doing, they are very well familiar with Imagination's GPU and they are more than capable of developing their own thing from scratch.


> Apple had a lot of say in the architectural and design decisions of Imagination's GPUs that ended up in their iPhones. A good part of the development actually happened at Apple's offices, with Imagination people flying over.

So that's why Imagination is insisting Apple can't not infringe: they know Apple can't have a cleanroom implementation while using the people who've talked to Imagination.

Apple have a classic "they saw the copyrighted sourcecode" problem on their hands.


Not really. You seriously think Apple would just let ImgTech guys come in without lawyers, agreements and all that sort of thing? Apple has extensive experience in this area; they had ImgTech sign everything possible to protect Apple and to indemnify themselves. It's a risk that ImgTech also took by letting Apple deeper into the development process. This isn't a one-way street here.

Apple is extremely aggressive about protecting its technologies. There is no way they just let random ImgTech engineers fly in and work on stuff with them without any agreements in advance. If that had happened, ImgTech would make easy billions from the lawsuits they could bring.

While I have no doubt that Apple works closely with their hardware partners by flying their engineers in to work on projects, I seriously doubt it was as simple as the OP made it sound.


It actually is as simple as the OP said.

That was the main reason why Apple went for Imagination instead of ARM or Qualcomm when it comes to mobile GPUs.

Imagination's market cap has been falling hard in recent years, to the point that their only major customer until now was Apple. They were desperate, and they signed very risky deals in order to keep Apple as a customer.

And Apple is a complete control freak when it comes to their products, the idea of not being able to control the stuff they put on their products is unthinkable to them.

So Imagination signed a bunch of architectural deals (instead of purely implementation deals) because that was the real product Apple was looking for.

Don't let yourself be mistaken, this whole situation is far from a surprise to Imagination. They knew this day would come; they were just trying to cling to the little market they could find until they landed another deal to stay afloat.


I'm not sure where we crossed wires but we're saying the same thing, nothing you said changed what I said.

What I meant by OP is that it is not as simple as flying their partners in, having them start working together, and then leave. Apple doesn't do that without ensuring everything that happens stays in Apple only. So flying ImgTech guys in and out does not mean ImgTech owns the patents to what they did at Apple, or the other way around; Apple can ensure they have the exclusive rights to it.


The beauty of org-mode is that you don't need to learn everything to start using it; you can just start with your typical workflow and look things up as you need them.


I stopped using org-mode for a time after having invested a lot of time in learning all the cool stuff you can do because I felt like it was overkill and I was trying to do too much with it and organizing everything was becoming a chore. If you look at all it can do, it has so many capabilities (outlining, gtd, wiki, blogging, ebook publishing, presentations, time tracking, etc).

Recently, I came back to it just for code/programming notes, and instead of trying to organize all the code into source blocks, I went the other way and added my own comments and questions in blocks that can be toggled on/off around the code that's there. Now the only org syntax I really use from org-mode is `#+BEGIN:` blocks for comments/questions I can quiz myself on, and occasionally headlines when I want to organize something because it's in the way of reading the plain text. If I gave advice to someone new to it, it would be to start with just plaintext notes and only add what you really need. You can quickly go overboard once you start trying to figure out how to do footnotes, file linking, exporting w/ images, source code execution, etc.


Not sure if it is the same thing, but I find myself sometimes procrastinating by noodling with software features. It isn't the fault of org-mode (or any other software), it is mine, for not being disciplined enough.

"Distraction free" apps don't work, because I frequently need to switch windows a lot and need some fancy features.

Really, the answer for me is to work on staying focused. (Perhaps a lobotomy would help.) But it isn't a problem with my tools.


Yes, I've done this a lot too. Emacs is the perfect environment for doing this, but it really isn't to blame. I also have a problem wasting too much time on reddit and HN. If I put all the time I've spent reading forum comments into reading really high-quality material instead (maybe books, Wikipedia, or even good source code), I'd probably be a lot more skilled than I am. But yeah, the problem is me, and I agree that looking to software (like distraction-free apps) to solve the problem for me is counter-productive.


Someone once said that the trick of being productive is to use emacs as "your operating system", or unproductive if you just end up playing with elisp all day.


Oh you want to quiz yourself occasionally? There's a minor mode for that: org-drill http://orgmode.org/worg/org-contrib/org-drill.html


To be honest, YubiKey was never really open source. Sure, they open sourced _some_ components before, but you couldn't do anything with the source.


I guess it mostly boils down to Moxie and his ridiculous claims of how much more secure Signal is when compared to other solutions (like XMPP and anything based on PGP).

Don't get me wrong, I understand the design and user-experience reasons for making Signal depend on GCM, but Moxie just loves to bash XMPP and federated protocols and to put Signal on a pedestal of exemplary security.

I admire the dedication that went into putting together the Axolotl protocol, but I hate it when he mixes his business interests with secure crypto solutions, because at the end of the day that is what he wants: to sell Axolotl to companies like Google and WhatsApp. And of course, bashing XMPP is just a business pitch to those companies.


It's not Moxie doing that, it's virtually the entire community of cryptographic engineers. And Open Whisper Systems is a grant-funded nonprofit that until recently could barely afford developers, to the point that they were considering withdrawing their iOS version, so the idea that this is all about Moxie's business interests is horseshit.


Isn't Axolotl an open protocol that can be freely implemented by anyone? https://en.wikipedia.org/wiki/Double_Ratchet_Algorithm#Usage



I agree with what was said in your links. The code was poorly written, but that doesn't mean it can't be improved once they get an active community. Sadly, right now they are inactive and I don't know why.


I think the complaints about peripheral source are valid, but they have had it running on silicon for months, so the answer to "will it work" is pretty clear.


"First of all, it is just a simple microcontroller and the implemented RISC-V instructions are not that significant for the purpose. It's like expecting the SIM cards used in mobile phones to be equivalent to PCs because they run "Java"."

Who is calling this a PC? OpenV seems to be marketing itself as a microcontroller, it keeps comparing itself to Arduino.


These are very valid concerns.

I should also add that "boring" MIPS is right now significantly more open than RISC-V, more mature, and easier to get hold of.


His concerns are with OpenV, not RISC-V.

