.NET on Linux – bye, Windows 10 (piotrgankiewicz.com)
214 points by spetz on Oct 17, 2016 | 282 comments



The real story here is the forced updates. How do people who are planning demos and presentations handle this? It's my worst nightmare.


Set back the system clock.

I watch movies on my laptop while I exercise. One day I was about to start exercising and Windows was like "Would you like to update now or in 10 minutes?" And I was like "I'm going to watch my movie," and set back my system clock, and surprisingly, it worked.

(I find this experience funny because moving back the clock was a knee-jerk reaction and it actually worked)


MSFT here. Thank you for your report. I have escalated your issue to the Windows Update team. Microsoft is committed to your security and the loophole will promptly be closed by another forced update. Resistance is futile.


I know you are joking but still want to mention:

A while ago I reported that the password field in W10 gave away a lot of entropy easily (basically Ctrl+arrow keys stopped between every character class, so if your password was newr..!VYIV you could in five seconds conclude that the password consists of 4 characters of one class, then 3 characters of another class, then 4 characters of something other than the second group.)

Whoever handled it concluded that it was absolutely "interesting" but not a security issue so no bounty for me :-/


Hmm, combine that with a USB stick-like device and you could probably implement a brute-forcer that doesn't actually attempt most passwords, since key based editing of the original password would probably work.

This of course assumes the entered password is still on the screen.


Didn't think so far as a USB stick.

That said, once you can leave a USB stick attached to the target's computer I'd say all bets are off, since at that point we can be reasonably sure that the target won't notice a USB or PS/2 keyboard sniffer either, giving you plaintext passwords.


Was your original issue dealing with an already typed password? The stick would emulate a USB keyboard using something like a tiny and wouldn't be left attached. It would perform edits to the password and try to determine which characters were special, then apply a simplified dictionary attack to the contiguous groupings of characters of the same class. The goal would be to reduce the space of the attempted guesses. I don't see how it would be a practical attack though.


IIRC Windows has pretty strong rate limiting on login attempts. You don't really need a USB stick to execute them as fast as possible.


Twice I've been bitten by unexpected updates when shutting down during a power failure. At home my UPS is only good for 5 minutes, so I want to shut down as soon as the power goes out. I do that, and Windows starts on "Installing 17 updates", with absolutely no way for me to intervene. And powering down in the middle of OS updates has to be about the worst possible time for it.


It will simply roll back the half-finished updates and retry the next shutdown. Those things are done transactionally. Powering down won't hurt it.


You mean pulling the plug instead of shutting down normally? I suppose that'd work but usually you only realize that Windows wants to update _after_ you hit the shutdown button. And powering the machine down in this state will likely result in an unbootable machine.

And good luck fixing that, especially if your hard-drive is encrypted.


What version of Windows do you use? My desktop uses the latest version of 10 (AFAIK) and the shut down button reads something like "Shut Down and Install Updates" when Windows wants to install an update. I also remember 7 having a little orange icon next to the shut down button when it was going to install an update.

Granted, I don't think there's a way to properly shut down without the update in an I-have-five-minutes scenario. But it does, at least, warn you.


IIRC there is. You can log out, and then have the option to not apply the updates (you'll have "Shut down" or "Update and shut down" in the power-off menu).


You can shut down without applying updates if you use Alt+F4 to initiate the shutdown.


Running shutdown -s -t 0 works.


Add a -f to that too; I have never seen a Windows machine with pending updates install them when shut down with "shutdown -s -t 0 -f".


For what it's worth, I regularly "pull the virtual plug" on my Win10 VM during updates because fuck you it's time to go home, and I've never had it fail to boot the day after. It's obviously an unacceptable solution to the problem, but it's at least not disastrously fatal in most cases.


Windows updates fall into various categories in how they are deployed (this also causes variations in whether you need a restart or not). The overwhelming majority of updates nowadays are deployed in a rather safe mode where nothing on the system is changed during install until, at the very end, a switch is effectively made to the newly deployed components. Those updates tend to not care very much, or at all, about sudden power loss. The next start notices there's an unfinished update and either finishes it, or rolls back and you can try again.


True, but that doesn't really help you, does it? There is no indication on the screen of which kind of update is currently being deployed. So shutting down your machine is still a gamble, something you definitely don't want on a production machine.

To use a car analogy: a modern car's safety measures would probably protect me in case of an accident, but there is no way of being sure. So I'd rather avoid crashing into things at great cost.


As a Windows system admin, I can confirm this is incredibly rare. Windows is pretty darn good at recovering from broken update states.


Eh, honestly compared to other operating systems it's really not. Windows 10 is starting to get there.


So the message 'do not turn off your computer' is just a suggestion?


"It's now safe to turn off your computer."


Why it couldn't be an "install updates now?" prompt with default set by the admin is beyond me.


Anecdotally I'm 1-for-1 on forced shutdown breaking Windows 10. It messed up the user's account; recovery was done by rolling back in System Restore.

So I'll go with "powering down shouldn't hurt it".

It wasn't update related, however. The user went to shut down, the computer said "no" (apps are open, blah blah) and the user held the power button down.


> Powering down won't hurt it.

Usually. I see enough students who mess up their machines due to hard poweroffs during updates to say that sometimes it does not work out.


In Task Scheduler, expand the tree to Task Scheduler Library -> Microsoft -> Windows -> UpdateOrchestrator and disable the Reboot task.

In the folder %WinDir%\System32\Tasks\Microsoft\Windows\UpdateOrchestrator, remove write permissions on the Reboot file for all users.

https://techjourney.net/permanently-disable-prevent-automati...


Don't forget to do it again after any updates because MS will likely "fix" the obvious mistake you just made.


Check point 6 of the linked article.


Group policy editor.

You can configure it to not update while anyone is logged in, to update only manually, at a set time, etc. It's just not in any of the "usual" settings menus. Likely rationalized as keeping average users from disabling security updates by accident, or something.


I love that group policy! It will still force a restart about every 9 days with that enabled, but it's much better than daily restarts.


This seems a tad depressing...ever heard the Malcolm X quote: "If you stick a knife nine inches into my back and pull it out three inches, that is not progress"?

And yes I am completely aware that Windows 10 updates are in no way comparable to the civil rights movement and I am in no way making any such comparison.

I am just quoting a witty man.


I find it ridiculous that you feel the need to issue a disclaimer about a comparison between Malcolm X and Windows 10. It's a perfectly valid comparison, though obviously using hyperbole, and it's pathetic that you can't use a hyperbolic comparison like this without some stupid hipster bitching about it these days.


I'm glad I'm not the only one that was mad about that. My PC just shut off and started doing updates, in the middle of the day.

My friend said, "How much did you pay for Windows 10?" I said, "Nothing, it's a free update." Then he said, "There you go."


> My friend said, "How much did you pay for Windows 10?" I said, "Nothing, it's a free update." Then he said, "There you go."

You paid for the original system that you updated from. If you are outside the 30-day window (which you are, because the free offer has been over for more than 30 days), you cannot go back to the old system.

So you didn't get the system for free, you exchanged your old one for it. Just the act of the exchange was free, which does not make the system free.

Newly bought Windows 10 licenses are not free either.


Most no-cost Linux-class OS/WM's don't shut themselves down for updates.


I'm sure I could write a cronjob that would be naggy and demand to do what Windows does.

And I could even use SELinux so that even root couldn't kill it. That's what "System" is, right?

/grrrr


XBox One is expensive, and has a ridiculous amount of updating.


If you're using Windows 10 Professional or above, you can change the Local Group Policy to prompt you for both downloading and installing updates.

For lower versions (like "Home"), there is a way to tell it that your connection is "Metered" and then updates won't be downloaded automatically; but that's not an official method to prevent updates.


For public information: Wired connections cannot be set as "metered". Only wireless ones.


Also for public information: There are hacks that allow wired connections to be set as metered:

http://www.windowscentral.com/how-set-ethernet-connection-me...


I didn't realise that, thanks.


This is indeed the worst. By itself it is enough to make me want to switch.

It's not the only thing, though. Win 10 is a decent OS in most ways, but there are too many places where it does things that I don't want and I can't (easily?) change.

I use Windows because I do C# development for a living. For now, it's not practical for me to switch, but maybe it will be in a few years. I'm hoping so.

Really stupid to drive developers off of your OS, Microsoft.


Since when do developers have much say in which OS they use? These things are decided by corporate policy and management in most places.

The best you can hope for here is that you can do the bulk of your C# development on a "rogue" Linux machine or VM, and then do some final testing and bug-fixing on Windows, since it's going to be deployed on Windows with customers (internal or external) who use Windows.


Developers get a say in which OS the same way they get a say about lots of things: by finding a job that fits their desires. For example, I decided a while back that my computer had become too integrated into my life for me to consider using a company-provided computer, so I don't, and I haven't had any problem finding good jobs.


> Since when do developers have much say in which OS they use

i predominantly work in java; and while windows is the "official" development environment (more by default than by decree) there weren't any objections or policies against using linux, as nothing in the workflow is tied to a single platform. (hooray!)


Simply stop and disable the Windows Update service, and turn it on when you have time to actually install updates. Been doing it this way for more than a year now without problems.

Of course you'll have to remember to turn it on to check for updates, but that's no different from using manual updates in past versions of Windows. The only downside is that certain things, e.g. Add/Remove Features, might not work 100% while the service is disabled.


I was out in the field consoling into a Cisco switch and my laptop restarted (I had been cancelling it for a while and it got to the point where it just restarted).

It took the better part of 10 minutes to apply updates. Thankfully it was nothing pressing I was doing on the switch. I have disabled auto updates with a registry hack. Microsoft isn't serious about Windows and is now treating it like an OS used for petty, unimportant work.


In the Win 10 Anniversary Update you can specify active hours, during which you use the device and it will not be rebooted. Also, there's a setting for a custom restart time.

http://betanews.com/2016/06/09/control-when-windows-10-insta...


I found out the hard way that you may only specify up to 12 hours as active, in one period. So, if like me, you use the computer in the morning and again in the evening, you are SOL.


You can reschedule the reboot if it hits during your working hours beyond that period (second part of the article under the link shows the UI).


The updates drove me away from Windows 10 years ago.

I still have a windows machine for occasional games. What drives me nuts is "Install Updates and Shut Down" leads to a 15 minute boot time. WTF? I thought all that was supposed to finish before the computer turned off???


What games keep you on Windows? It is my job to solve that problem, we'd love to help you out.


Star Wars: The Old Republic, Witcher 3, Watch_dogs, Overwatch, Mirror's Edge (both games), Paragon.

There is a general selection of games I play but can only play on Windows. In general I'd settle for the Mac graphics stack performing on par with Windows.


Overwatch - Windows/Mac only.


We're working on Overwatch, but DirectX 11 just isn't quite there yet :(


The reason why gaming on Linux is never gonna take off: there is always some reason that the new hot game isn't gonna work. It's been a couple of months for Overwatch and the initial hype is already gone - which confirms my suspicion that if you have a Linux gaming PC you essentially have a paperweight for playing older games that no one cares about.


This is largely only true for huge AAA games. Lots and lots of indie games come out with support for Linux on day one, largely thanks to Unity. Indie games that don't have a native port frequently (though not always) work in Wine because they tend to be less demanding. Personally I don't like AAA games, so I'm actually pretty happy gaming on Linux these days.

So yes, it might be a paperweight if you're a big Call of Duty fan, but it's hardly a paperweight to everybody :)


"The updates drove me away from Windows 10 years ago."

This is amazing because Windows 10 is barely more than a year old!


Heh, I read it the same way you did, but I think the author's statement could be reworded like this: "10 years ago, the updates drove me away from Windows."


Heh, now that I look at it, you could definitely be right! Freaking English language. IMHO, the British can keep it.


This can be turned off... You can use the group policy editor to set it up like Windows 7. I just did this on my dev machine this weekend after being bitten by it. Now I get notified of the available updates and can choose to install them at my whim. No more coming back to a rebooted machine with all my development windows, web servers, etc. no longer running. I have Windows 10 Pro, so I cannot speak to how this works in Home, but I would be surprised if a developer was using the Home version anyway.


Doesn't work in Home edition (edit: the group policy editor is not available). Cheaper laptops (which are still fine for lighter web dev) come with Home edition so there are a few people (me at least) who do have to live with the forced updates


> Cheaper laptops (which are still fine for lighter web dev) come with Home edition so there are a few people (me at least) who do have to live with the forced updates

Recently I bought a pretty expensive ASUS laptop, 2K and it comes with Home edition. At this point I can buy an upgrade license from MS for about $100 or buy a full retail Windows 10 license for > $200.


I just disable the Windows Update service. Every couple of months, when I feel the need to clean up all my open pdfs and Chrome tabs, I'll enable it and restart.


Launch "Windows Update Settings" and review "Active Hours." The dialog claims, "When a restart is necessary to finish installing an update, we won't automatically restart your device during active hours."

Note that the use of "we" (instead of "Windows") is a tad unusual, and maybe a little ominous, for Microsoft.


As mentioned in another comment, I have all these things configured and yet twice in the past seven days Windows 10 Pro has given me the 20 minute warning, right in the middle of my working hours. To add insult to injury, for some reason the "Use a custom restart time" option is now disabled.

I'd love to have the annoying old windows update nag box with the postpone button back.


You're implying that the reboots occur at random without notification. They do not. Windows notifies you that updates were installed that require a reboot and that it will occur at off hours. If you ignore that notification, then when the reboot is scheduled to occur you receive another notification with the option to postpone the reboot for another amount of time.

I've had these forced reboots happen a couple times and it's disruptive but that's about it. The most disruptive was when I was working on a project and stopped for dinner then never came back and left my desktop on overnight. The next day I discovered that my computer had rebooted and was at the login prompt.

I can see how this would really suck for applications that do not respect the shutdown notification that Windows broadcasts but even the Arduino IDE dutifully saved my work.

I suspect that over time app developers will pay more attention to OS level event notifications and we'll get to a point similar to Mobile or even MacOS where apps can be halted and resumed at will. Personally that sounds like heaven.


> You're implying that the reboots occur at random without notification. They do not.

Well actually......twice in the past seven days I had updates installed during my "work hours" and it still went ahead and did the 20 minute reboot warning thing during the same specified work hours in the same day.


That's curious. I just checked and all of my machines have the reboot during off hours option turned on. I don't recall turning it on explicitly but it's entirely possible that I did, I'm usually not that thorough though.


> Personally that sounds like heaven.

It sounds like heaven to have the capability. It sounds like hell to be forced to use it just because the computer's doing something you didn't ask it to.


The user isn't forced to use it so much as it just works. The feature itself is nice for many reasons beyond the operating system rebooting after updates. Loss of power, system hangs, and program lockups come to mind as situations where it would be nice.

I think about my browser not restoring tabs after crashing and I shudder because that happens at least once a week. Notepad++ has had this saved state functionality for a while and I take advantage of it quite a bit. I think it would be great if every application did it. Most core Apple applications can resume after reboot.


> The user isn't forced to use it so much as it just works.

In the context of forced updates, one would be forced to use it (or deal with recovery of what they were doing on their own).

> Loss of power, system hangs, and program lockups come to mind as situations where it would be nice.

Absolutely. But in these cases, it's acting as a way from recovering from a system exception. The few times that I've recently had a system unexpectedly restart (pulled plug on a laptop with a dead battery, for instance), things like saved browser sessions were great. I just don't like it as the mitigation of negative effects of a system-enforced reboot.


> In the context of forced updates

I was speaking more in general and less in that context.

> I just don't like it as the mitigation of negative effects of a system-enforced reboot.

I don't either but I don't consider it all bad. Honestly the session resuming capability of Android and iOS is a great selling point and one that's sorely missed on desktop operating systems. Apple has done a great job bringing it to their core apps but how widely available is it beyond that? In a way Microsoft's policy of forced updates might expedite the spread of this feature.

Putting my conspiracy hat on, Microsoft was and still sort of is pushing everyone towards UWP which promotes this sort of application design at the core. Maybe this is how they force everyone to it.


Hibernate. I don't reboot more than once a month or so. It doesn't do on-bootup updates when you resume from hibernation.


Windows 10 is designed to keep your computer safe. It'll let you know when you can use it! No time limits. Safety first.


>The real story here is the forced updates. How do people who are planning demos and presentations handle this? It's my worst nightmare.

Manually run the update check every 2-3 days and run it again two hours before the presentation. Any updates that come out in that two hour window will allow you at least one delay.

Come on, folks, this isn't exactly rocket science here. Keep your OS up to date and, surprisingly enough, you won't have any problems caused by it being not up to date.

EDIT: May I also remind everyone that your corporate IT security policy almost certainly requires you to keep the OS on your work PCs up to date?


OS updates (or any update that potentially takes more than 2 minutes) shouldn't ever be forced. It's just terrible. Sure, there should be some prominent prompts saying "Plz update me" from time to time (so the average joe's computer isn't terribly out of date). But don't force me to update in the middle of a prez. Because clearly, I'm best placed to know what's the best time for my computer to become an expensive paperweight.


The problem is that if you don't make them automatic, people won't install them. Microsoft spent most of the Windows XP era with people being bitten by problems which had been patched in many cases years before.

The approach they've taken of making things mandatory is clearly not universally popular but most of the arguments about it tend to ignore the fact that it was designed to solve a very real problem which affected millions of people.


And yet macOS doesn't have forced updates. They just prompt the user (with a very in-your-face message) and people DO update.

Same with Linux.

Same with Android.

Same with iOS.

If you prompt the user to update, they generally will.


That's technically true but not true in real life for most people. Yes, you can ignore the notice for awhile but fairly soon you'll reach the point where something like iTunes requires a newer release or you need to upgrade iOS/Android because an app developer made the business decision that using a new API offers more benefits than supporting legacy devices.

There are two reasons why this works out better. One is that the non-Microsoft world isn't dominated by enterprise IT departments demanding binary compatibility with antique code; the second is that those platforms have spent many years building trust that major upgrades won't prevent you from working. In the case of Windows, that was complicated by charging for updates and by a broken IT culture that resisted them, amplified by the rough Windows Vista release cycle, which caused many people to avoid upgrading for a decade.


What if the update goes badly and locks the machine two hours before your presentation? Your post is probably the worst excuse for this behaviour I have yet read, and it's frustrating, on so many levels, to see it being excused.


What if your laptop battery catches fire two hours before your presentation? It's just about as likely.

I am surprised to be asked by adults to think of a solution to a broken laptop given two whole hours in which to find a solution. (Hint: borrow somebody else's laptop.)


Laptop batteries catching fire is such a rare and dramatic occurrence. A simple dead battery is much more likely, in which case you should have come prepared with your charging cable, a very reasonable mitigation. The difference here is agency - a user is empowered to bring the tools to ensure a best-case scenario. When your OS decides restart time is NOW, you are not empowered. Quite the opposite.

Software is meant to serve us, not the other way around. Updates can wait for when we're ready, it's a purely coded decision to resolve otherwise. There is no value, worth, expertise, or pride to be extracted from being a slave to your machine which makes it all the more stunning that this is still happening to people today and that people like you can excuse it.


I do a lot of presentations, and I'm on the Windows Insiders Fast Ring, meaning I get a new build about weekly, hence a lot more Windows Updates. I check for and install updates as part of my preparations before presenting. I've been doing this for years, on every operating system I present on (yay xcode updates over hotel wifi!). I've never had a problem with unexpected OS updates, because I run them explicitly. I do the same for frameworks and IDEs I'll be demonstrating. I'm surprised someone doing a presentation wouldn't make sure their software was up to date before doing a presentation, honestly. Not just because it makes sure you know the state of your software stack, but because you'll know if - for whatever reason - your demonstrations don't work or work differently on the latest software release.

End-to-end tech checks (including your software stack) are just part of respecting your audience.

I think some people run into surprise software updates because they do demonstrations on a virtual machine. I don't think that's a good idea, but if you're going to present on VM images, it's just good practice to test out your environment before you present on it.

Note: I'm not making any statement here about how I think software updates should be installed or how Windows works. I've blogged about controlling Windows Updates over the years, here's a post from a decade ago: https://weblogs.asp.net/jongalloway/438009. This comment is about how I make sure, as a developer and presenter, that software updates don't affect my presentations.

Disclaimer: Microsoft employee, Nazgûl


You work for an OS company. Your experience is by no means universal. You're way out on the far side of the bell curve, by several light years.

What about non-tech-savvy people who expect a computer to work like an appliance and "the bloody update broke my presentation software"?

The problem with the world today is there are too many bloody software developers, too many who seem so keen on sacrificing stability and compatibility for "The Next Thing".

As someone who works with computers and machines in a fabrication shop (CNC machines that have never had a software update but still work day in, day out, and UI elements that don't move or disappear on a weekly basis), I have to say my personal experience of smartphones and computers (Mac, Windows, Linux) is this: OH MY FUCKING GOD WHY DID THEY CHANGE THAT. WHY DOES IT DO THAT. STOP CHANGING EVERYTHING ALL THE TIME.

Yep, computers are huge piles of black-box shit to the average person, who expects their computer to be more like an appliance.


1. You use the app every day, so you're excited/opinionated about UX and UI changes, so you'll be vocal and help the developer improve your usage of the service.

2. You use the app infrequently enough that you forget how to use it every time, so it doesn't matter if they change everything on you; it'll look new to you every time.


> and run it again two hours before the presentation.

Wouldn't two hours be too short to fix any problems caused by the update?


> Manually run the update check every 2-3 days and run it again two hours before the presentation. Any updates that come out in that two hour window will allow you at least one delay.

OK, it's possible to work around if you take special care. No one should have to pay that much attention to their update schedule.

> Keep your OS up to date and, surprisingly enough, you won't have any problems caused by it being not up to date.

The problem isn't caused by the machine being not up to date. It's caused by Microsoft requiring forced updates. The computer doing something disruptive that I didn't ask it to do is something that I associate with malicious software; it shouldn't be something that I expect from my OS.

> EDIT: May I also remind everyone that your corporate IT security policy almost certainly requires you to keep the OS on your work PCs up to date?

They do. I typically have 2 weeks of warning, and the chance to update anytime within that time period. I am warned repeatedly as the day approaches that the computer will restart on its own. Patch sets are released monthly, so it's not particularly onerous.


Is it seriously that bad now? You have to touch Windows Update three times a week?!?

That sounds like an even bigger time sink than the whole "find a new Debian derivative because Unity breaks the alt key, and systemd breaks everything else" thing.


You don't need to switch distros to install other desktop environments. What did systemd break?


In this specific context I'd guess startx/xinit


"Although the application did run correctly, it threw an exception that gdiplus.dll could not be found. It makes sense, that’s a Windows component so it’s not available on Linux. But my point here is that although being the .NET Core application it’d still crash, so imagine what would happen if I’d publish it to some Linux server – everything would seem to work, but actually it wouldn’t. Another point for the .NET Core development on the Linux instead of Windows."

I am baffled... so silent crashes and the illusion of working is a plus... really?

I hope I am missing something


I think the point is "develop on the platform you are going to use". You'd get that error when you were developing on Linux, so it'd never make it to production. If you developed on Windows, you'd never get the error, because you have gdiplus.dll. You should still at least be testing on something matching the target platform, but catching this kind of stuff early is a plus!


Well, I guess he wouldn't have used gdiplus.dll if he was planning to run the app on Linux, so I'm guessing the conclusions should be:

- If you're doing multi-platform software, don't use platform specific libraries.

- If you're doing multi-platform software, test on all your target OSes.

Use any OS that makes you productive.


Just to clarify, he didn't use gdiplus.dll, he used System.Drawing, which in turn uses gdiplus.dll.
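
For illustration, here's a minimal sketch - not code from the article - of the kind of call that compiles on any platform but only fails at runtime when no GDI+ implementation is present (gdiplus.dll on Windows, libgdiplus for Mono/compat packages on Linux):

    using System;
    using System.Drawing;   // backed by GDI+ on Windows, libgdiplus elsewhere

    class GdiPlusDemo
    {
        static void Main()
        {
            try
            {
                // Compiles everywhere; the first GDI+ call throws if the
                // native library is missing on the target machine.
                using (var bmp = new Bitmap(100, 100))
                using (var g = Graphics.FromImage(bmp))
                {
                    g.Clear(Color.White);
                    bmp.Save("out.png");
                }
                Console.WriteLine("Bitmap drawn and saved.");
            }
            catch (Exception ex)
            {
                // On a machine without the native library this surfaces as a
                // type-initialization / missing-library error at runtime.
                Console.WriteLine("GDI+ backend missing: " + ex.Message);
            }
        }
    }

The bug is invisible until the code actually runs on a platform that lacks the native dependency, which is exactly why developing (or at least testing) on the deployment platform catches it early.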


You're right, thanks!


It seems like he was happy that he was developing on Linux, because Linux is not the happy case for .NET. By doing so, he discovered a bug he might not have otherwise discovered until he was running his project on a Linux environment.


Linux is the default environment for the official .NET Core Docker images. You can see this on Microsoft's official Docker page[0]. For non-Linux builds, you have to specifically ask for the nanoserver images.

ex:

microsoft/dotnet:latest is a Debian-based image[1]

microsoft/dotnet:core is a Debian-based image[2]

[0] - https://hub.docker.com/r/microsoft/dotnet/

[1] - https://github.com/dotnet/dotnet-docker/blob/master/1.0.0-pr...

[2] - https://github.com/dotnet/dotnet-docker/blob/master/1.0/debi...


Sure, for .NET Core, that sounds accurate. If you take the .NET ecosystem as a whole (the CLR, the tooling, the libraries, etc.), Windows is the happiest case. That's at least how I take his specific point.


Looks like a sensible option for .NET Core development. VSCode and the command-line tooling look to be pretty decent. I would really miss LINQPad though.


The csharp REPL is actually a fairly decent substitute for LinqPad. It can pull in 3rd party libraries, has autocomplete, and is all around quite nice to use. You don't get the automatically generated ORM against your database, but if you mostly use it to test code snippets or as a shell scripting substitute, you might be surprised by how little you miss it.

When we migrated all of our .NET development to Linux where I work, I thought I'd have to at least keep a Windows VM around so I could continue using LinqPad, but the csharp REPL has pretty much sufficed for everything I do on a regular basis.
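
As a rough illustration (nothing project-specific), the sort of scratchpad query I'd previously have pasted into LINQPad drops into the REPL as-is:

    using System;
    using System.Linq;

    // Toy data, just to have something to query against.
    var orders = new[]
    {
        new { Customer = "alice", Total = 30m },
        new { Customer = "bob",   Total = 10m },
        new { Customer = "alice", Total = 25m },
    };

    // The typical "does this LINQ do what I think it does?" check.
    var byCustomer = orders
        .GroupBy(o => o.Customer)
        .Select(g => new { Customer = g.Key, Sum = g.Sum(o => o.Total) })
        .OrderByDescending(x => x.Sum);

    foreach (var row in byCustomer)
        Console.WriteLine(row.Customer + ": " + row.Sum);

You don't get LINQPad's Dump() output or the generated data context, but for quick checks like this it's plenty.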


I think Project Rider is probably going to be better than just VSCode.


The amount of time I've invested in LINQPad scripts means I'm going to keep dual-booting Windows in the foreseeable future.


> Before I did manage to install a fresh system on my laptop I had to disable UEFI, create valid partitions with special flags etc. it took me a while to figure this out, yet it wasn’t that difficult.

Ha, right here is why "the year of the Linux desktop" will never happen. It's basically a kit car. Some people may find it "not that difficult" to weld a chassis together and rebuild an engine, but it's never going to be mainstream.


I fail to see why snark is warranted here. Linux should not have to conform to Microsoft's anti-competitive practices.

UEFI is a mess to deal with because Microsoft wanted to lock users in. Yes, this holds Linux back on the desktop. That doesn't mean it will never happen.


I don't even understand why this bloody "desktop" paradigm has to be maintained.


I think it can be hard for us (people who get a lot of screen time, spend a lot of time coding in 2D) to imagine that the "desktop" paradigm is already dead. Mobile/voice/AR is killing it. Granted the desktop UI metaphor will probably never die, just like the command line hasn't. But the front-and-center desktop is fading away...


I'm not so sure. If you took a drink every time you read "sorry for the short answer, I'm on mobile, I will edit it when I'm back on a real computer" on HN or reddit, your liver would die faster than Windows Phone. There's still a place for devices with lots of screen space and real input devices, and not just in the IT community (unlike with command line interfaces).


Command line interfaces are making a comeback. They are called "bots" for some weird reason, and all I've used so far are remote -- but it's still a cli.


I suppose I'm hoping for a point in the near future where AR/VR type glasses (or something like that) can be used with high enough fidelity and resolution to replace my desktop, physically. Coupled with virtualization and a good keyboard, I think the PC form factor could be fully obviated.


Waiting for Apple or someone to introduce "Retina VR"? The issue is, if you double the pixel density you quadruple the amount of work on the GPU. Few current graphics cards would be able to smoothly render content that high-res, yet.


Even if they were (2D rendering isn't that bad), we'd struggle to transmit it (neither HDMI nor DisplayPort could handle 16K or whatever we need for crisp enough VR rendering).


I don't think much work is done or will be done on mobile. For consumers perhaps. As soon as you need to do anything serious with a spreadsheet or code, you can forget the mobile. It's not a iOS vs Windows functionality thing. It's a form factor problem.


Mobile yes, but virtually no one uses voice commands once the novelty wears off (same as in the 90's) and AR is still incredibly niche.


It's always "not Linux's problem."


Can you clarify this statement?


Well, take any problem from a user regarding Linux, and it will not be Linux's problem.

No hardware support? Not a Linux problem. Crappy drivers on Linux? Not a Linux problem. No Photoshop on Linux? Not a Linux problem.


But it isn't. Hardware and software support for a platform is decided exclusively by the hw/sw vendor. That's why I support the ones who do.

Does Linux make drivers harder to write than Windows or macOS? No. In fact there are many, many spaces where development is by far the easiest on Linux. Yet those companies decide that part of the market is irrelevant to them.

You might not care why there are no drivers for your joystick and use Windows instead, but it is 100% those companies' fault; there's no debate about it.


I wonder what he fucked up to make this necessary. Modern Linux distributions work fine with UEFI and create the necessary EFI System Partition themselves.


UEFI support is relatively new for Linux and there are still a bunch of guides floating around that tell you to disable it; more than likely he followed one of those but didn't really need to disable it.


UEFI support may be new to some major Linux distros, but all the necessary components have been lying around for years. I booted Linux on my Mac through EFI before UEFI support was common on PCs.

Most of those guides you mention were half-assing it at the time of writing. There have been good reasons to still boot through the BIOS CSM even when a system has EFI, but for the most part those reasons have faded or been entirely eliminated over time: older graphics cards that lack the necessary firmware are less common, 32-bit EFI on 64-bit hardware is extremely rare and since kernel 3.15 isn't a problem any more.


His UEFI might have "secure boot" locked. I've heard that's an issue with some laptops.

Obligatory Wikipedia reference: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_In...


All major distributions ship with shim enabled IIRC, so should work with Secure Boot.


Doesn't matter. Secure boot just requires that the bootloader is signed with an approved key. Most major distros already have this down as Microsoft will happily sign any bootloader that isn't a glaring hole for a small sum of money.


Unfortunately not on every UEFI implementation ...


> Ha, right here is why "the year of the Linux desktop" will never happen.

I've been using Linux desktops almost exclusively for more than a decade. Both my PC laptops have UEFI enabled and both installed Linux (Ubuntu and Fedora) off a USB stick with no tweaking, so, it may be too complex for you, but most developers I know don't share the same personal issues.


I believe you and your parent are talking about different categories of users.

>> but it's never going to be mainstream


My mom uses Ubuntu (but, in all fairness, I installed it for her)


Which is only fair, Windows comes preinstalled also.


My mom uses Windows. In all fairness, it was pre-installed by HP.

I don't know any non-technical consumers who installed Windows themselves in the last 15 years.


I work in IT, and exactly 0 of my users could pull off a Windows install; if they could, they probably would also be able to install CentOS or Ubuntu. Both have wizards that make it just as easy as Windows.


I have to disagree on this one. Installing a mainstream Linux distro is really easy these days, far easier than installing Windows. With Windows, you have to go hunting around for device drivers for anything that's not included on the stock Windows install disc, you have to reboot lots of times during the install, it takes a long time, and that's just for the bare-bones OS: installing extra software means running around and downloading stuff (or using install discs--how quaint), running lots of "installers" and rebooting many more times. With Linux, I can have a fully-functional system, complete with lots of software, within 20-30 minutes and just one reboot.

Saying that installing Linux is "just as easy as Windows" is like saying "moving a mountain of dirt with a bulldozer or earthmover is just as easy as doing it by hand with a shovel".


My Year of the Linux Desktop was 1999, when Win98 OSR2 died and I had to pick between my Red Hat Linux 5 CD, or waiting a week and a half to get another windows disc. Took about 3 hours to get online over PPP (their configuration manager didn't work, but all the docs were included!), and I was installing Enlightenment 0.14 before the end of the night.

UEFI is painless on every distribution but vanilla Arch/Gentoo/Debian installs. Anyone using (k/x/l)ubuntu probably didn't even notice UEFI just works.


"The year of the Linux desktop" already happened, but in the meantime desktop got so small that you can put it in your pocket. It never would have been happened without commercialization of the Linux desktop (Android), so your kit car analogy holds.


Good luck running desktop GNU/Linux stuff properly on Android/Linux.

The set of APIs exposed to the NDK is so constrained that Google can replace the Linux kernel with something else and only OEMs writing drivers will notice.


Funny you mention that. Fuchsia seems to be the replacement for Linux.


Check also Brillo.


Brillo uses Linux.


It's really not too bad with root. Although the permissions and selinux stuff will trip you up a lot.


Yes, but those aren't the devices normal users get on the store.


Yes they are. However the rooting process isn't something normal people do with their normal devices.


It happened when Windows XP got discontinued.


I'm not even sure why he did this, the Ubuntu installer took care of this for me when I installed Xubuntu in UEFI mode this morning.

Is the Mint installer unable to deal with UEFI?


I've read numerous posts on support forums about Mint's UEFI support being broken. I have a friend who recently experienced it as well. I turn off UEFI and SecureBoot in my BIOS so I don't have to deal with it, though Slackware does support it if I needed to leave it on. So far Windows 10 hasn't complained about it being turned off on my gaming PC either.


Here's my anecdata after recently attempting to move permanently from Windows to Linux Mint:

* No Logitech mouse support = severely degraded experience. I can't use any of the extra mouse buttons in the way I want on my Performance MX. And yes, I did spend several hours trying to get it to work, but it seems that the button mapping works only for keyboard commands, and even that I couldn't make work. And I never could get the mouse wheel to scroll at what I consider to be a fast enough speed. Very frustrating.

* While it supposedly exists, I could not make f.lux work. This essentially prevents me from using my computer at all at night.

* The ridiculous amount of times I have to type in my password (and use sudo) is really user unfriendly. Yes I'm sure there's way to disable this or even use root but the average non developer user is going to be hugely turned off by this.

* Node is really, really good on Mint. A full-stack MERN/gulp application took 10 seconds(!) to yarn. On Windows it would take 2-3 minutes. And Chrome just seems much faster for dev. While working on a multiplayer game with 5 windows open (MultiLogin), all of which are connected to livereload, it is about twice as fast for reloads vs Windows and a bit faster than macOS on a MacBook Pro. Gulp itself takes about 20 seconds to start up on Windows the first time; here it is instant.

So in the end there's just no way I'm going to use Mint for every day use i.e. websurfing and of course gaming. I'll switch to it when I want to do serious heads-down dev.. maybe. To be honest the fact that its a bit faster for node probably won't be a good enough incentive to go through the hassle of setting up a dual boot.


>While it supposedly exists, I could not make f.lux work. This essentially prevents me from using my computer at all at night.

I'd recommend using redshift instead (http://jonls.dk/redshift/). It's free software and does basically the same thing as f.lux.


    sudo apt-get install redshift-gtk


I use Arch Linux and I haven't faced any of the problems you mention. I don't game so this might be slightly skewed, but there's not a single thing on Windows that would tempt me into using it.

Also the sudo point I completely disagree with. It comes down to security or convenience, and it's a damned shame that people pick convenience.


Further, if you are doing a bunch of administrative tasks from the CLI, sudo will keep you authenticated. If you really want, you could even open up a root shell.

I think the real problem is that the Right Way(tm) to do things is completely non obvious to a linux novice. Looking back on my times starting out with linux (ubuntu 6.06), I remember a lot of similar frustrations. Really, there is no middle ground, you are either a novice or an expert, and bridging that gap is super challenging. The only real useful advice I ever received was to 1) learn the CLI, and don't assume that it is antiquated, 2) use linux for real tasks, don't just follow tutorials, and 3) If something isn't working the way you want it to, there is probably a way to change it.

Hopefully the author will use his frustrations as a learning experience, rather than a reason to leave the platform. The linux community could use more people like him.


> If you really want, you could even open up a root shell.

For those new to Linux: this can be done as simply as "sudo su -". More details here: http://askubuntu.com/questions/376199/sudo-su-vs-sudo-i-vs-s...

That said, in my book using root for anything except updates is often a sign that someone misunderstood something or that something is misconfigured.


> sudo su -

sudo -i


> I think the real problem is that the Right Way(tm) to do things is completely non obvious to a linux novice.

I can't see Unicode characters properly over SSH, because it claims my locale is "POSIX". I'd like to fix it the right way, is it:

http://askubuntu.com/questions/162391/how-do-i-fix-my-locale...

- locale-gen; dpkg-reconfigure locales, with 293 upvotes and 96 "didn't work for me"

- setting the environment variables in /etc/environment, with 225 upvotes

- setting them in /etc/default/locale, with 70 upvotes

- exporting them from .bashrc on a per-user basis, with 70 upvotes

- editing ssh_config and removing the option to SendEnv, with 53 upvotes

- Install the language-pack-en-base, 23 upvotes

- exporting environment variables from /etc/bash.bashrc, on a system-wide basis, 9 upvotes

- exporting them from /etc/profile or .bash_profile, for 6 upvotes

OR

http://askubuntu.com/questions/770309/cannot-permanently-cha...

- localectl set-locale, from systemd

OR

http://serverfault.com/questions/626346/ssh-locale-wrong

- UsePAM directive in sshd_config, which turns on Pluggable Authentication Modules and in a sneaky and ridiculous rider, also turns on 'session module processing for all authentication types', and disable locale setting if you disable PAM

OR

https://serverfault.com/questions/475925/how-to-fix-putty-sh...

- change /etc/pam.d to make the pam_lsass.so 'optional' instead of 'sufficient'

How many of those are 'obvious' to an expert?


> Really, there is no middle ground, you are either a novice or an expert, and bridging that gap is super challenging.

I think it's easier now.

Also, the "expert" level on windows… it's mindbogling complex.


Windows has an intermediate level though. The average consumer never gets past novice.


>>> No logitech mouse support

Make sure you support vendors that support Linux; I have no Logitech devices in my home for this reason.

>>>While it supposedly exists, I could not make f.lux work.

Redshift, much better

>>> The ridiculous amount of times I have to type in my password (and use sudo) is really user unfriendly.

You should only have to type the password if you are attempting to access an area outside of the user's control; this should be limited to administrative functions like installing software or changing system-wide settings. Sounds like you have a configuration issue, or are attempting to do things that SHOULD be locked down and SHOULD require elevation, and are complaining because Linux is more secure than Windows.

One of the reasons Windows is terrible at security is that every time they attempt to add security they give people a simple "click yes" way to bypass it, rendering it useless.


Sudo timeouts can be changed: http://lifehacker.com/make-sudo-sessions-last-longer-in-linu...

Hardware support can be a genuine issue. The fault lies with Logitech in this case. If I was an arrogant Linux-head, I'd probably say that your time is better spent configuring i3/emacs/vim than getting their latest shiny toy working.


You can also edit the sudoers file to give your user passwordless sudo. So you still have to explicitly sudo for admin access (unlike, say, just logging in as the root user) but don't have to type your password.

Granted, that's not n00b friendly or anything, but it is a pretty quick solution and you only have to do it once. Just make sure you have a real root login and/or use visudo to perform the edits to avoid accidentally locking yourself out of admin access if you screw up the syntax.


You should understand that it is a decision about your freedom, and everyone else's. A few missing features and annoyances are a good price to pay for freedom.


Wow – why the downvotes? I agree 100%. You do need to make a real choice and sacrifice when it comes to open source. You will not have commercial support like Windows or a Mac. Depending on what you do, even your future career and job opportunities may be impacted by this choice.

And at least we have a choice, thanks to the thousands of faceless individuals who have contributed to make it possible.


You shouldn't have to type in sudo for gaming or web browsing. And yes, to disable it (though that's not recommended, since you're only asked for sudo when something needs admin access) you can add NOPASSWD to a portion of a config file.

There's a reason sudo exists. Superuser-do. It keeps automated malware and other bad things from happening without authorization.

redshift-gtk does what flux does.


I use https://github.com/jonls/redshift on Linux instead of f.lux. Installing with "sudo apt install redshift-gtk" should get desktop integration (I don't run Mint, but it works great on stock Ubuntu).


Just installed it. Just 213k to download!


For developers I would strongly advise using Antergos or Manjaro (imo the better option). They are both based on Arch Linux. With them you can experience the power of the Arch package manager and the AUR.

The whole installation process would be just one line. The same goes if you want to install VSCode, Atom, Android Studio, etc.

For developers, Arch is the best operating system since sliced bread.


I've been spending some time putting together a framework for .NET Core API/platform development. The main problem with it currently is the lack of compatible libraries - I've had issues with DI containers, cloud provider libraries, AOP frameworks and more. Not to mention EF Core is still way off in terms of production viability. ASP.NET Core is very tasty as a plus, and there's always Mono to fill the gap in terms of missing "full framework" functionality.


I'm having the same problems and have come to the conclusion that .NET Core isn't really ready for application development. Maybe by 2.0, but this 1.0 release has just enough for library developers to port over to the new framework. Without those libraries, app development is severely hampered.


I've discovered Autofac has good support for CoreCLR. Plus it's well designed and powerful.


I can attest to how well designed Autofac is.

I had to decide on which DI framework to use on a small front end WPF application that I worked on at a job last year (the app had a lot of singleton classes that were impossible to unit test hence why I needed to introduce a DI framework). I'd come from a job where larger framework choices had been made years prior by senior developers and their framework of choice at the time was Microsoft's Unity so that was what I had worked with for ~5 years. After a week or so of research and deliberation I decided to go with Autofac and found it to be really easy to set up and use. For me it hit the sweet spot between built in functionality and performance.


And I, too! After discovering Autofac, I've not looked back. Before, I'd tried various DI frameworks, and always left thinking "I totally do not get why people like this DI stuff". Then, Autofac. Everything just works. But - it's also powerful enough that if I need to do something really clever and/or complex, I can, and it invariably works.

It works just as well in Core. As such, it's usually the first package I install in a new Core project.
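
For anyone who hasn't seen it, this is roughly what the wiring looks like in an ASP.NET Core 1.x project. It's only a sketch: the INotifier/ConsoleNotifier types are made-up placeholders, and it assumes the Autofac and Autofac.Extensions.DependencyInjection packages are referenced.

    using System;
    using Autofac;
    using Autofac.Extensions.DependencyInjection;   // Populate() and AutofacServiceProvider
    using Microsoft.Extensions.DependencyInjection;

    // Placeholder service, purely for illustration.
    public interface INotifier { void Notify(string message); }

    public class ConsoleNotifier : INotifier
    {
        public void Notify(string message) => Console.WriteLine(message);
    }

    public class Startup
    {
        // Returning an IServiceProvider tells ASP.NET Core to use Autofac
        // instead of its built-in container.
        public IServiceProvider ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();

            var builder = new ContainerBuilder();
            builder.RegisterType<ConsoleNotifier>().As<INotifier>().SingleInstance();
            builder.Populate(services);   // fold in the framework's own registrations

            return new AutofacServiceProvider(builder.Build());
        }
    }

After that, constructor injection of INotifier into controllers works exactly as it would with the default container.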


I love autofac, I've been using it for a decade, but have you run into any situations where the DI built into .net core didn't cut it? In my (limited) uses I haven't found any shortcomings so far.


I still have to wait for SQL Server to be ported. But my day job as a C# dev is the only thing keeping me on Windows at the moment.



That's what he's referring to. It's not available yet.


Good point; however, you could always try to run SQL Server remotely or in a VM for development purposes.


I've considered an SQL Azure instance for this purpose; for a small dev database it's pretty cheap. Not ideal, but SQL Server for Linux is coming, so it might be OK as a stopgap.


Yeah, I think I'd go the container/VM route so I could also only spin it up when needed (a fraction of the time).


You could use MySQL? It has full EF7 support AFAIK.


After years, MySQL still hasn't implemented async in their .NET driver, only slow cheap wrappers. However, PostgreSQL has full async support in their free .NET driver...


I've developed a fully async, independent, MIT-licensed MySQL ADO.NET provider for .NET and .NET Core: https://github.com/mysql-net/MySqlConnector
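
For context, the appeal is the standard ADO.NET surface with real async I/O underneath. A small usage sketch (the connection string and table are invented, and the exact namespace can differ between versions):

    using System;
    using System.Threading.Tasks;
    using MySql.Data.MySqlClient;   // ADO.NET-style types

    class QueryDemo
    {
        static void Main() => RunAsync().GetAwaiter().GetResult();

        static async Task RunAsync()
        {
            // Invented connection string and table, for illustration only.
            var connectionString = "Server=localhost;Database=shop;Uid=app;Pwd=secret";

            using (var connection = new MySqlConnection(connectionString))
            {
                await connection.OpenAsync();   // async all the way down, not a thread-pool wrapper

                using (var command = new MySqlCommand("SELECT id, name FROM products", connection))
                using (var reader = await command.ExecuteReaderAsync())
                {
                    while (await reader.ReadAsync())
                        Console.WriteLine(reader.GetInt32(0) + ": " + reader.GetString(1));
                }
            }
        }
    }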


Yeah, however no EF LINQ support, correct? So you'd have to write all the queries manually.


I have used MySQL with Entity Framework and LINQ just fine.

edit: oh, you were asking in regards to Postgres


I tried this for a few months, but eventually moved back.


+1. Assuming you've ported all of your Windows tools to the Linux desktop... once you start work on a regular cadence I wager you'll find all those Windows hassles are traded for a bevy of new ones with the Linux desktop.

My current work jumps between .NET and Linux frequently. I've been lured by the notion of doing all work on a single OS. It is certainly possible. And I take a shot at it every 6 months or so. So far it still brings more hassle than practical benefit for me.

Tools make all the difference in efficiency. And if you're short on time and working on C#/SQL Server/F#, then nothing currently gets the job done faster than the older/boring tech (for me that's Visual Studio/VsVim/ReSharper on Windows 7).

But I agree that SQL Server on Linux + Project Rider may be enough to make a practical cross-platform environment for my needs.

This is a great conversation and worth checking in on every few months


I'm on my third or fourth attempt; this time it seems to have stuck, although I'm working from macOS. Every other time there was something crippling with .NET Core, but so far so good this time. The only thing I've noticed is that OmniSharp/VSCode error highlighting has started becoming a little laggy/incorrect since my project has grown. Hoping I can continue though, as VS.NET is the only reason I ever use Windows at all.


Was it because .NET was causing problems for you on Linux or because of lack of Visual Studio?


Because the vast majority of the ecosystem isn't .NET Core ready. So you end up with quite major libraries not being available.


A much more useful comment would include why you moved back.


Can I have Paint.NET on Linux yet?


No.

But honestly, spend a weekend learning Gimp. It's a pain, yes I know, but I'm really glad that I did. The official tutorials are wonderful. You'll be able to do so much more, and do it so much better, and even a bit quicker than you could in Paint.NET that you'll wind up preferring Gimp even in Windows.

Just be sure to use single-window mode.


With dual monitors and the traditional GIMP non-MDI configuration, I can put the controls on one monitor, and the picture on the other monitor.


single-window mode! That is so much better. Don't know how I missed that for so long.


I think it was implemented as an option later on.


Depends on the GUI framework it uses.

GDI actually exists for Linux (at least for Mono), but it's not really something you want to use in production.

Windows Forms doesn't exist (yet). IIRC that's what paint.net uses, so you're out of luck.

Third party toolkits like GTK#, wx.NET or QtSharp will work on both OSes.
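
For example, a bare-bones GTK# window is only a few lines, and the same C# runs on both Linux and Windows. A sketch, assuming the gtk-sharp assemblies are installed:

    using Gtk;

    class HelloGtk
    {
        static void Main()
        {
            Application.Init();

            var window = new Window("Hello from GTK#");
            window.SetDefaultSize(300, 100);
            window.DeleteEvent += (o, args) => Application.Quit();   // closing the window ends the main loop
            window.Add(new Label("Same C# on Linux and Windows."));
            window.ShowAll();

            Application.Run();
        }
    }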


I remember that ages ago the Mono team used Paint.NET as a testbed for how complete and usable their System.Windows.Forms and System.Windows.Drawing implementation was. But back then Paint.NET was still open source. By now the source code isn't available anymore and it has grown a lot of things that are implemented in native code for performance (set operations on selection regions) or talking to things that have no equivalent in .NET (DirectWrite). It also uses WPF for a few things, so isn't completely Windows Forms anymore either. As it currently stands, Paint.NET is very much a Windows program, despite being written mostly in C#.


Oh, I didn't know that Paint.NET went closed source. I found a blog post about the decision, an interesting read. Here it is:

https://blog.getpaint.net/2007/12/04/freeware-authors-beware...


I don't get it, one doesn't need the source code to change a few credits strings and pictures. Making Paint.NET closed source is about as effective as game studios putting more and more advanced DRM mechanisms into their games.

I understand why the author is upset, but his cure is worse than the disease.


Yeah, closed source tooling is garbage. I need to be able to fix the tools I use, or customize them for my purposes.


I would have thought a proper open-source license would've been a better solution.


Pinta seems like a decent replacement. https://pinta-project.com/pintaproject/pinta/


WHY is it just an Ubuntu release and no RPM? I really HATE when developers go Ubuntu = Linux.


I don't think that's their responsibility. Traditionally it's your distribution's job to create and provide packages. Arch Linux has one (I just installed it myself) and if we're talking rpm: Fedora has one as well, it seems [1].

So, what is upsetting you here, really?

1: https://admin.fedoraproject.org/pkgdb/packages/pinta%2A/


> Traditionally it's your distribution's job to create and provide packages

That has not been true in my 15 years of Linux. Look at bitsync, RStudio, or other major programs. Arch Linux must build from source, so that isn't a fair comparison; you are on a totally different system (ports).

EDIT: Here is the one page of documentation on pushing out deb and rpm packages for Mono projects. http://www.mono-project.com/docs/getting-started/install/lin...

I have seen Electron applications (Pinta isn't built on Electron but Mono) where, in the same way, ONLY Ubuntu is advertised under Linux. The ability to automatically build Linux formats is built into the framework, and they still don't add an RPM.

Linux: AppImage, deb, rpm, freebsd, pacman, p5p, apk.

Still other Electron applications do the same thing and ONLY provide Ubuntu, not providing the deb or RPM along with source.

Look at the download page: it says Linux and directly under it there is an Ubuntu logo and a PPA link. When I download any of my commercial or major software there is always an RPM and a deb along with a tar.

Linux support = RPM (the package format the Linux Foundation's LSB standardized on), DEB and source.

I use openSUSE and they have an awesome build service that allows people to build packages not just for SUSE but also RPM, Arch and anything else you want to add.

https://software.opensuse.org/package/pinta


(For this discussion, Arch Linux is working exactly like Debian/Fedora/Suse etc. - installing binary packages.)

In my time with Linux using anything other than your distributions package tools is an exception and extremely rare. You might do that for - say - bitsync, because it is proprietary. Your distribution cannot use the same 'grab sources, compile against our current libraries and package it up' process.

pinta instead follows the normal process. You install it by using your package tools. Your distributions created binary packages for you already, hosted on their infrastructure.

Heck, even on Ubuntu you'd probably apt-get install pinta and get a different (Ubuntu provided) package instead.

Check http://www.gimp.org/downloads/ and see what they do if you want to download their software (they tell you that your distribution is in charge and even mention why that is usually a good idea).

PPAs and OBS offer ways to build packages, yes. But again, that's the exception - most of your packages aren't coming from there.


> (For this discussion, Arch Linux is working exactly like Debian/Fedora/Suse etc. - installing binary packages.)

It isn't; it's a ports system, it compiles from source.

The Arch Build System is a ports-like system for building and packaging software from source code. While pacman is the specialized Arch tool for binary package management (including packages built with the ABS), ABS is a collection of tools for compiling source into installable .pkg.tar.xz packages.

I have built and maintained packages for RPM and for Arch. They are different, and that is the reason why, in Electron's export targets, Arch has a different system than deb or rpm.

> In my time with Linux using anything other than your distributions package tools is an exception and extremely rare.

That is your experience. Most Ubuntu and Arch people use the AUR or PPAs all the time. If I want the preview of RStudio (open sourced and in my official repos) I download and use the RPM. I do this all the time. If I want anything that was recently released I need to use the RPM and not the packages provided by my distro, including on rolling releases.

> Check http://www.gimp.org/downloads/ and see what they do if you want to download their software (they tell you that your distribution is in charge and even mention why that is usually a good idea).

I don't find much with the Gimp project to show as a positive example for other applications to follow.

> PPAs and OBS offer ways to build packages, yes. But again, that's the exception - most of your packages aren't coming from there.

Once again, that isn't my experience, and it depends on people's needs. If you use the AUR you also are not getting your packages from your official repos. Build an AUR package and you will see. https://wiki.archlinux.org/index.php/Arch_User_Repository

The AUR is awesome BECAUSE it works with the source files in a ports system. The only difference between an AUR package and a binary you get from pacman is that the latter was compiled for you, skipping the step of downloading the source and compiling like you do with the AUR.


I'm confused why you're trying to educate me about Arch. And you're confusing things (maybe you're thinking of Gentoo? That one is "mostly from source, can support binaries").

Yes, ABS is used to build binary Arch Linux packages from sources. But that's unrelated. Debian, Ubuntu, Fedora, RedHat, Suse - all of these do the same: Grab the sources, build binary packages. Whether you're using ABS or build rpms from a specfile doesn't matter for the discussion about upstream's (lack of!) responsibility to provide binary packages for random distributions.

So yes .. Arch is, for the sake of this discussion, working exactly like Debian/Fedora/Suse etc: The distribution (pick any from that list or any major distribution you can come up with) takes the pinta sources and builds a package from that. The end user installs a (binary) package using the distribution's package management facilities without ever touching the pinta source. The package comes directly from the distribution's infrastructure and not pinta's project site.


Okay, you see apples, I see oranges. I find packaging for Arch's ports system to be pretty fundamentally different than for the binaries of RPM or deb, but it's okay.

I still stand by the need to provide RPM, deb and source for your project to be called Linux supported. Also, Ubuntu does not equal Linux.

Here is VS Code as an example https://code.visualstudio.com/?utm_expid=101350005-28.R1T8Fs...


It's packagers of RPM distros that should build RPMs. Upstream can't be expected to release packages for every distro (if any).


> Upstream can't be expected to release packages for every distro (if any).

This isn't about providing distro-specific packages; this is about providing one extra file on top of the macOS and Windows ones.

Also this is for the health of the Linux environment.

That is why we have RPM and deb: they work for MANY distros, and if you're outside that then the source is there for a build. One day AppImage, Flatpak or Snap will make this easier.


No, RPM and Deb files don't work for many distributions.

Yes, the formats are shared (say, RPM for Fedora, RedHat, Suse). But the resulting binaries aren't (usually, automatically - you might get lucky of course) compatible.

The whole point of distributions is to build a system. Fedora and Suse are on different schedules, make different decisions about updates, might run different versions of the libraries your package (here: pinta) depends upon.

Your RPM file often isn't even usable between different releases of the same distribution. So just adding a random RPM wouldn't make sense. You brought up OBS elsewhere: one of the major reasons that thing exists is so that you can build packages .. for a list of target platforms. OBS is a service that uses a build recipe and upstream sources (pinta's tarball) to spit out binary packages for various different environments and formats.

So one additional RPM file wouldn't do the trick here.


I use openSUSE. The RPMs always work; I might have to grab a dependency from somewhere or make a soft link, but 99% of the time they work. These are the programs I use that I need to grab as RPM packages from their websites: RStudio (I use the preview version), Lightworks, BitTorrent Sync, and VS Code. Without RPMs I would not get to use them, period.

In the old days of Linux there would be a lot of issues, but presently RPM and deb work like a charm for my uses.

I am hoping for AppImage or Flatpak to take over, but for now RPM is my lifeblood to get things working.

If it is a niche program that won't be in the repos, or one that is updated frequently, RPM is a very viable system.


We disagree a lot, so I'm going to bow out after this one. Please be assured that I don't intend to poke you or offend you - I genuinely feel that you are missing some pieces of the picture.

RPMs don't work in a vacuum. Unless you bundle a statically built thing or something trivial (say .. a shell script) you reference dependencies. Those change between environments.

32bit/64bit machines differ. Ubuntu vs. Debian, Suse vs. Fedora. Builds for current versions of Ubuntu might not/will not work on older ones. You can only provide a (guaranteed to be working) rpm by having one per combination here, a la 'This is for Ubuntu FunnyName 32bit'. You understand that this explodes in complexity quite easily and quite fast.

Now, you're bringing up exceptions again and again (your system is 99% Suse-provided packages, I'd bet, and you have fewer than a handful of applications that you install manually. High profile apps maybe, but still: a few, compared to your distribution's packages).

I can't explain each and every case, but let's look at VS Code: They offer an RPM. That's true. But that's a 50MB file that depends on .. nothing, really. glibc and /bin/sh. All static. You're downloading a zip, more or less. Possible, but wasteful and not common. Download Atom and VS Code and you download Electron twice. You're also ignoring most of rpm's feature set here. This is not a good example for 'providing an RPM' (again, a tar.gz might provide ~the same~ if you ignore that this puts a .desktop file in the right place to create a shortcut to launch this thing. Extracting this to ~/opt for example would give you 99% of what this RPM package does).

For ~normal~ projects, providing packages is just not feasible and bundling projects into coherent ... packages (ahahaha) is what distributions are for, really. Debian knows that it patched library X and all projects using that library might need to be bumped as well. Debian can rebuild their own packages at the same time. Upstream projects cannot follow every distribution and update their '32bit Debian Jessie' build in a timely manner after the distribution changed some parts of their system.

appimage/flatpak etc. are basically doing what your VS Code example did again: A bundle of everything. 'Portable apps' in the Windows world. OS X bundles. In my world that's really a bad trade-off, although I understand the appeal in terms of simplicity. I personally feel that this comes with a high(er) risk for security issues and don't like the wasted space.

Back to pinta: They provide a _real_ package (actually they don't provide any binary package, they just link to a PPA they maintain, probably because they actively use that themselves?) with dependencies. Not a lot, but they actually define a real package. Not a complete bundle with a random file extension like VS Code. For pinta it would be more effort to create RPMs and the benefits aren't clear at all (just zypper install pinta or whatever you do on Suse).

Please ignore the file extension. RPM isn't RPM in these cases. Please don't claim that pinta mixes up Linux and Ubuntu. There's no reason to get worked up ("I hate it when they do this") about this. They handle this the normal way, the standard way. I haven't seen Suse in action for ages, but I suppose you're either using KDE (likely) or Gnome (maybe). Both don't even offer any binaries (for Linux) as far as I'm aware.

vim doesn't offer RPMs either..

(On a side note: VS Code is completely available in the open for some time so it could be built the ~normal~ way using shared libraries for dependencies. Bittorrent Sync will always be a problem though)


I certainly don't get worked up, but I do use i3 and several KDE programs. I think you were more right 5 years ago than today; with systemd in our world, the differences between distros are getting smaller and smaller. Once again, the RPMs I download work, and they work for me.

I get "worked up" when the only thing offered is a ppa and a source :)


You can treat a .deb package as a plain archive, because it is one (an ar archive wrapping tarballs). Extract it, make the manual configuration changes and, if you'd like, repackage the files into an RPM.

Back in 2006, when I switched to Ubuntu, I used to be the one complaining that developers pretended Redhat = Linux. If I had to choose, I'd pick .deb over .rpm any day.

FWIW, Gentoo offers Pinta in the portage tree:

  * media-gfx/pinta
       Available versions:  1.6-r2 **9999
       Homepage:            http://pinta-project.com
       Description:         Simple Painting for Gtk
so you could even download the source and rebuild on your own, should you need RPMs that desperately.



AFAIK Paint.NET makes use of .NET functionality only supported on Windows.

Beyond that it's closed source, so it would require the Paint.NET developers to port it to Linux.

Wine is probably going to be your best bet.


No, classic Windows Forms stuff is not supported.

There are only some open-source libraries but they are very new.


This. Paint.NET is what I'm missing on my Mac so much.


It was my understanding that Pinta is essentially Paint.NET, and is available on Mac as well as Linux and Windows: https://pinta-project.com/pintaproject/pinta/


Well, it's like comparing a go-kart to a sportscar. Pinta crashes on me a lot and has a fraction of the features.


When was the last time you tried it? I dismissed it for similar reasons when I first tried it, but then I tried it again about a year later and it was fine. Eventually I just manned up and learned how to use Gimp, though.


I've in fact been using it from its very beginning and I've been getting the latest version (currently 1.6) from a PPA. It's handy for the same things I use Paint.NET for - it just feels clumsier, despite trying to be similar. I appreciate both for being lightweight.


Paint.NET has features? I find it is just a simple, quick raster editor with little to no features. What am I missing?


A ton of plugins... To be honest, I didn't check if Pinta supports those and I rarely use any in Paint.NET.

Being lightweight, quick to start with just the essential features is what I appreciate in both.

It's just that for my simple uses (stitching bitmaps together, resizing, getting color values, pixel-perfect alignment, working with multiple buffers, turning toolbars on/off, editing in layers) Pinta feels much clumsier than its big brother.


Paint.Net is good, indeed.

Let Krita be your new friend. https://krita.org/en/


I LOVE Krita, BUT I would not say that Krita is the same thing.

I use four graphics tools in Linux.

1) Inkscape (vector drawing program) - I also use it on Windows even though I have an Adobe license

2) Gimp - my go-to raster image editor

3) Krita - my go-to paint program, not really for raster image editing

4) AfterShot Pro - my raw image and editing tool for quick photo edits

Krita is more comparable to Corel Painter than to Photoshop or Gimp.

http://www.painterartist.com/en/product/painter/


Would be nice if I didn't need half the KDE desktop for it.


What difference does that make? You don't have to run KDE (heck, I use it on windows). Yes its UI is a separate library from the specific program - any well-factored program will do the same, whether it exposes the separation or not.


> What difference does that make?

• It's more packages I need to download. Doubly annoying because 95% of the packages are just "thin wrapper around foo, because we're too hipster to use libraries directly like peasants".

• It installs system-wide services I have to disable/mask just so other programs don't mistakenly use them because they think I run KDE and they have to integrate into it.

• Oh wait, I can't disable some of these, because Krita itself needs them and now I need to figure out how to make them play nice with my desktop aaaaaarggghhhh


> It's more packages I need to download. Doubly annoying because 95% of the packages are just "thin wrapper around foo, because we're too hipster to use libraries directly like peasants".

"Number of packages" is not a very meaningful metric. The combined filesize isn't that large (not by the standards of an image editor anyway), and under any modern package manager installing a package with dependencies is no harder than installing one without.

> It installs system-wide services I have to disable/mask just so other programs don't mistakenly use them because they think I run KDE and they have to integrate into it.

Really? KDE tends to be a well-behaved crossplatform citizen IME (unlike Gnome) and not rely on anything system-wide. (There was some horribly ill-advised semantic nonsense in KDE 4, but I don't think it ever affected Krita).


I agree with you for point 2 & 3, I also hated installing KDE apps for this exact reason.

But I can't agree on the first one.

I have not checked what these exact thin wrappers are, but most of the time thin wrappers are really useful for testing and switching to another library without being tangled in the old one.

It pretty much follows the "D" (dependency inversion principle) in the S.O.L.I.D. principles: "Depend upon abstractions. Do not depend upon concretions."

Again, I have not checked the KDE libs you talk about, so I'm assuming this is what these wrappers are used for.

(Source : https://en.wikipedia.org/wiki/Dependency_inversion_principle & https://en.wikipedia.org/wiki/SOLID_(object-oriented_design) )



The kdelibs situation has improved a lot in the last couple of years. The team has worked hard to split out the frameworks and libraries from the desktop and apps themselves, so now you only need about 1/5 of the KDE desktop.

It's worth taking another look if that was something that put you off in the past.


That is right there with RPM vs DEB debates and "RPM dependency hell."

75 MB appimage download then! https://krita.org/en/item/krita-3-0-released/


In the past I didn't like the fact that Paint.NET isn't available on Linux. After using editors like Krita, I realized I didn't miss Paint.NET as much as I missed an image editor that isn't GIMP.


I've used Paint.NET for years, and used GIMP on Linux platforms out of necessity. I wasn't aware of Krita until now, but it looks great. Thanks!


I know it's not free, but I think the $30 for Pixelmator is so worth it on Mac. I miss that one now on Windows. :)


So I've spent some time trying to get an existing ASP.NET 4.5/4.6 application to work in Windows Containers using Docker, running under Linux in a VM. It turns out that System.Web only works in a microsoft/windowsservercore-based container, which will only run on Server Core, Server 2016, or current Windows 10 Professional.

I'm wondering why Nano Server plus additional libraries could not host a complete ASP.NET 4.6 runtime, possibly leaving out components that need GDI.


So, instead of taking 30 seconds to disable automatic Windows Updates, this person decided to have a hissy fit and throw the baby out with the bath water.

The instructions are pretty simple. You have to toggle one setting. Done - http://www.tenforums.com/tutorials/8013-windows-update-autom...


The post was titled ".NET on Linux", not "Why Windows 10 is garbage". The forced update is just an easy and "fun" excuse to switch.

Maybe take a look at the last paragraph:

> To sum up – I’m not trying to say that Windows is not so good for the software development. For example, the Visual Studio will be probably hard to beat for many years amongst many other great tools. My point is that if you’re not tied up to some specific technology which is not cross platform (which usually means for the Windows users only), you might want to give a try to some other environment. It’s always a good thing to try out something new, hone skills and broaden your horizons.


This works well for those rare people with the luxury of control over those settings in their workplace.

I actually like the Automatic Update feature. I don't need to remember to run updates myself all the time. That said, I've been bitten by an hour-long unusable computer scenario and it's been a bummer, to say the least.


> ...rare people with the luxury of control over those settings in their workplace.

Would you kindly provide a citation for that statement?

I've been programming on Windows for twenty years in various corporations - never had a problem getting Admin rights on my own machine... I'm now a consultant and I visit many, many, many job sites - and I just don't see it being a problem.

Furthermore - Linux wouldn't solve the problem for those rare people who can't control their settings anyway. So, what's your point?


It's a huge problem in more restricted environments - e.g., investment banks. Admin access can be very hard to come by.


OK whether it's a problem or not is definitely up for debate.

The bigger point here is that switching to Linux would not help those people either.


Clearly you've never worked for a major UK bank. Sure there may be a handful of very specific developers who get full admin rights over their machines where whatever they're developing requires elevated privileges, but these machines are often sandboxed.

In many cases you're stuck as a "Power User" or whatever group policy the IT department deems sufficient for you to run Visual Studio (or IntelliJ etc). Been there done that twice in my career.


Right, but switching to Linux will not solve anything for those people, because they will not be allowed to do that either.

The person who wrote up this article obviously has complete control over their computer, don't they?


I got bitten by this last week and again today. I had previously configured Windows 10 Pro's custom restart time such that my machine would not restart until x days later at a specific time if there were pending updates.

I've always kept on top of this, checking each day to see whether there are any pending updates, and have always installed these updates within 12-24 hrs of them appearing, or nudged the restart time back by a day or so if it wasn't convenient. This approach has worked fine for months... until just recently.

In a recent update the "Use a custom restart time" feature has been disabled and all I can do now is specify my "Active Hours".

I even had the Local GP setting you suggested configured as well.....but Windows 10 really needed to reboot my PC, ignoring my "Active Hours" and any and all of the Local GP settings you can throw at it. Twice in the past seven days I've been given the 20 minute warning and there is damn all you can do about it....despite all of these workarounds and overrides.

In all fairness I think the author of the article has probably just had it up to the back teeth with Windows 10 and the seemingly new ways it finds to totally fuck up your working day. I've been a MS developer for 20 years, I really like Windows 10 but I've gotten pretty close to the author's levels of exasperation myself over the past few days with some of its control freakery.


The caveat being that this one setting is only available in the Pro and Enterprise editions of Windows 10.


That's a very small and insignificant caveat. It's like saying that Linux is no good because there's this one distro (ChromeOS) that doesn't let you do anything.


We don't want to disable it, we just want more control over when the updates are applied. Windows manages to be both too passive and too invasive at the same time. When updates are ready you'll maybe get a notification in the task tray that can be easily missed and is then hidden; then if you try to shut down it's suddenly in your face.


If it's a pure dev machine then it's already feasible, but not perfect yet. If it's an all-purpose machine, i.e. gaming, dev, sound VSTs etc., then no - mainly because of the gaming side, unfortunately. I hope Vulkan starts to change that. Btw, I'm intrigued by the forced updates. Never had them. My computer only updates when I want. Maybe I changed some config when installing the OS.


Back in the Windows XP days, there was a way of halting forced updates by killing a certain process. I had it as a simple batch file on my desktop. Every time it would come up with "your PC gets restarted in 15 minutes", I'd just double click on it and carry on undisturbed.


Was that really necessary? As far as I remember, XP let you choose to not install updates automatically.


I can't really argue because it was a while ago, but if I recall some of them were considered critical and you could postpone them time and time again, but not really disable them

Plus I wasn't really opposed to safety updates as such, and wouldn't want to take the burden of remembering about them upon myself - it's just the arrogant, disruptive manner in which they are delivered that never sat well with me (if with anybody...)


I'm not a huge fan of Linux as an everyday desktop OS, but god, Windows is so awful. It's nice to know the .NET ecosystem is so different from what it was like back when I used to work in C#, and was pretty much forced to use Windows for that reason.


I'm in a similar boat but using Project Rider on OSX. It generally works without issue but is far from ready for general use.

How does VSCode deal with things like running unit tests?


I'm sure you can create tasks for running commands like:

dotnet test (https://docs.microsoft.com/en-us/dotnet/articles/core/tools/...)

dotnet watch test (https://github.com/aspnet/DotNetTools/tree/dev/src/Microsoft...)
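
For what it's worth, this is the kind of test class that dotnet test discovers; it assumes the project references xunit and the .NET Core xunit runner (exact package names changed between the project.json and csproj tooling):

    using Xunit;

    public class MathTests
    {
        [Fact]
        public void Addition_works()
        {
            Assert.Equal(4, 2 + 2);
        }

        [Theory]
        [InlineData(1, 2, 3)]
        [InlineData(-1, 1, 0)]
        public void Addition_handles_many_inputs(int a, int b, int expected)
        {
            Assert.Equal(expected, a + b);
        }
    }

A VS Code task can then simply invoke dotnet test (or dotnet watch test) in the project directory and surface failures in the terminal.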


I have never seen this behavior. It gives me the option to reboot during "off hours".


If there were WPF support on Linux it sure would be a competitor, but if you are stuck with desktop applications, I'm afraid it's still Windows.


https://github.com/AvaloniaUI/Avalonia

Check out this project for making WPF cross-platform.


To clarify, this is a cross-platform UI library in the style of WPF; it does not actually make WPF cross-platform.


Yes, was typing on my phone so was trying to keep my message short :)


Xamarin.Forms is looking pretty good but still mobile (and Windows 10 UWP) at the moment. Will they extend to OS X and Linux?


Well, domain name made me think about this old sketch: https://www.youtube.com/watch?v=GlOoSsfU6cM



The saddest thing for me is how Windows is such a given for PC gaming. :( It wouldn't have to be like this; there's nothing saying it should be a huge gaming OS, so it's too bad it is. I wish Valve would have been more successful with Steam OS. That would have fixed everything, at least for me. As someone who otherwise likes amateur photography and programming, Linux is just as good or better at everything else, with e.g. darktable and RawTherapee for photography, and programming being a given. Lightroom isn't even fun to use with its obnoxious enforced workflow and business model; it's just too bad it's so big on the plugin scene.

Windows is hanging onto this thin line, but it's a damn strong line for many.


> I wish Valve would have been more successful with Steam OS

I think Valve has done an outstanding job; unfortunately, other game studios and controller manufacturers don't seem to care.

For example, Logitech gaming wheels don't support Linux. Although there are pretty good reverse-engineered drivers, last time I tried, force feedback was hard to get configured and didn't work as well in games such as Euro Truck and American Truck, which have Linux versions.

Now, like the OP, I've had encounters with Windows 10 forced updates: I stream on Twitch and I recall at least one time where, from my viewers' point of view, my stream abruptly terminated. Turns out Windows 10 had decided to update itself, locking my computer for at least an hour.


Steam OS is pretty dumb. Why would you want to install an OS that only works for gaming? Valve should probably work with the major distros so that installing Steam is easy (they don't; for example, I had to delete their ancient bundled C stdlib to make it work with my open-source AMD drivers). And closing Steam doesn't move it to the taskbar (this has been a known issue in their bug tracker for many years).


The OS is great; only macOS offers a similar OO ABI for the OS stack.

Also, I do have quite a few war stories from failed GNU/Linux updates that eventually led me to always keep /home on a separate partition and wipe everything else.


I think you are right that big updates on Linux are just as difficult as on Windows, but the point is that you shouldn't have to deal with the issue.

1) Yes, all OSes should respect something like the root/home separation and let you just nuke the system. But this is a lot easier in a world where all your apps are just an apt-get away.

2) In comparison to MS, every Linux seems to be a "rolling release". When we use Linux we like that we always have a not-too-ancient version of GCC. But when Windows 10 starts upgrading continuously, sinister stuff gets mixed in.


Keeping /home on its own disk is Unix101


My first UNIX was called Xenix....


I don't do that. What's the reasoning behind that?


It doesn't need to be on a separate disk, just a separate partition. The reasoning is that this makes it really easy to change your distro without affecting your data: all your data is (or should be) in /home, with the possible exception of any databases or webservers you're running (which would mean you're running your PC more like a server, so this won't apply to typical desktop users). You can wipe out everything in your root partition but your data is safe and secure. Putting it on a totally separate disk makes it even more foolproof since you can just cut power to that disk and then there's no way for it to be affected by the system upgrade, but that's overkill IMO.


I was having this same issue with my mom's PC and the Anniversary Update; it turns out it was having issues with one of the connected USB devices.

I ended up unplugging everything USB after I started the shutdown and it finally completed successfully.


You can recover from these repeated update failures by deleting SoftwareDistribution\Download inside the Windows directory. Search the web for details.

This kind of problem (which, I agree, happens often on different machines) is one of the main reasons I don't use Windows for working anymore.

On my gaming PC, I set windows update to just notify me about updates, but not download or install them automatically.


Trying to upgrade an operating system is almost always a good way to ensure you're going to have a bad time.

Copy any files you care about off, wipe the disk, and install from scratch. This is also usually worth doing with an OEM machine, to get rid of all the crap and weirdly configured partitions they set them up with.


> Trying to upgrade an operating system is almost always a good way to ensure you're going to have a bad time.

That's a really strong assertion to toss out without supporting evidence. I've had the opposite experience with Debian/Ubuntu (100s of systems, starting in the late 90s) and OS X (dozens of systems, starting with 10.0) — the process is almost always start the update, reboot, resume work with rare exceptions for something like previously-undetected filesystem corruption.

Even my experience on Windows has not supported that conclusion – you might always want to be ready to reimage the machine but it hasn't been rocky for at least a decade.

The difference might be starting and staying clean: avoid OEM Windows installs, apply updates regularly rather than putting them off for years, don't install system-level hacks (Mac users were terrible for a while about using “haxies” and then complaining about Apple when some random binary monkey-patching broke on the next major release), and be careful about installing software from irresponsible sources.


I agree, but I'm mildly surprised that upgrading OS X or Linux, instead of doing a fresh install, always works fine.


That's not true for Linux - I've had many instances where repo refs broke because they didn't have packages for the updated version, or where a new kernel version introduced a bug breaking my WiFi, audio, or graphics, or random stuff failed - it's probably worse than doing a Windows upgrade.

I've only used Macs for a year and only did 10.5 -> 10.6, which went smoothly IIRC. The Mac approach of owning the hardware and the OS certainly has its advantages - being able to clone an image to an external HDD from my MacBook Pro and then boot an iMac into the exact same image is a magical experience. Setting up a master image on my laptop and then redeploying to an office of 40-something assorted Mac devices using IP broadcast, and doing it in an hour - hearing that Apple chime 40 times was really cool :D


I had a spectacular Ubuntu LTS upgrade failure on a server earlier this year because I had inadvertently left a terminal open running a command with 'su'.

It transpires that the sudo binary cannot be upgraded when su is active, so the entire upgrade cascade-failed and left me with a broken and unbootable system. Had to start from bare-metal.


This is infeasible when upgrades are monthly or biweekly.


Is it only me who finds it amusing how Windows devs are LOVING bash now, while most Linux/Mac devs who care run zsh, fish or anything better than... bash?

Can't wait to see the next gen of sloppy shell scripts, yay.


Well, you're free to use zsh, fish, or anything better on WSL as well.



