
In a GC'd language, which is what I am used to, the GC will pause execution and reclaim memory on the next allocation if none is available, cleaning up whatever was allocated during the attempted image load.

After that, if there is still not enough memory for the simple message box, then you have an uncaught exception and the program crashes, and you're back to the no-catch approach: nothing lost. Most likely, though, there will be enough memory for the message, and you get a much friendlier result.

I have used this approach before, and it is far friendlier than a crash, with users potentially losing work because they accidentally picked the wrong file.
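A minimal sketch of that pattern in Python (the image loader and its size are hypothetical; `MemoryError` plays the role of the GC'd language's allocation failure):

```python
def load_image(path):
    # Hypothetical decoder: an absurd allocation stands in for a file
    # whose decoded size exceeds available memory.
    return bytearray(10**18)

def open_image_with_fallback(path):
    try:
        return load_image(path)
    except MemoryError:
        # The failed allocation was never committed (or is reclaimed
        # by the GC), so the tiny allocation needed for an error
        # message almost always succeeds here.
        print(f"Could not open {path}: not enough memory")
        return None
```

The program keeps running; the user sees why the file didn't open instead of losing their session.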




> After that, if there is still not enough memory for the simple message box (...)

That's exactly the point. Once memory is exhausted, you can't take any action reliably.

You don't build reliable applications with mechanisms like: "OK, if memory runs out, let's design it to show a box to the user; we have a 50% chance this succeeds."

It would be better to wrap your application in a script that detects when the application quits and only then shows a message to the user.

People who design things the way you do drive me crazy. They come up with a huge number of contingencies that just don't work in practice when push comes to shove.

Stuck in a loop trying to show a widget, using 100% of CPU and preventing me from doing anything?

I prefer simpler mechanisms that work reliably.


>That's exactly the point. Once memory is exhausted, you can't take any action reliably.

Nothing is 100% reliable; that's not realistic. And in this kind of situation it's not a 50/50 shot, it's more like 10000 to 1 that you will be able to show a message. That should be obvious.

>It would be better to wrap your application in a script that detects when the application quits and only then shows a message to the user.

That's simpler than wrapping the specific function in a "script" that shows a message to the user but lets the program keep functioning in almost all situations?

>People who design things the way you do drive me crazy. They come up with a huge number of contingencies that just don't work in practice when push comes to shove.

People like you, who do not value the user experience over pure code, drive me crazy. These things actually do work in practice; I guarantee you any sufficiently complex GUI program has code like this to gracefully handle as many contingencies as possible before simply crashing. Do you think your browser should just crash, losing your other tabs, if a web site loads too much data? Does Photoshop just crash if it runs out of memory on an operation, losing all your work?

>Stuck in a loop trying to show a widget, using 100% of CPU and preventing me from doing anything?

It's no more stuck than your process crashing while the kernel reclaims its memory; both take similar CPU time. One, however, results in a message that tells you what happened and leaves you with a running program; the other tells you nothing, and your program is gone.


> Nothing is 100% reliable; that's not realistic

My comments were directed at people who are interested in building reliable systems.

If you assume outright that it is not possible to build reliable systems, you are missing a lot.


A program that can handle an OOM and continue to function normally is more reliable to the user than one that crashes and must be restarted, potentially losing work.

Memory allocation can fail, network connections are not reliable, opening a file may fail, writing a file may fail, and so on. If your program simply crashed whenever some operation failed, it would only be reliable at crashing.


> continue to function normally

By definition, you are not functioning normally if you're OOM.


If you try an operation that causes an OOM, say by allocating a large amount of memory as in the example above, then once that memory is freed you are functioning normally again.

If you write a large file to disk, filling it to capacity and failing, then delete the file, you have free space again and everything is back to normal.


> If you write a large file to disk, filling it to capacity and failing, then delete the file, you have free space again and everything is back to normal.

You've never used btrfs, have you? :D


All critical paths should have their memory allocated at application start; then you never have to worry about that.


This is the kind of advice that I put in a category of "easier said than done".

The trouble is that you may not know beforehand exactly what will be needed. And you might need, say, a library call, and the library does dynamic allocation internally, and you either have no idea about it (until you find out the hard way) or no way to avoid it.

So in the end, maybe you can take some extremely simple action, like writing something to a log or showing a widget, but that's about it.


Just code as normal, and when you run into a problem, reassign the default allocator for the struct/object? Not possible in many PLs, hard in some, but basically trivial (one LOC per struct/object) in others.

Some PLs ship test suites that give you a failing allocator, so you can even easily test that this looping condition resolves sanely.
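Python has no pluggable allocator at the language level, but the testing idea can be mimicked with a small helper that fails after N allocations (everything here is a made-up sketch, not a real library API):

```python
class FailingAllocator:
    """Hands out buffers, then raises MemoryError after a set number
    of successful allocations, so OOM paths can be exercised in tests."""
    def __init__(self, fail_after):
        self.fail_after = fail_after
        self.count = 0

    def alloc(self, nbytes):
        if self.count >= self.fail_after:
            raise MemoryError("injected allocation failure")
        self.count += 1
        return bytearray(nbytes)

def load_with_fallback(allocator, nbytes):
    # Code under test: the OOM branch is what we want coverage of.
    try:
        return allocator.alloc(nbytes)
    except MemoryError:
        return None
```

Running the code under test against the failing allocator proves the fallback branch actually executes, without having to exhaust real memory.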



