24 Gigabytes of Memory Ought to be Enough for Anybody (codinghorror.com)
124 points by angrycoder on Jan 21, 2011 | 112 comments



There are a lot of problems which the engineer in me wants to code around that should really, really be solved by throwing money at it. Often, a trivial amount of money.

I once did several hours of work trying to optimize my use of Redis to avoid having to upgrade my VPS (I was nearing the limits of the physical memory at 1.5 GB). I even asked Thomas for advice on how to decrease memory usage. His reply: "How much to the next tier?" "$30 a month." "Why are we having this conversation?" And, of course, he was right.


better code (algorithms, performance, memory usage) is amortized across the life of the rest of the system. fixed-cost upgrades will eventually be caught up to. so it's not bad to think like that.


You can use the freed-up engineer time to write better code, which you also keep for the life of the system. It is not too hard to think of things which you could do in a few hours that would pay for RAM upgrades in perpetuity, even at BCC's relatively small scale. (Say, an A/B test which resulted in a 1% lift on my AdWords landing pages.)


It's precisely BCC's small scale that makes spending engineering time on performance optimizations expensive. At Google's scale it's worth several thousand engineers' time.


At which point, you have several thousand engineers. It's a problem that solves itself, as long as you have some kind of margins.


At this point, throwing engineers becomes throwing money.


Really, you just have to weigh the cost of the upgrades against what that time is worth. A week of engineer time is cheap compared to upgrades if your code runs on hundreds of thousands of machines.


Yeah. My point is that amortization vs fixed cost can be a hard analysis to do.


this is true, but over the lifetime of most systems, resource usage is going to go up at some linear-ish rate, no matter how efficient the code is. it's not worth the time trying to fight that hard bottom line increase. if things are wildly out of control, sure, fix the code. but if you've been running your app for 5 years on the smallest VPS and you're pushing the need to upgrade, your time is probably best spent elsewhere.


It's often easy to forget that developer time == money. Any problem you can solve by spending less money than the equivalent of an hour of developer time is often not even worth talking about, unless it's just a symptom of a bigger issue.

Especially keep this in mind with regard to meetings. A half-dozen people in a meeting burns up about a day of developer time in cost every hour they meet.
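
Back-of-the-envelope, assuming an eight-hour working day and counting each attendee's hour as developer time (the day length is my assumption, not a hard number):

    def meeting_cost_in_dev_days(attendees, hours, workday_hours=8.0):
        """Rough developer-days of time consumed by a meeting."""
        return attendees * hours / workday_hours

    print(meeting_cost_in_dev_days(6, 1))  # 0.75 -- in the ballpark of "about a day" per hour of meeting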


And sometimes developer time now is much more valuable than developer time X months later. If you can put off the performance improvement by throwing money at it, this might be a big tactical win.


It's often easy to forget that developer time == money.

I hear this a lot from people who write O(n^2) algorithms instead of O(log n) :-P


It can certainly be used as an excuse for laziness; it needs to be weighed like anything else. Optimizing an algorithm in a method that only gets used rarely and that only ever handles a tiny amount of data is a waste of time.

Sometimes O(n^2) algorithms are ok even when O(log n) alternatives exist. If it takes a trivial amount of time to write the slow algorithm whereas it would take a lot more time to write the better one, and if you can guarantee the size of the input the algorithm will handle won't exceed a certain range, then it's fine.

Consider a factorial function for integer input & output, for example. What's the best way to implement such a thing?

It actually doesn't matter. The most important aspects are to get the error handling and input bounds assertions right, because 21! is larger than 2^64, and the difference between recursion, iteration, and caching at that scale is probably just noise unless you are calling the function thousands or millions of times a second.
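
A minimal sketch of that point in Python, assuming the result ultimately has to fit a 64-bit integer somewhere downstream (Python's own ints don't overflow); the bounds check and error handling are the interesting part, not the loop:

    def factorial(n):
        """Iterative factorial with the error handling and bounds assertion up front."""
        if not isinstance(n, int) or n < 0:
            raise ValueError("factorial is only defined for non-negative integers")
        if n > 20:
            # 21! is about 5.1e19, which is larger than 2**64 - 1
            raise OverflowError("result would not fit in an unsigned 64-bit integer")
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

Whether that loop is replaced by recursion or a lookup table is, as above, mostly noise.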

P.S. This is a good reason why clear documentation outlining design decisions and their rationale is important. When someone decides to take some pre-existing code that "works great" and massively expand how it's used without changing it, the result can be disastrous if people haven't taken into account the inherent limitations of that code.


One of the reasons I enjoy programming for iPhone, you can let the engineer in you roam free to increase performance and not feel guilty ;)


> There are a lot of problems which the engineer in me wants to code around that should really, really be solved by throwing money at it. Often, a trivial amount of money.

On the other hand, I once saw a project where people seemed to want to use a non-SQL database to store about a hundred megabytes of metadata a year. They also wanted to use S3 for a couple terabytes of content instead of a filesystem.

Throwing money at a problem that doesn't need to be solved, just because you can, is tempting.


I once did several hours of work trying to optimize my use of Redis to avoid having to upgrade my VPS

I would think RAM preservation would be a much greater concern in the VPS world, as the hard limits are ever present and upgrades quickly become enormously expensive.

In the dedicated server world it isn't quite as prohibitive. An R810 can be had with 512GB. Just got several new servers to add to the mix, each with 144GB of memory.

Though of course I emphatically disagree with Atwood's statement about algorithms. In fact I think he doesn't believe that either, but it was just a bit of color on the entry.


Circumstances can make the benefits of tweaks and performance improvements in this particular field evaporate anyway. The work involved is rarely worth the effort unless your whole business is app deployment or you're Google.


"To me, it's more about no longer needing to think about memory as a scarce resource, something you allocate carefully and manage with great care."

Once your data set doesn't fit in memory, it is indeed a scarce resource. And 24 GB is really not very much data.


Some people say: "but the computers become faster every year," to which I respond: "but the amount of data we throw at them grows every year at a similar rate."

There will never be a substitute for good algorithms. In my field (bioinformatics), thanks to next-gen sequencing technology, the data growth actually outpaces Moore's law. I imagine the same is true for data collected from social networks.


You have reminded me of that old line: "What Andy giveth, Bill taketh away."

(Referring to Andy Grove, of Intel CEO fame from '87 to '98 and still there in some capacity, and Bill Gates.)


It depends. There are still corporations making real money that could fit all their essential data on one such machine.


Reasons why sometimes you cannot just throw RAM at a problem:

1. Your smarter competitor will both throw RAM at the problem and optimize their code, and end up outperforming you.

2. Your quad-socket machine already has RAM maxed out, and adding more RAM (i.e. adding a second server) will require significantly more complex code to distribute the workload across more than one machine. Optimizing your memory usage is probably easier.

3. Increasing RAM is just not an option on a mobile platform (eg. cellphone).

4. You will lose customers by upping the minimum RAM requirements for your app.

5. Once you have added more RAM, you have no more tricks left in your bag to quickly scale up in case of emergencies. Instead work on optimizing your code while you have time.


"Algorithms are for people who don't know how to buy RAM" is one of those soundbites that people seem to find clever, but really shows a startling lack of understanding.

You can chuck as much RAM as you like at your problem but it's not going to help once your data set fits in memory - and maybe way before that if you're thrashing those piddly L3 and L2 caches.


> way before that if you're thrashing those piddly L3 and L2 caches.

And that's why knowing your hardware and your workload is important. I remember once we had a performance problem with an x86 server. Over lunch, the guy with the problem told me the working dataset was about 8 megs per batch or so. I told him to move the workload to an Itanium server we had for testing because I knew that server had 11 megs of L2 cache, besides being slightly faster per instruction than the Xeon that was running the workload. The throughput more than tripled.

It breaks my heart when people just assume more memory/GHz/cores is always faster.


"Algorithms are for people who don't know how to buy RAM"... really shows a startling lack of understanding

Are you perhaps missing that it's a deliberate inversion of the more obvious statement "needing more RAM is for people who don't know how to use Algorithms" in order to make the point that RAM is cheaper than an engineer's time?


But many interesting problems can't be solved by throwing more RAM at them, because RAM isn't the bottleneck.


Are you suggesting that Jeff Atwood may be exaggerating?


Which is why you solve the boring problems by throwing RAM at them, and spend the time you just saved solving the interesting ones. :0


That's why I prefer "Buying RAM is for people who don't know how to write Algorithms." found in the comments :)


Buying RAM as a substitute for algorithms is for people who don't work for Google (disclaimer: I work for Google).


Or people that develop for smartphones. Or people that develop on embedded systems/consumer electronics. Or people that work with raw bitmaps (eg games writers).

You need to get out more if you think (as your comment would seem to suggest) that only Google has efficiency problems...


Didn't you guys just pass 20,000 employees?


Little known fact: Google's secret algorithm is not PigeonRank[1] but PeopleRank. Those 20,000 people are constantly counting <a> tags across the web.

1. http://www.google.com/technology/pigeonrank.html


Wait, so instead of spending a semester learning about algorithms and stuff, I can go throw money at the problem?

Why am I in school?


Technically you are throwing money at the problem (or someone is, for you).


So, does buying hardware have better ROI than college?


Sometimes yes, sometimes no, but you need a trained brain to spot the difference.


It goes like this:

- turn the screw: US$ 5

- knowing which screw to turn: US$ 5000


Indeed. Why are we doing anything if just throwing money will solve all our problems? Let's just sit back, relax, and throw money at stuff...

Algorithms are the substitute for those of us poor engineers who don't have money to throw :)


The difference between O(n) and O(n^2) is huge, but once optimizations only gain you constant factors, it may be worth looking at more memory instead.


You may not have intended it, but your comment reads as "Google has awesome hard problems that only badasses can work on" followed by "Also, I work there". Not every mention of one's workplace warrants a disclaimer. :)

But you do bring up an interesting point, how does Google handle the heterogeneous server problem? One assumes that not all nodes are created equal (if only because it would be cost prohibitive to upgrade all at once), so do you just use dynamic task scheduling and hope for the best or is there something cooler going on there?


If you have only one server, then you are going to add $xx instead of spending $xxxx fixing it.

Google has to add $xx * the number of its servers vs. $xxxx to have one or two developers fix it.


Few people work for Google or tackle problems that size. Buying more RAM is most of the time a quick and economical solution.


I think that's more of a disclosure than a disclaimer.


Seriously?

I like your blog but please never write something like algorithms are for people who can't afford more ram. Memory management and efficiency should always be a goal, or else you end up with browsers using 2gb of ram with 5 tabs open, and more crap like that.

Yes, memory is cheap. That shouldn't promote terrible code.

-- Vostok4 on January 21, 2011 12:09 AM

That was my first thought.

I still have a laptop I bought in '07 with just 1G of RAM. It works fine and I've never felt compelled to drop a grand or two every couple of years just so that I can.. what? Use more system resources to do the same things I always do?

The problem I've been having with NOT keeping up in the tech arms race at home is that this poor laptop can easily start paging memory with only a couple of Gnome apps open and FF running. It used to be a pretty decent machine.

There are contexts where it's "cheaper" to throw hardware at the problem. I don't think the desktop is one of those areas. Not everyone using them is an engineer being paid over a hundred dollars an hour.


To me, it's more about no longer needing to think about memory as a scarce resource, something you allocate carefully and manage with great care. There's just .. lots

As much more of a hardware guy myself, I've always been frustrated by this path of thinking. Sure, it's convenient for you as the programmer if you don't have to think about resources, but it always feels like the growth in hardware performance is mostly consumed by more and more needless resource usage.

Always reminds me of how cars are getting more and more efficient, powerful and clean-burning, and yet mpg is only barely creeping upwards.

Alternatively, an anecdote from my father: he was once tasked with upgrading the capacity of a company NAS, a few hundred megabytes, that was closing in on max capacity. He arranged to bring on some 20x that capacity, thought "well, this should do the job for a while", and it was filled in a matter of weeks.


As an aside: "You can pair an inexpensive motherboard with even the slowest and cheapest triple channel compatible i7-950..."

Inexpensive motherboards are a terrible idea; they can cause really weird problems that look like issues with other components. I've built a few computers and have since sworn never to buy a cheap motherboard again.


On the other hand, I know people who will never buy expensive server mainboards ever again, as they are produced in such low numbers that bugs never get ironed out or programmed around in operating systems, unlike widely distributed consumer hardware. (Cheap consumer mainboards still sound like a bad idea, though.)


Strong upvote. I'm one of those, but not on the server market.

3 years ago, I purchased an Asus Republic of Gamers motherboard for 400 dollars. It was the most expensive component in my PC, and I hate it every minute of every day.

All my work is in bootloaders, but this motherboard has a terribly, terribly buggy BIOS. Restarting the PC is not a fun thing to do with this 400 dollar PoS.

Of course, the only reason BIOS upgrades failed to address this is because only a few thousand were ever sold... and mostly to gamers and not hard-core computer engineers.


Indeed, widespread use is important. When I'm looking for computer components, I scour NewEgg for both good ratings and a lot of responses.


Buying an i7-950 right now is also a terrible idea, since the Sandy Bridge i5s are faster and cheaper:

http://www.anandtech.com/bench/Product/100?vs=288


I think a total of 4 slots is more common with Sandy Bridge though, since it's dual channel. The largest kit I could find in stock at my favorite store is 4x4GB, which is 16GB. So I guess if your only priority is memory amount, an i7 would be a good choice...


Rule of thumb: RAM size lags disk size by about ten years.

Ten years ago a typical workstation had around 32GB of disk storage. Today, 32GB of RAM in a workstation is perfectly ordinary.

Take the size of your local disk space today. Ten years from now that will be the amount of RAM in your computer.


> 32GB of RAM in a workstation is perfectly ordinary.

I look around me and most new systems have 4 GB of RAM. It's possible to have 32 GB of RAM, sure, but it is absolutely uncommon, and blatantly unnecessary (none of my own desktop machines have more than 2 GB).


I built Chromium from source the other day on my machine with 4GB of RAM. It hit swap while linking. Also, I'm running a 64-bit Linux distro. A 32-bit system I tried could not link Chromium at all.

All of this is documented in the Chromium wiki, of course. But the point is, 4GB memory is their minimum requirement, and insufficient for doing a lot of work on a project like Chromium.


Damn, I thought needing something like 3GB to compile PyPy on 64-bit put us in some class of unreasonable usage; interesting to see that it's apparently not so crazy for large projects.


I have 8 servers running distcc for my compilation chores, however none of them has more than 8 GB of RAM. IIRC I had no problem compiling firefox with my puny workstation. Chrome is as nasty as OOo :)


Linking Chromium on Windows hit swap for me with 8GB.


The machine I use as my workstation has 64GB of RAM.

Of course, it also has 32 cores... It's not exactly your standard i7 workstation.


I think I had a 4GB hard disk ten years ago.

20GB about 2 years later.


Nope. Both RAM and disk are asymptotic. We're getting to the point now where the laws of physics are preventing us from getting much smaller. Now, all of this is only applicable to the current paradigm; an entirely new kind of RAM could mean vastly higher ceilings, but if we keep making it the way we've more or less been making it for decades, we'll hit a wall.


That rule of thumb is from Jim Gray:

http://en.wikipedia.org/wiki/Jim_Gray_(computer_scientist)

It's held true for a long time. Like any rule of thumb, at some point it may no longer be correct, but we're not at that point yet.


Interesting. A citation would be informative. I tried the usual keyword searches of "memory disk space barriers asymptotic ceiling" but couldn't quickly find anything. I'd be interested if you could provide us a pointer.


Right now DDR3 is made on a ~30nm process. We can't really shrink that too much more without the various pathways bleeding into each other. We're just running into insulation limits. They'll still be able to shrink it a bit, but this holds true for CPUs as well, and is one of many reasons why we're increasing cores in pursuit of higher performance.

We're scheduled to hit 11nm; after that I'm not sure. See the Wikipedia article on quantum tunnelling for an explanation that's mostly above my comprehension.

http://en.wikipedia.org/wiki/Quantum_tunnelling

edit: To get an idea of the scales we're talking about, 11nm is only about 50 silicon atoms wide.

http://www.wolframalpha.com/input/?i=11nm+/+width+of+silicon...


That just means we need to move to a three-dimensional process. Speaking of which, aren't memristors supposed to be layerable?


A back of the envelope calculation says that a linear shrink with a factor of three means nine times as many structures on a given die area. This is consistent with the factor of ten in a decade. There will be hard technical problems lurking in the details, but the end is not yet nigh.


I think the refutation of this rule of thumb will not come from the asymptotic nature of these technologies in about 1000 years or whatever, but from the new memristor technology.

So all memory will be the same.


Nowhere near 1000 years. We're scheduled to be at 11nm in 5 years. That's 1/3 the width. So, in 1000 years, it'd be 1/600th the width, or 50/600 = 1/12th the width of an atom.

We really don't have very long left with the current paradigm. Past about a decade or so, we'll need something new if we want to continue growth.


People have been saying that for decades. AMD, Intel et al. are still very confident that Moore's law is good till at least 2020.


Well, if you keep saying that Moore's law will fail, you will eventually be right. Like if you say a certain celebrity will die this year, eventually you will be right. Like how economists have predicted 8 out of the last 5 recessions.


We're reaching the scale of process where quantum effects are starting to rule. I didn't say we wouldn't be able to keep expanding, simply that this current model will run out of steam. You can't continue to get smaller ad infinitum.


Ten years ago it was because the features were smaller than the light wavelength used to print the mask (not sure of terminology here). There's always something that looks like an insurmountable barrier, but billions of research dollars can work miracles. Intel spent $5B on R&D in 2009.


And don't forget, that R&D is funded by RAM profits.

So, do your part, and upgrade soon. ;-)


32 GB ram is absurd in ordinary usage - what do you use your computer for?


Statistical machine learning and data mining. 32GB isn't enough but, then, it's hard to name any amount that would be enough for this kind of work.

Almost all of our machines are either 32GB or 16GB.

As btmorex pointed out earlier, "Once your data set doesn't fit in memory, it is indeed a scarce resource. And 24 GB is really not very much data."


I wouldn't say that 24GB is enough, no way. It always depends on what you use your computer for. For a normal user who installs Windows and occasionally plays games, 24GB might be more than enough. For almost any kind of developer it might also be more than enough; in the end, that developer develops for an end user, who typically doesn't have much RAM.

However, I am a scientist, and my machine at work has 96GB of RAM with 24 cores, so it comes to around 4GB per core, which isn't that much anymore. In order to run algorithms on big data and not have to worry about disk accesses (SSD or not), more RAM is just crucial. My previous machine had only 8GB of RAM and it was a big problem to stress-test algorithms with big data sets on it. So in my case, there is never enough RAM ;)


For the average consumer, I think 4GB is more than enough RAM; they don't need anywhere near 24GB. I wonder what percentage of the computer market is scientists who need the kind of massive processing power yours has?


What is the point of this article?


Jeff Atwood likes to tell the world about all his computer upgrades.

His is bigger than yours.


Rubbish, mine is larger.

And whilst I really don't want to engage in this kind of dick waving, since it's Jeff Atwood I will.

I have a HP Z800 ( http://h10010.www1.hp.com/wwpc/us/en/sm/WF06a/12454-12454-29... ) at home with 48GB RAM currently and it's upgradeable to 192GB RAM.

Of course, having hardware that you're not using is just dumb... why have such a large capex for home hardware if it's not required? I only have this beast because I worked on a project in my spare time around distributed data. For this I built 17 virtual machines with which to test my work. I calculated that the opex of renting this in the cloud would exceed the cost of purchasing it over the duration of the project. And because spending money on this kind of hardware is still dumb, I then put it to work on a piece of software I sold to a local consulting company. The result is that at least the machine I have has been and is used, and has paid for itself a couple of times over.

BTW, this computing power really helps to build Chromium in 25 minutes rather than the hours most people experience.

Also... since I'm now thinking of the Blackbird ground speed check story ( http://groups.google.com/group/rec.aviation.stories/browse_t... ), someone else has got to step forward and obliterate my home machine with some more impressive dick waving.


What's the bottleneck on the Chromium build on your machine?


The way Visual Studio builds (it's a Windows machine at the moment) and CPU.

It pegs a couple of cores but the others have relatively low load. I've followed the optimisation advice ( http://www.chromium.org/developers/how-tos/build-instruction... ) but can't manage to get it below 25 minutes.

The bottleneck certainly isn't disk or RAM.


Why does it only peg a couple of cores? I suspect Chromium is a pretty substantial build.


So he's the new Jerry Pournelle, is what you're saying?


Or Ron Jeremy, or something.


Just letting people (who don't already) know that 24 gigs of RAM is attainable today at reasonable prices?

Anyway, I like Jeff Atwood's writing because it's true geek celebration and entertainment. If you don't like him, maybe you can just pass on?


Unfortunately, just like roads[0], all that memory will be filled up and used.

[0] http://en.wikipedia.org/wiki/Lewis-Mogridge_Position


Unfortunately? I would never buy it if it weren't going to be used; good RAM is used RAM.


The memory should be filled up. With file system cache etc.


i7 920, clock for clock, made a huge difference over my Core 2 Q6600 on the desktop. I estimate it was between 2 and 3 times faster, both overclocked to 3.4GHz, depending on whether I was doing a build of the source tree, or transcoding a video.

Definitely not "blah".


I have to ask, did anything else change at the same time? Although the i7 920 is a fearsome chip, and a great upgrade over the older 65nm Core 2 Quads, anything approaching even 2x performance per clock I'd probably consider an edge case. Sounds like there was probably another variable in there.


I think a lot of it comes down to hyperthreading, actually. The 3x improvement was in transcoding, which is a reasonably pure test of CPU and memory throughput, rather than amount of memory (4GB -> 12GB), or I/O speed (the transcoding was actually done with both input and output across a gigabit network to my NAS).

Nailing down the improvement in build time to any single change would take more time substituting things than I'd care to expend right now. The extra memory for file cache no doubt helped quite a bit, as did the SSD, but different parts of the build are I/O heavy, and other bits are CPU dominated. My MacBook Air 13" (4GB RAM, SSD etc., but slow processor, running Windows 7) isn't particularly fast at the build.


What are the power usage / heat production implications of that much RAM in a system?


Pretty small; each DIMM uses less than 10W.


I guess no one has attempted to run the Gaussian computational chemistry algorithms on molecules with more than tens of atoms.


To me, this raises the question: why are higher-memory VPS instances so much more expensive? I know they typically have more processing power, but the pricing still seems disproportionate.

And I think the more exciting issue is the falling cost of SSDs. I was recently reading about in-memory databases being much faster not just because of faster access, but because they spend so much less CPU time working around disk access issues. Can the whole OS be written that way? We are quickly heading toward exciting times.


$299 for 24 gigs of ram is ridiculous. I thought for sure that had to be the per-stick price, or something. Prices sure have dropped a lot since the last time I built a system.


I was actually surprised as well - I bought a powerful machine about 16 months ago, when the i7 became available, exactly to get a lot of RAM. At that time, $300 got you 12 GB.

Also, SO-DIMMs became quite cheap: it is really affordable to get 8 GB in a laptop (IMO, the difference between 4 and 8 is big because at 4 GB, running more than one VM on Mac OS X is not so practical, whereas at 8 it is a no-brainer).


I could go for like a week without having to restart Firefox!


It seems like a sound principle, in theory, to trade memory for programmer time. After all, one can buy a lot of RAM for the cost of a programmer's work day (even the expensive ECC RAM used on servers).

However, most people don't seem to be able to tell where that stops making sense, and the end result is software that's slow no matter how much memory you throw at it.

Anyone that has worked for any amount of time with "enterprise" software knows what I'm talking about.


But it's not. Period. As an application's dataset grows and yesterday's presumptions blow through sane limits, you will revisit this topic. It happens again and again.

Hypothetical example: the boss asks if you can fit several million products into the database and keep them up to date by setting related prices on them. Each synchronization task needs to be aware of product deltas: is it available, sold out, on sale and so on. Oh, and there are many suppliers and catalogs. All the while you're running set intersects, differences and so on over multi-million-element collections (something like the sketch below), you still need access to this data to be consistent and sane. Access times of several hundreds of milliseconds are not acceptable.
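
The delta part itself is just set arithmetic; a toy sketch in Python (the IDs here are made-up stand-ins for what would really be multi-million-element sets):

    # Toy data; in reality these would be multi-million-element sets of product IDs.
    current_ids = {"sku-1", "sku-2", "sku-4"}  # IDs present in today's supplier feed
    stored_ids = {"sku-1", "sku-2", "sku-3"}   # IDs already in the database

    added = current_ids - stored_ids      # {"sku-4"}: new products to insert
    removed = stored_ids - current_ids    # {"sku-3"}: products that disappeared (delisted/sold out)
    unchanged = current_ids & stored_ids  # {"sku-1", "sku-2"}: candidates for price/availability updates

The operations are trivial; the pain is that doing them fast means keeping all of those sets resident in memory, per supplier, per catalog.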

The obvious solution is to distribute the load, counting and updating amongst machines, and to serve publicly available data from high-performance in-memory tables. At some point, it all comes unravelled when you get a request for custom sub-sets, each of which can be live-queried by price, text, brand ...

Memory bound problems are just that. At some point you hit the wall. I think planning for that impact is the best insurance a shop can make.


What "unexciting i7 platform upgrade" is he talking about? SandyBridge is out but it is DUAL channel, not triple. So 16Gb, not 24...


For contrast, I am working on an honours project proposal to develop a blogging system (yes, I know, but I have a novel twist) and I plan to target it at 64MB VPSes -- partly because at that level it's price-competitive with shared hosting.


For "price competitive with shared hosting" you ought to be able to a lot better than that? I pay 19.95 a month for what is technically a variable-RAM instance (pay-per-resource), but I use somewhat more than 64Mb and don't come even close to reaching my actual limits.


What I'm driving at is that a lot of people used shared hosting for running a blog because for a long time it's been the cheapest option (Dreamhost charge $7.95/mth, for example). VPSes may overtake as the preferred option.

I predicted this happening in a piece I posted in 2008.

http://clubtroppo.com.au/2008/07/10/shared-hosting-is-doomed...

And the HN discussion. http://news.ycombinator.com/item?id=241952


Ah, a tighter definition of "competitive with". That's fair. I've been around long enough that a VM image for $20/month is still a bit mindblowing when I think about it too much. :)


This reminds me how some companies still run old web apps on huge server farms made of boxes with 4GB of RAM. They're basically building in a limitation so the programmers can't try to gain performance boosts via memory caching even if they wanted to.

Also consider that though the article indicates ~$900-1000 for a full 24GB system, it may be only ~$3000 for an S5520SC-based system with 96GB. I suppose one would be good for horizontally-scaled web caches and the other for big databases or whatever funky app might make tons of random seeks over a large dataset.


With 24 gigabytes of memory I could even avoid using the HDD.


but what if you have a power cut?


I was thinking more or less the same. I'm running on a 40GB SSD now, with about 25GB to spare...


Since most applications are still 32-bit, most users are still OK with 4GB. Case in point - games on PCs. I upgraded my video card at least 3 times in the last 3 years but the main memory of the PC has always been 4GB and I have no problem with any game.

Sure I am really glad that my work PC has more than that, so I can open several IDEs with large solutions and not worry about memory, but for home - I still don't see a need to upgrade.


> To me, it's more about no longer needing to think about memory as a scarce resource, something you allocate carefully and manage with great care.

Why am I not surprised to hear that from Mr. Atwood?

Yeah, 24GB is nice and stuff but the "not care about memory" part rings my alarm bells. I give him 2 or 3 months of careless coding till we see a "64GB of RAM ought to be enough" post ...



