Ask HN: When did computers get 'fast enough' for you?
29 points by karmakaze on Feb 21, 2022 | 88 comments
Unless I'm gaming, doing machine learning, or the like, computers have been fast enough not to make a difference to me for quite a while. I was trying to recall any pivotal moments but have to go pretty far back. I'm also excluding overcoming slowdowns from abstractions because we could.

The last time I remember computers being very slow was waiting for C++ compiles in the late-90's/early-00's. Of course C++ compiles can still be slow for people, but I haven't had reason to use C++ since, because of languages like Java or Go.

We always want faster server software since many users can be hitting it at once, but even then an SQL db, indexes, and thoughtfully written queries generally get the job done. I remember a time when vertically scaling/federating dbs was almost not enough on bare metal, but add sharding and there's very little you can't handle unless you're FB/Twitter.
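
(By sharding I just mean deterministically routing each key to one of N databases so every individual db stays small. A minimal, hypothetical sketch of the idea - made-up hostnames, no particular library:)

    import hashlib

    # hypothetical shard hosts, not any real setup
    SHARDS = ["db0.internal", "db1.internal", "db2.internal", "db3.internal"]

    def shard_for(user_id: str) -> str:
        """Hash the key so the same user always maps to the same database."""
        digest = hashlib.md5(user_id.encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    # All of user 42's rows live on (and are queried from) shard_for("42"),
    # so each shard stays small enough for ordinary indexes to do the work.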

[The slowest things I've run into in the last decade were Spring Boot startup, and a React front-end where running all the asset assembly (TypeScript, CSS, images, polyfills, what-have-you) took over 30s to reload a page.]




Computers - a long time ago. The software that runs on them? That swings back and forth. More specifically, I feel like we're in a "slow" part of the pendulum swing, and I keep hoping it will reverse directions.

I mean, my $400 crappy laptop can run Factorio flawlessly (because the devs give a fuck about performance), but gets bogged down while scrolling on a webpage. It can run gvim (or Sublime Text or Notepad++) with dozens of plugins as fast as I could wish for, but if I pull up a "modern" IDE, suddenly my computer "can't even". A Logitech mouse using default drivers is smooth as silk, but the moment I set up branded drivers, the computer gives up for several minutes.

It, frankly, makes me pretty sad.


They're still not, due to software bloat.

Technical and human-imposed restrictions mean iOS devices tend to be the closest to feeling "fast enough". Desktops are hampered by little always-on utilities being written in fucking Electron and eating a GB+ each while also burning lots of processor cycles while not doing anything. Or feature-weak productivity suites living in the browser and taking 20x the resources of a more-featureful native equivalent from years ago (or well-written, native, modern alternatives). Or bad low-level building blocks leading to e.g. UI freezing, stuttering, or jankiness. Or vendor malware (Windows) doing god knows what.

I've only seen a couple operating systems that seem to give what I'd call enough priority to keeping the UI snappy, so the user feels like they're really in control and not just begging the OS to give them some time, in competition with their own software. iOS and BeOS... I struggle to think of another. QNX Photon was pretty good, but I think that was a side-effect of design decisions made for other reasons, though I guess that still counts. Those would probably struggle to remain so while running five different JS + HTML layout engines at the same time, like a modern OS is commonly subjected to. Human-imposed restrictions on iOS are what keep it from suffering that fate.

However I don't think the bloat has yet overcome the benefits of SSDs, in particular, so things are still overall better than they were in, say, the 90s or early '00s.


Steadily, though at times sluggishly, going along on my laptop, which is (pretty much exactly) 14 years old today, with an HDD ... I dread the next few years ... almost everyone (developers/users) will take the luxury of fast seeks and generally fast disk IO on SSDs for granted ... and this lack of incentive for strict IO optimisation will lead to medium-to-big everyday applications such as office suites, web browsers and others becoming pretty much unusable (on my machine).

Paired with ever increasing RAM usage it'll make my machine obsolete.

On the other hand I'm thankful for the selection of quite efficient applications that still allow me to do most things even with the luxury of a GUI .. to another 14 ^^


Sadly, I think the benefits of SSDs are about to be overwhelmed at least for gaming. Some 'direct storage' APIs are about to hit the market, which will inevitably make SATA SSDs (which were plenty fast enough, thank you) into the new spinning HDDs in terms of subjective performance.


Until recently I thought my 2015 MacBook Pro was "fast enough", and I think arguably it still is for casual usage. But I've just gotten one of the new 2022 MacBook Pros, and for work usage it's making an absolutely huge difference. Things that I had to wait for are now instant, my editor's language support is much more responsive, and my computer no longer lags when video calling and screen sharing.

The slowest tasks I had before were probably live-reloading a React app, which has gone from "takes a few seconds" to "pretty much instant", and compiling a fresh build of an iOS app, which has gone from ~20mins, with my computer lagging slightly while doing other tasks, to ~3mins, while handling other tasks perfectly fine.

It is almost 10x faster (2x faster in single core, 4.5x more cores) so perhaps it shouldn't have been surprising.


I have a 2015 Macbook Pro for myself and a 2021 Macbook Pro for work, and I still don't notice much difference between the two. 2021 is a bit snappier, but 2015 is still quick for everything I use it for too (writing, coding, browsing the internet). Coding a React app on it right now, actually.

I have a separate gaming desktop for gaming and game development.


Hmm... that's interesting to hear. My 2015 also needs its battery replaced. I wonder if that could be causing it to throttle.


Thermal paste on CPUs usually only lasts about 5 years. Might want to consider replacing that, might help with any throttle issues. I've considered doing the same on mine, just in case.


I believe the biggest leap is in latency. It can still be improved, but it is a very nice and very welcome new trend in an area that was ignored for so long.


For everyday usage, it's pretty simple and can be traced back to a moment in time: computers without SSDs are not 'fast enough', computers with SSDs are.

For programming, the 2 applications where computers aren't quite fast enough are compiling massive C++ projects and getting Julia to first plot. I recently leapt from having ~6 cores to having ~32 cores, and this has helped with the C++ but not really with Julia - I think that's a programming issue.


This is why a language like Go is a jewel. It was made to compile very fast, disregarding our current era of software bloat and turtles all the way down.


There are two types of tasks that run on my computer. Tasks I told it to do, and tasks someone else told it to do (with my begrudging acceptance).

The former category has always felt fast to me. Word processing, programming/scripting, number crunching, compiling code, viewing documents, etc.

The latter category is what feels slow even to this day, on high-end hardware.

- OS/software updates that take hours (mostly on Windows/macOS).

- Janky animations that drop frames (Discord "stickers").

- Webpages overloaded with useless JS.

- Horrendously slow backend webservers (technically nothing to do with my computer, but it affects my UX nonetheless)


Hm… I would say in 2013 when I bought a laptop with an SSD.


I'll second this. Getting an SSD was the last big "oh shit" upgrade. Now the only things I recognize myself waiting for are really heavyweight programs, like yes, gaming.


Third. Still using my 2013 mb air. I’ll probably replace soon but not because I’m having performance issues.


I still use mine too! It has started to feel slow though compared to newer Macs. Not sure why.


While there is often a perceptible difference between a SATA SSD and an NVMe, it's nothing compared to the gap between a spinning disk and any reasonable SSD.

If you're doing anything on the web, a good adblocker makes a big difference.

If you have 'enough' RAM, more doesn't help.

For most tasks, CPUs are wasted most of the time. Two 3GHz x86-64 cores are good enough not to be noticed on a general desktop.

Latency remains more noticeable than bandwidth at modern levels: the difference between 100Mb/s and 1Gb/s is only noticeable when doing bulk transfers.

All that being said... most of what we use our end-user computers for is wasteful overhead. Too many layers.


I agree, for me computers mostly became fast enough when SSDs became standard.


Same here. It was criminal how long Apple sold 5400rpm drives in macbooks.


Computers became fast enough for me with the advent of the 25MHz 486. I no longer had to write drivers in assembly and could write them in C instead! It was like magic! Writing C code, C code!, and it being a driver! I was blown away. My productivity soared! (Though my knowledge of x86 assembly has atrophied somewhat).

The next bump came with the advent of the 2GHz quad-core i7. Then you no longer felt compelled to work with compiled languages (for purposes of this discussion I'm considering Java and C# to be compiled languages). There were many tasks and applications where interpreted languages were "good enough" and allowed you to get solutions into the hands of users faster. Building serious business applications in interpreted languages was something that never seemed possible when writing drivers in assembly!

Finally, the most recent bump has been treating AWS as a runtime platform. Not only can I create solutions in Python, a language I absolutely detested back in the 90's, but hell, I don't have any servers to manage any longer! Things scale under load and HA is easy to achieve. I no longer even think of AWS as being "the cloud", it's a compute platform - and a very powerful compute platform at that!

I've come a long way from writing assembly for x86 processors with a few MHz of compute power and RAM measured in kilobytes!


I'm old enough to remember typing (and also clicking? it's getting fuzzy) in Windows 95 and having to wait for the computer to catch up. Amazingly, actions generally completed eventually, but I wanted to do things faster than the computer could.

I used a 2013 Macbook Pro (4 core, 16GB RAM) for a really long time and had very few complaints, but I then graduated myself to a desktop machine with 12 cores and 64GB of RAM, and I can finally say that this is fast enough. There is almost never anything to wait for, and if there is, it probably isn't the fault of the hardware.

The biggest bottleneck for computers over the last 30 years, I think, is really network bandwidth. Again, I'm old enough to think that 768k is probably enough for just about anything, and if you really need to transfer a large chunk of bytes, just go to sleep while the computers are working, but even network bandwidth is pretty cheap these days. 1Gb/s residential connections, 10Gb/s LAN. We're good to go.

The problem is that these innovations mean that we can get away with building less efficient software. Ultimately this means that no matter how many computing resources are available, I think we're likely to find a way to use them all, and more.


Pentium III. One of my favorite computers was a dual Coppermine at 1GHz. I know my computer now can deal with much heavier multimedia workloads, but the experience of sitting at it and using it is the same at best, and sometimes far worse. Recently I started a new job where I need to use Microsoft Teams and it brings the computer to its knees. None of this advancement matters. It offers me nothing.


Oh yes, I remember those times. I was already into multithreading with OS/2 and Windows NT, before the multi-core days. I remember my first dual-slot Pentium IIs and dual-socket Pentium IIIs. The best feature was that you could actually multitask with Netscape/Mozilla open, because it used 100% of one of the CPUs, leaving the other free for everything else.


If I had a sufficiently fast computer, I would love to "solve chess".

Crunch through all possible games to figure out if ...

- White always wins

- Black always wins

- Every game ends in a draw

... when the players have enough computing power and all uncertainty is removed from the game.

But there probably won't be a fast enough computer in the foreseeable future.


Ever. The number of possible chess games is larger than the number of atoms in the universe. There is no way to store all the progress you need to find an answer.

Maybe someone can mathematically answer the question, but that won't be by exhaustive solving (though it might be worked down to a subset of all games that are then solved exhaustively).
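
For a rough sense of scale (back-of-the-envelope, using Shannon's classic estimate): the number of possible chess games is on the order of 10^120, while the observable universe contains roughly 10^80 atoms. Even a fanciful machine that stored one game per atom would come up short by a factor of about 10^40 (10^120 / 10^80).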


> because of languages like Java or Go

Sounds like you haven't spun up a large project recently :) Anything using Maven has terrible build times, and Go isn't much better once you start bringing in a ton of dependencies and codegen frameworks for stuff like protobuf and dependency injection. Java builds can easily take 30 mins and Go at least 3-4 mins.

Don't even get me started on MacOS Docker IO performance. Every time something gets faster we fill it up with more inefficiency.


I did mention working on a Spring project, which was a monolith that did use Maven. The slow part wasn't Maven or the dependencies; it was the Spring startup, with all the dependency injection and filter loading taking most of the 2 minutes. There was also some Thrift generation, but again that was rather quick.

I would classify this as self-inflicted as I don't believe there's anything intrinsic to what's going on for it to be as slow as it is, only poor implementation of extensibility or other ideas (maybe JPQL, idk).


Oh, that's a good point, and one that I've forgotten about.

These days most of my work is in Python, with some Terraform and shell scripting thrown in. In 2019 I was working on JVM projects: both Clojure and Kotlin. The JVM build tooling is painful, even on a very powerful machine.

> Don't even get me started on MacOS Docker IO performance.

This seems to have gotten much better for me. It sucked for a while with the introduction of the M1, but it seems to have gotten back to about where it was, even when using x86_64 containers.

I've been playing with aarch64 containers on my personal M1, and they seem to be significantly faster. I'm expecting (and looking forward to) ARM making big inroads into the SaaS hosting arena. It seems like it would have significant benefits in terms of power consumption at the datacenter level, and with more and more developers moving to ARM (via Apple Silicon), it will only get easier to develop for.


Just curious, how big is a project where a single source file edit takes 3-4 minutes to build? Do you have an NVMe disk? At least a quad-core CPU?

I've seen some pretty big projects compile way faster than 3-4 minutes, even with protobufs.


Somewhere around late 2016 or early 2017, at which point most machines I used had SSDs for system drives and a cheap SoC was running video from the media array to my TV (a Pi3 at the time, now a Pi4 is doing the job as the 3 struggled with x265 at 1080p). I'd just put together a new main PC with a GTX1080(6Gb). The biggest upgrade in that build was the 1080, which is hardly taxed ATM as I've stopped playing much by way of fancy games (due to a mix of having picked up other hobbies and the rest of life getting in the way).

I have upgraded my main desktop recently, late 2020 IIRC, because I started doing a bit of video recording on it. The old CPU was fast enough but the new one does in less than an hour what would have been an overnight task for the old. The old kit was handed down to improve the home server, mainly to give it more RAM rather than for speed (the old motherboard in there wouldn't take more than 16G, at least not officially).

The other “fast enough” question is home Internet. For me that was probably 2011, when FTTC became available with 10mbit upstream (more is useful, but 10mbit is fast enough). For downstream I'd say even earlier, with 17mbit down (the 1.4mbit up I had then was much more limiting); anything that didn't happen quickly at that rate wasn't urgent enough to leave running while I did something else. Of course what I do has grown, but the connection has grown too, so it's stayed in the “fast enough” range. I'd like faster either way (currently up to ~76down/17up, though right now ~50/10 after multiple disconnects, I think due to the recent storms - my last section of line is strung down the street from a pole), but that would be luxury, not something I need at all.


Computers felt pretty fast for me around 2010-2014 era.

That's around the time when quad-cores became cheap/ubiquitous, SSDs were a huge performance improvement, and Windows was in its good era (W7/W8.1) and wasn't the dumbed-down yet bloated tablet OS that we have now.

Also, our ISP was doing multiple free speed upgrades and we went from 25/25 to 100/100 while paying the same amount.


It depends on the OS/application. My accelerated A500 at 14MHz is perfect for playing VIRUS; my accelerated A3000 at 25MHz makes the game almost impossible to play. The A500 is too sluggish for PageStream and PageRender 3D. Both these machines are running SCSI HDDs, so even sequential disk access is a tad slow (ha) compared to what's common these days. But for a lot of things, even running on 35-year-old hw, the classic AmigaOS runs just fine compared to Windows 10 on an i7 or Xeon CPU, etc. And you really can't beat Autoconfig and Datatypes, and all the other brilliant efficiencies of the OS. How much of today's CPU/RAM/NIC/SSD performance is sucked up by in-your-face advertising and behind-your-back telemetry?


That depends a little.

For every day use - since the first SSD I've not really had problems.

For games - I'm not usually one to chase fresh AAA titles, so either my 2008ish Core 2 Duo or my 2012 i5. That was 4 years on the first, and then I only upgraded again in 2019, after 7 years, while still using the machine mostly for games.

Work is a whole different thing though. From 2013-2017 I had an i7 (laptop), which was kinda fine as I was "only" doing Java, Python, ops stuff etc - so a huge SSD and a lot of RAM were more important. At the end of 2017 I started doing mostly C++ and they gave me an i5 laptop; I hated every minute of it because of the compile times. Now I'm not doing so much C++, so an i5 is fine again.


I am still waiting on ≈45 minute software builds a few times a week and am constantly annoyed by slow-loading web sites. I don't expect computers to get fast enough "to not make a difference" for daily tasks in my lifetime.


When SSDs became the norm, performance stopped being an issue for most of my work. I can still push a machine to its limits with some of my work, but that’s not a surprise: modeling and simulation generally can saturate any system you use.

The only times I feel performance issues these days are in the self-inflicted category thanks to developers who adopted the write-once/run-everywhere/consume-all-the-resources mentality. I’m pretty convinced if those developers vanished and we had native apps for most things, performance would be a non-issue for most computing users who aren’t doing heavy gaming or resource intensive analysis tasks.


For me, there was a clear transition when SSDs came on the scene. Prior to that, when using spinning disks (particularly in laptops or laptop-like scenarios such as iMacs), you could always tell when the computer would start to thrash and things were getting bogged down. Your productivity is halted until the computer gets its act together. Now with blazing-fast SSDs, that _rarely_ happens. It was one of the early draws of tablets—no matter what you were doing at any single time, the OS as a whole remained fully responsive. Now we generally have that experience across all computers and computer-like devices. It's awesome.


It's been stop and start. Computers are fast whenever I do things that haven't changed in a while, where I know the workflow and can expect stability. They're slow when I'm confronted with more things to learn and it's buggy and uses new tech stacks generously.

The feeling of fastness isn't just the literal latency of pushing buttons and seeing things appear, but also how well the task has actually been optimized: sometimes "sending an image" can feel slow because the task has been made more complex, for example, because I don't know where the image was saved.


Computers became "fast enough" in the Pentium 4 days for me, which is when I was finally able to properly multitask. I couldn't do much simultaneously on my Pentium 3, as I recall (e.g. couldn't browse the web and listen to mp3s at the same time without occasional distortion). Installing Gentoo took hours.

I built a P4 box with 15k RPM Seagate Cheetahs at the time, and on Windows XP, everything loaded near instantly. Things feel way more bloated nowadays, though everything still loads near instantly, so I think we're roughly even with what we had once threaded processing came out.


> e.g. couldn't browse the web and listen to mp3s at the same time without occasional distortion

This comes from most desktop operating systems being shit at the task of being a desktop operating system. BeOS could do that just fine on a fast-ish Pentium I. Windows and Linux at the time, couldn't do it on hardware twice as powerful (and I don't think either's gotten much better about prioritizing media stability and user input performance)


Maybe the early 2010s? SSDs were ubiquitous and cheap enough for at least the OS and the most commonly used applications and games. Around that time CPU performance plateaued for a while. There were improvements but nothing like going from single core to multi-core, or the raw speed improvements from earlier generations. The processor I bought in 2011 easily lasted me until around 2018 without issue or any real desire to upgrade (and that was for gaming, if all I was doing was general purpose computing I'd still be using it).


1999, when I built a dual Pentium Pro machine - overclocked the CPUs to 200MHz (the CPU was available at that speed, but I found a deal on lower-spec CPUs). It did everything I asked for many years, but it was getting old and the siren call of laptops got to me. I'm now on a Pinebook Pro, and the speed is plenty fast for most everything. (The screen, trackpad, and keyboard leave something to be desired, but the CPU is fine.)

Now at work I often am rebuilding millions of lines of C++, there I'm still screaming more more more.


Curious, where's the slowdown in C++ builds these days? With precompiled headers and incremental dev compilation, what's left - disabling optimizations, the linker?

Also maybe cheating here, but are there network peer-to-peer build systems that work out-of-the-box?


I've never got precompiled headers working. Incremental dev works well, but I'm often switching branches and then you are doing a full rebuild. (my workflow is not typical!)

I am a maintainer of icecc, it works in some cases, but work from home is one where it doesn't work.


Ah right, switching branches. I usually keep two checked out copies (not C++ but for dependency updates), one for my main development and one for others.


Toshiba NB-305, a netbook with a fairly early Atom processor, which normally would have been too slow to use really, but with an SSD it was perfect.

The processor was pretty weak, but that led me to be careful with my makefiles, and to write scripts so that LaTeX could be compiled asynchronously after writing in vim.
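
(Roughly the idea, as an illustrative watch-and-rebuild sketch rather than the original script; the filename is made up:)

    import subprocess
    import time
    from pathlib import Path

    TEX = Path("notes.tex")  # hypothetical filename

    def watch_and_build(poll_seconds=1.0):
        """Recompile whenever the .tex file changes, so the editor never blocks on LaTeX."""
        last_mtime = None
        while True:
            mtime = TEX.stat().st_mtime
            if mtime != last_mtime:
                last_mtime = mtime
                # nonstopmode keeps pdflatex from pausing at errors
                subprocess.run(["pdflatex", "-interaction=nonstopmode", TEX.name])
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        watch_and_build()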

Nowadays my builds are slower because I have too much processing power and got lazy about working out dependencies. Everything is a little annoying because I don't bother hiding the limitations.


They're not. I have a 2017 Macbook Air and gmail frequently takes many seconds to display an email when I click to open it. I used to use Mozilla Thunderbird and emails loaded basically instantly. I encounter minor annoyances like this almost every time I use a computer. Mostly I don't even consciously notice them anymore since these things happen so frequently, but if they magically disappeared I'm sure the change would feel incredible.


Hasn't happened yet. A regular job at work involves kicking off a process that spreads itself across 100 nodes and takes a week, munching through tens of terabytes of data. Even at home I'm processing 30GB-sized datasets, which understandably takes a couple of hours. And editing an hour-long video means a lengthy stage of thumb-twiddling.

I think if computers were faster, we would simply invent new tasks that take a long time with them.


Probably 10 years ago with the first SSDs. However, it feels like software developers think they can afford to write worse code than ever before because it's still 'fast enough'. Or, on the other hand, if it's really slow they blame the hardware and not themselves, which drives me nuts.

On multiple occasions I had to force them to fix some completely abysmal performance regressions in our software.


13" 2013 MacBook Pro with 16gigs of Ram. Fast enough. All slowdowns were software features perfected with a fraction of the resources more than ten years ago.

The amazing thing about services such as Google Stadia and GeForce Now, combined with high-speed internet, is that I will never need a gaming machine again.

And, I can now use virtual environments for high horsepower development environments for cloud work.


I was doing Yocto Linux development with an older computer, spending ages building OS images. Recently I upgraded to a Ryzen 5950X and now builds complete in a few minutes. They could complete in a few seconds or milliseconds, so it's still not "fast enough" for me. I think I can always come up with something useful to do with an even faster computer.


Believe it or not, I don't really use computers outside of work - no gaming, video editing, etc...so that impacts my answer.

The NUC8i5 was the first time I felt like machines weren't really getting faster year after year. I bought a 4x4 last year just because I assumed things had changed, and I don't really notice any difference in day-to-day usage.


A couple months ago I took on a project that involved working with Coda and Figma, with a company on the Google productivity stack (Gmail, Meet, GCal), and the workload (mainly RAM requirements) really crushed my computer. I had to get a new one with maxed-out RAM, and now I'm fine.

So I’d add web-based or electron-based apps to your “unless” list.


Yes absolutely, I thought of calling out Electron but instead went with a classy 'slowdowns from abstractions because we could' self-inflicted category.


For me, SSDs pushed me over the line. I've been running Linux on Dell's highest spec XPS13 for five years and recently updated to the latest model. The only thing either the new or previous laptop labours at is video transcoding, but even that takes a fraction of what it took previously. I'm happy.


Every new computer I've bought was fast enough but becomes completely unusable over time.

The one notable exception was my MacBook Pro 2019 which drove me insane because the stupid fans were always running.

Recently, my MacBook Pro 2021 M1 Max with a lot of RAM is running perfectly and I've never heard the fan on it.


Around 2012, when a Retina screen, a fast SSD, and an 8-core CPU were something you got in a base-model laptop.


Computers have always been fast enough; software has almost always been too slow, recently more and more so.


Around Haswell (4th gen Intel), but I avoid using bloated software, which is extremely common, so YMMV.


Haswell[1] definitely felt like a culmination of a lot in flight. June 4, 2013. The only reason I eventually upgraded was because I found a very cheap OLED 2-in-1 on ebay, and it made a better mobile system than the refurb Dell Venue 11 Pro (7139) 2-in-1 that I'd been using for a long long time.

Up until then I still felt like the Core 2 line - with its Q6600[2] icon - was vaguely modern, & "fast enough". Haswell wasn't even that much radically faster, but its platform/power-management/feature-list was so much better. Great chip, really great mobile chips. The Q6600 dates from January 7, 2007.

That makes me think of the other "fast enough" chip that made me so happy, the Pentium M[3], a low-power chip derived from the Pentium III design, from March 12, 2003. Changed my idea of what a laptop could be. Got one a couple years later as my first work machine, lasted quite a while, no complaints. It's notable to me that even back in 2003 we had decent IPS 1080p displays and 8+GB ram notebooks, nice low power dual cores. Some weren't even that expensive! I want to say my friend got a decent Dell for <$1200 around then, so equipped.

Last, the Abit BP6[4] motherboard with dual Celerons was maybe not fast enough, but it felt like a gobsmackingly huge amount of power and was a huge change in quality of life. At a price I could afford! Having dual CPUs was such a huge upgrade.

There's a bunch of other areas I'd consider too. I'm not sure if I've ever felt like video cards were enough, always wanted more. I'd love to get a fancy Optane drive.

[1] https://en.wikipedia.org/wiki/Haswell_(microarchitecture)

[2] https://en.wikipedia.org/wiki/Kentsfield_(microprocessor)

[3] https://en.wikipedia.org/wiki/Pentium_M

[4] https://en.wikipedia.org/wiki/ABIT_BP6


Like many here, I'd say computers for daily use got fast enough with SSDs. Processors haven't been the bottleneck for a long time.

I bought my current laptop in 2016 and it's fine. I only replaced the old one because its screen died.

So I guess for at least a decade, nearly.


Still not. I could do with a couple of orders of magnitude more for better audio DSP.

No-compromise 3D video rendering would be interesting too. Not game quality - which is very good, in limited ways - but full truly photorealistic render quality at game frame rates.


The Intel Core i5 3320M in the Thinkpad x230 is a CPU from 2012, and I reckon for me that could be the turning point where even laptop CPUs were about good enough. Desktop CPUs have been good enough for a bit longer than that, maybe mid-2000s?


Around 1991

It was the first time the computer was not slowing my workflow: AutoCAD and Turbo Pascal running on an 80286 with a floating-point coprocessor and a 50MB HDD.

Previous hardware/software stacks on 8086/Tandy/Osborne/PCjr... were not fast enough.


Actually this was true for me too. I remember Borland, the Brief editor, and MS C with CodeView around that time as well.


With the notable exceptions of advanced use cases, such as web browsing and the I/O speed of the SD card, the Raspberry Pi 4 is fast enough for most common computing tasks (assuming you prefer vim over IntelliJ/PyCharm).


Web browsing is an advanced usecase?


There was a small dash of sarcasm in my previous comment. There is no reason why web browsing should be slower on a Pi4 than on a Windows XP machine with 512MB of RAM from 2001.


I used to work on an 8086 networked computer with multitasking. It booted in 12 seconds, to the shell.

But it took an hour to compile 9MB of source code (our operating system).

So it's all relative to the task.


While processing and download times have gone down so much, response times have been going up with every animation improvement.

The last time things felt slow was loading a tape on a Commodore 64.


I have a 2016 desktop with an E5-1620 CPU, 16GB RAM, and a Quadro K2200. I never feel like it's slow.

Browsers are the main thing which seem to gobble up resources, specifically memory.


Computers have been fast enough for me since I no longer wanted to play AAA FPS games, i.e. probably around the year 2008. That was also the last time I actually built a computer.


Lots of times.

1. When I used an Acorn Archimedes.

2. When I had an AMD K6-2 and a Matrox Mystique and ran tkdesk

3. When I had a Pentium III ‘Coppermine’ and ran KDE

4. When I had a dual Athlon and a Western Digital Raptor

5. When I first got an SSD

6. M1 Air


My Vic 20 was plenty fast for me. It needed more RAM.


When I can play Elden Ring in 4k with all graphics options set to High.

Next year, of course, there'll be a new game with even higher settings.


I do embedded systems. The 68040 was fast enough (1990). The 68060 was very much fast enough (1994).

For desktops, probably 2000.


Still not fast enough. When I can compile the Linux kernel in less than the time it takes me to blink, let's talk.


My personal laptop is a 13-year-old x201. It's "fast enough" for 99% of my uses that aren't gaming.


I bought my boy a basic gamer rig in 2012. I'm using it now because I can do everything I need to do on it.


I wanted ECC, so I got a 2015 Xeon 4c/8-thread E3-1230 v5 (cheaper than the i7 of the time), 32GB ECC RAM, a GTX 1070, and an SSD. It's been reliable and fast. I think I paid $1,500 for it (not including two 27" 2560x1440 monitors). I do admit to upgrading from a small SATA SSD to a larger M.2 NVMe SSD, which did seem to boost some things nicely, especially working with large directories of audio, video, and photos.

Had a few friends buy $2k-$3k laptops around then, all of which have been replaced years ago.

Building software (unless it's a big Rust project) is quick enough to not care. Occasional gaming (random Steam games, Warzone 2100, even Valheim) seems fine full screen. Even watching 4K downscaled to 2560x1440 seems fine.

Sure, 7 years later I'm considering an upgrade, but I'm not sure I'm going to notice. A zillion terminals and Chrome tabs don't seem to slow it down at all. Looking at PCIe 5, DDR5 (twice as many channels), and a mid-range Alder Lake, or Zen 4 coming out later this year. I would like ECC, so I'm leaning towards an Alder Lake-flavored Xeon (when they come out), or a Zen 4 desktop or laptop. Or maybe an SFF (typically laptop parts in a small desktop case with an integrated GPU). The new "Zen 3+ and RDNA 2" chips sound like plenty for the next 8 years.


For me, speed peaked around 2005.

Today I have 32GB of RAM and can barely run Chrome, Acrobat, and Word.


It depends on the use case, really.

Outside of gaming, I've not felt the need to upgrade for raw performance reasons since ~2012. That was the year I bought my first MacBook Pro. That's not to say that "Mac" was the turning point - that MBP was just the first laptop I've owned where the build quality was sufficient that it wasn't the limiting factor for device longevity. That particular laptop lasted me until late 2016, when my youngest daughter spilled a full glass of milk onto the keyboard. The one I bought to replace it was thinner and lighter, but only enough "faster" to be barely noticeable.

Most of my personal and professional computer use relates to development. There's a point where what I'm working on grows complex enough that _no_ laptop is going to be sufficient, and I switch to using some sort of hosted runtime. A powerful desktop would push that point a bit further, but again, not enough to be worth the added expense and inconvenience.

I use CLI tools 90% of the time or more; the only real use I have for a GUI is a web browser. As a result, I've used some pretty limited environments productively and without real issues - I used a Raspberry Pi 4 (4GB) at home for a few months between "real computers" a couple of years ago, and went without a laptop for almost a year before that. I used an iPad with a Bluetooth keyboard and a remote development environment (a Digital Ocean droplet running ArchLinux, using Blink+MOSH), and Safari was fine for checking incremental progress.

Lately I've become more active with my photography and have begun doing some videography as well with various drones. The M1 Macs are substantially better for that kind of work, but I don't see myself falling back into a frequent upgrade cycle.

TL;DR: My upgrade cycle doubled from ~12-18 months to 3+ years in ~2012 for general computing.


I was pretty impressed by a Quadra 900.


Last year, when I got a MacBook Air M1.


That's a very high bar. What do you do that was too slow before - video editing?

Maybe the problem was not having enough memory. I tend to max out my budget on memory with every system since my first 386 with 8 MB multitasking on OS/2 v1.3.


2002 after the release of Windows XP


I'm still waiting


Sandy Bridge in 2011.


I think it was right about when SSDs were coming out. I even kind of proved it.

I built this machine with latest greatest hardware in late 2010.

AMD Phenom II X6 1090T, 16GB RAM, 128GB SSD, CrossFire AMD HD 5900 GPUs

Various upgrades over the years, but 12 years later that was still more or less within the specs of today. I got my money's worth for sure.



