Hacker News
Intel's New Optane 905P Is the Fastest SSD (tomshardware.com)
122 points by rbanffy on May 5, 2018 | 118 comments



I'm surprised that many people don't realize that the jump from SSD to Optane for consumer computers is similar to the jump from HDD to SSD. Consumer computers don't run workloads at QD 32. Their workloads are overwhelmingly low-threaded, so random performance at low queue depths is what matters, and Optane beats even the best SSDs there by a large margin.

I guess modern SSDs are just so fast that going faster doesn't feel like a big deal. But if you want the best SSD for non-server workloads, Optane is the way to go.


I'm not sure this is true. Looking at the graph, the 970 outperforms the Optane for sequential reads, whereas the Optane outperforms in the 4K random read/write range. So it would seem that workloads which are more sequential than random, including watching/streaming video/assets, would run faster on the 970, and video game enthusiasts would still prefer the 970. If you're running a database, that 4K random read rate would seem to boost performance by quite a margin. Or am I missing something?


Normal consumer workloads are heavily biased towards 4K random performance (especially read). Sequential performance is largely inconsequential, both because it's rare (how often do you really suck in a full 20+ GB file at a single go?) and because consumer SSDs are fast enough at sequential performance that it's not really a bottleneck. 500 MB/s sustained is really fast enough for most users.
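To make the QD=1 4K random pattern concrete, here's a rough Python sketch of that access pattern (just an illustration, not a real benchmark like fio: it skips O_DIRECT for portability, so the OS page cache will flatter the numbers on a warm file):

```python
import os
import random
import time

def qd1_random_read(path, block=4096, iters=1000):
    """Issue one 4K read at a time (queue depth 1) at random offsets.

    A rough sketch only: without O_DIRECT, a warm file is served from
    the page cache and looks far faster than the underlying device.
    """
    size = os.path.getsize(path)
    nblocks = max(size // block, 1)
    total = 0.0
    with open(path, "rb") as f:
        for _ in range(iters):
            offset = random.randrange(nblocks) * block
            start = time.perf_counter()
            f.seek(offset)
            f.read(block)
            total += time.perf_counter() - start
    return total / iters  # mean seconds per 4K read
```

Against a cold file an HDD shows milliseconds per read here, NAND SSDs tens of microseconds; Optane's pitch is cutting that again at queue depth 1.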

Streaming video from the internet does not use the SSD at all, and a high-quality 1080p video file is maybe 5-10 MB/s of bitrate, you can easily pull that off a spinning HDD that was manufactured 20 years ago.

Video editing at 4K or 8K is one of the few use-cases where NVMe's sequential performance does provide a big benefit... assuming you are not editing using proxies.

Optane's QD=1 random-4K performance does present an opportunity for big speedups on consumer use-cases. But Intel really has to get the prices down if they want to see consumer adoption, right now there is an obvious benefit to cheaper SATA SSDs that allow you to get more data off spinning-rust drives vs a smaller, massively expensive Optane drive (even if it is incredibly fast).


I think GTA V or another AAA title could suck up 20 GB from disk to RAM/GPU in one go to load all assets (models, textures, audio).


I'm a little doubtful, on the basis that it would take HDD users about three minutes to load into the game (probably 4, by the time you start doing something with all those assets). I know HDDs are slow, but 4 minutes of loading is extremely excessive by any standard.

Also, on PS3 it runs in less than 256 MB of RAM... where is it loading these assets to? Most games have RAM consumption on the order of 4-8 GB in most situations, not all of that is assets, and not all of that is read sequentially.


Planetside 2 would take 4+ minutes to fully load on the HDD before I upgraded to an SSD. Around 40 seconds on the SATA SSD. Haven't tried it on the m.2 yet.


You would think that, yet new World of Tanks patch introduced 10-20 second map load times for HDD users :-) SATA SSDs load in 1-2 seconds.


Much less. Most gamers don't have 20GB VRAM GPUs. The game doesn't use 20GB RAM either, I think around 6-8 GB. So there's just nowhere to keep 20GB of loaded assets.


But not necessarily as a sequential read.


Video game loading is not improved past SATA SSD speeds, so there would be no difference between a 970 and Optane there. See https://www.youtube.com/watch?v=ecCA0gx_eZk


NVMe doesn't have much of an advantage over SATA SSDs at QD=1 in the first place. It really shines when you have higher QD or on sequential transfers, which are not typical consumer desktop workloads.

On the other hand, Optane does have a big advantage at QD=1, which is the point people are making here. Looking at NVMe as an indication of Optane performance is the wrong approach since it has very different performance characteristics.

https://img.purch.com/r/711x457/aHR0cDovL21lZGlhLmJlc3RvZm1p...

(of course you're not incorrect that games aren't really using access patterns that are optimal for superfast SSDs)


Sequential read/write is usually optimized for burst workloads on consumer SSDs. The Intel 900p can sustain its peak read/write performance for hours.

Anyway, the big difference with Optane is that its performance is more similar to having more system memory. I have not benchmarked this, but I believe a system with just 8GB RAM + Optane will run better than a system with 16GB RAM and a Samsung 970 Pro for workloads using up to 16GB of RAM. There is a reason Intel announced its Memory Drive Technology for the Optane DC P4800X: https://www.intel.com/content/www/us/en/software/intel-memor...

EDIT: adding a link to an lwn.net article explaining the use of swap/memory overcommit: https://lwn.net/Articles/704478/


> but I believe a system with just 8GB ram + Optane will run better than a system with 16GB ram and Samsung 970 pro for workloads using up to 16GB of ram.

No, not even close. PCIe is still way slower than DDR4. Once your working set grows beyond available RAM, the Optane SSD will be a better swap device than any flash-based SSD, but you still notice your system slowing down drastically from all the swap activity.

Intel's Memory Drive Technology is really intended for situations where you have multiple Optane SSDs, and a workload that wants a large amount of total RAM but seldom has a true working set larger than actual DRAM capacity.


Nothing will serve a workload requiring 16GB of RAM better than a system with 16GB+ of RAM. You don't even need to benchmark to know that much.


For comparison, write speed to RAM on DDR4-3200 is somewhere between 50,000 and 60,000 MB/s on a fairly normal Skylake workstation platform.


Nope, that's more like cache write speed. Single-threaded sequential access to RAM is typically more like 10-15 GB/s for current server CPUs: https://panthema.net/2013/pmbw/results.html - it's true that workstation machines tend to have somewhat higher sequential throughput compared to the server-heavy list at that URL, but it's more like 20 GB/s, not 50-60.


RAM access is not necessarily single-threaded. Unganged dual/quad-channel memory is effectively multithreaded, so his numbers are realistic even with CPU bottlenecks. https://www.techspot.com/news/62129-ddr3-vs-ddr4-raw-bandwid...

PS: DDR4-3200 in quad-channel mode is crazy fast and should break 100 GB/s, though still a long way from a 1080 Ti's 484 GB/s.


PCIe latency is probably the bigger issue.


This. Which is why Intel is working on Optane DIMMs.

Traversing the root complex can take up to 100 CPU cycles each way.


Yep, but Optane DIMMs/NVDIMMs are not a client product; they're on the data center product list. Persistent memory isn't just a concept in a textbook anymore. It's being realized now.


So was Optane at first and so were SSDs in general. DDR4 already has an optional spec for NVDIMMs, DDR5 will make it mandatory.


Please forget Optane and just think about Intel persistent memory, if you really care about performance/latency.


Or a system with faster RAM or faster swap device :-D


One of the reasons SSDs felt so much faster than HDDs was the massive improvement in 4k r/w. If Optane is considerably faster in that space then it would definitely be a consideration as your primary/swap drive.


> One of the reasons SSDs felt so much faster than HDDs was the massive improvement in 4k r/w.

Also because that was a bottleneck. Now the CPU is often the bottleneck, so I doubt you'd notice Optane (most people can't even notice NVMe vs. SATA).


Samsung outperforms Optane in sequential reads by 15-20% and loses at random reads by 3-4x (at low queue depth). And watching video won't show any difference; you can watch 4K video from an HDD. I don't know much about streaming, but I would be surprised if it makes any difference there either.

A typical database load is an example of big queue depths: you have dozens of queries executing simultaneously and you're interested in throughput. So a Samsung SSD is fine there (and RAIDs are even better), unless your database is atypical and serves only a single client with low latency requirements.


A typical DB may do an index scan on disk, which requires several times as many I/O operations per 8KB page (the PostgreSQL/Oracle page size) compared to a seq scan. These are executed at low queue depth.


OTOH there's concurrency and prefetching logic increasing average queue depth.


Are index scans supposed to be faster with prefetch? It's random reads at 8KB. I usually set read_ahead_kb to 8 on disks/tablespaces for indexes.


As someone who writes server-side software, I'm very excited about this product.

Having the ability to handle a large volume of random writes, relatively cheaply, makes building reliable, fully restartable software much simpler.

Many of the storage engines I make use of are B-trees or LSM trees these days. These engines are usually selected because they provide very good read performance and acceptable write performance.

Improved random-access writes only make these engines more attractive and performant. For instance, boltdb, which is similar to LMDB, requires 2 IOPS to perform a durable write. In the benchmark for 4K random writes, Optane achieved 180,000 IOPS. This gives us 90,000 writes per second, sustained. Possibly more at peak, since presumably that's how they get the claimed write IOPS of 550,000.
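Spelling out that arithmetic (the IOPS figure and the 2-IOPS-per-commit cost are the comment's numbers, not re-measured here):

```python
# Back-of-envelope throughput for a boltdb/LMDB-style engine on this drive.
random_4k_write_iops = 180_000   # 4K random write IOPS cited above
iops_per_durable_write = 2       # data page write + fsync'd meta page write
writes_per_second = random_4k_write_iops // iops_per_durable_write
print(writes_per_second)  # 90000
```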

That's a phenomenal number of writes. It makes in-memory stores like Redis irrelevant for a wide range of applications that would have required them not long ago.


FWIW Redis Labs already offers a Redis-on-Flash solution, which also happens to work really well with Optane: https://redislabs.com/blog/redis-enterprise-flash-intel-opta...

Full disclosure: I work for Redis Labs.


That's very cool, and I didn't know about this. Thanks!


Have you been able to get access to / test the server version of Optane? Looks like Facebook and Stanford did some research with it as a DRAM replacement in MyRocks DB:

https://dl.acm.org/citation.cfm?id=3190524


I haven't had the opportunity to test it myself, but I'm certainly looking forward to it.


If you think PCM-like byte-addressable memory for disk block caching is awesome, you may be interested in another PCM killer application: Intel persistent memory (pmem). They have already implemented pmem; it no longer lives only in a textbook or a simulator.


Most people probably didn't see the benchmarks showing how small the real-world performance gains were between SSDs running SATA-1, SATA-2, SATA-3 and M.2 PCIe. They don't understand that throughput is not the main factor in what made SSDs feel so snappy; it's the latency compared to HDDs.


Have you used one in a desktop? Can you feel the difference in speed between Optane and NAND the way you can feel the difference between an SSD and an HDD?


If you think it's a bit odd, the reason may be this: another killer application for PCM-like memory is persistent memory (pmem). If you only used PCM-like memory to implement pmem, demand would be rare and it might not be economical. But if you also sell it as SSDs or block-device caching, those products can contribute demand for PCM to support the other real killer application.


>I'm surprised that many people don't realize that Optane to SSD for consumer computers is similar to SSD to HDD.

Funny enough, I skipped over SSD entirely, and upgraded to Optane from HDD.

It was like trading in a Model T for a Model X.


I'm pretty happy with my 950 PRO NVMe. It hits 2.5 GB/s read (256KB), 1.5 GB/s write (128KB) in ATTO. Random IOPS are 276k read, 95k write as reported by Samsung Magician. And these numbers are all with a fairly full drive.

My biggest issue is that it's only 512GB, the biggest available at time of purchase (mid-2016; I only replace my systems every 5-6 years or so; a 6850K/32GB memory build was my limit then).

Also, benchmarking loads up the thermals, and when it hits 70°C after a little while, performance is throttled down somewhat. This doesn't happen in normal use, but I've nevertheless ordered an NVMe heatsink which I hope will be good for it.

I see the Optane is a PCIe solution. Seeing as my M.2 slot is filled, and no one makes U.2 drives (the only other spare slot for such things my motherboard has), that'd make it the perfect upgrade for me if I can fit it in between the GPU and SB card. Hopefully it'll come down in price a bit over time.


It’s not true. I bought the previous model and it wasn’t anywhere near the expected step up from the 960 NVMe (which is admittedly stupid fast already).


My old plasma TV is 600 Hz and I can't get $20 for it. Sometimes fast enough is fast enough.


Isn't that due to the very high energy consumption [1] rather than the refresh rate? Who would want a TV that consumes ~300-600 watts per hour?

[1] https://www.treehugger.com/gadgets/plasma-tvs-suck-electrici...


>300-600 watts per hour

Watt is a measure of energy/time. It doesn't make sense to say watts per hour.


Electricity consumption is traditionally metered per kilowatt hour, that is the amount of energy used sustained over the course of an hour.

I believe this is what the GP was referring to - the sustained usage of 300-600W over the course of an hour. Thus being very expensive to run for long periods of time.

In my area I'm charged 16.56 pence per kilowatt hour, meaning at an average consumption of 450W, that television would cost me in the region of £13.41 a month to run, assuming 6 hours' usage per night. Which is quite expensive.
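That estimate is straightforward to reproduce (wattage, tariff, and hours are the figures assumed above):

```python
# Monthly running cost of a ~450W TV at 16.56p/kWh, 6 hours a night.
power_kw = 0.450
hours_per_night = 6
nights_per_month = 30
price_per_kwh = 0.1656  # £ per kWh
monthly_cost = power_kw * hours_per_night * nights_per_month * price_per_kwh
print(round(monthly_cost, 2))  # 13.41
```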


> It doesn't make sense to say watts per hour.

Perhaps, but that doesn't change the fact that Plasma TVs are generally more expensive to run than modern LCD or OLED TVs. [1]

Pioneer's 9th generation Kuro KRP-600A had an operating power consumption of 478 watts. [2]

[1] https://www.rtings.com/tv/learn/led-oled-power-consumption-a...

[2] https://www.whathifi.com/pioneer/krp-600a/specs


That was true only for the very old plasmas. Newer ones were comparable to same age LCDs.


Can it handle a 144 Hz input signal?


AFAIK Optane is limited to Intel (whether that's motherboard support or manufacturers), and I would rather have us all adopt a standard with multiple suppliers, not a single company holding the strings for everyone else.


AFAIK Optane is an ordinary SSD. There are some Intel-only technologies, like using Optane for HDD caching, but if you just need fast SSD, you should be fine.


> There are some Intel-only technologies, like using Optane for HDD caching

That isn't intel only either[0], although IIRC their hardware implementation is intel only.

[0] https://en.wikipedia.org/wiki/Dm-cache


There is a big difference between Optane and an ordinary NAND SSD. Optane is PCM-like memory and is byte-addressable. So the salesman said it may have a longer wear-out time than normal NAND SSDs. Intel cooperated with Micron to make this system economical, and they can use Optane demand to help persistent memory thrive too.


It's not the "fastest SSD" on all metrics. My laptop's NVMe SSD, a Samsung PM981, benchmarks at 2100 MB/s sequential reads, whereas the Optane 905P measures only 1669 MB/s in Tom's Hardware bench.


You won't notice much difference between a regular SATA SSD and M.2 PCIe in normal things like opening apps, playing games, booting, etc.; only if you do 4K RAW video processing, constant file copying and the like. This new SSD should give you the first noticeable boost in normal apps compared to the rest of the SSDs. Pity Intel couldn't deliver its original promise of 100x lower latency; that would have been even more impressive.


I would imagine that the HN audience does more than consumer tasks.

Here is a real-world figure: I can compile my FreeBSD NanoBSD build in 30 minutes, compared to 2.5 hours on SATA3 SSDs.

Using a Toshiba NVMe XG3 mounted on a paddle card.

I see zero heat issues with the Toshiba.


Assuming enough RAM to hold the source tree, how does build time with root on mfs/tmpfs compare to the NVMe?


The 1000x latency wins are time to first byte from the media, not including transfer times.

The NVDIMM stuff Intel is working on helps a lot with latencies, but, as others have noted in this thread, NAND bandwidth is comparable to 3D XPoint in practice. So if you are transferring many KB at a time, you don't see the 100-1000x in practice. This breaks the performance assumptions made by all existing storage systems, so nothing but research prototypes are seeing anything close to 100x.

Most of these points are moot for NVMe drives, because PCIe latencies are so high. The reviewed model gets about 10x better latencies for 4x the price, and that's still significant.


Difference is noticeable, especially in boot.


Yes, but you won't feel like you just switched from HDD to SSD. Going from 15s to 12s is nice but won't make you super excited. This new Optane is the first SSD that might give you that feeling.


I think you're underestimating the improvement you get from dropping SATA and overestimating how much room for improvement is left for typical users. When I/O times become perceptually unimportant, changes to them don't matter as much, and developers are less incentivized to make things I/O-efficient. The HDD-to-SSD kind of change is hard to repeat.


Optanes reach their performance at much lower queue depth.


I'm surprised that Intel requires specific CPU support for their Optane solutions. Can anybody explain the rationale behind this? Why does every other SSD implementation just use the M.2/SATA standards?


> I'm surprised that intel requires specific CPU support for their optane solutions.

The only Optane-related thing that requires a particular CPU platform is their Optane Memory caching software for Windows. That's the very bottom of the Optane product stack, while the 905P is the top.

Intel's Optane Memory caching for Windows is built on top of their consumer platform's NVMe software RAID functionality. Intel's consumer chipsets have a mode where they hide any NVMe devices connected through the chipset, making them no longer appear in regular PCIe device enumeration. Instead, the drives can be accessed through non-standard interfaces on the chipset's SATA controller. This remapping ensures that only Intel's software RAID drivers can bind to the drives, which makes things a bit simpler for Intel.

This NVMe remapping feature was first added to Intel's Skylake generation chipsets, and Optane Memory was introduced with the next generation (Kaby Lake). Skylake systems didn't get the firmware updates necessary to add Optane Memory caching support (for booting off a cached volume) even though they had all the hardware capabilities and their firmware already had NVMe RAID boot support.

All Optane products released so far are standard NVMe SSDs and can be treated as such. If you want to use SSD caching software or software RAID from somebody other than Intel, it won't care whether or not your drive is an Optane product.


I'm not sure why they would compare to the Samsung 970 EVO (the budget model) and not the 970 PRO. It looks to go head to head in random reads/writes, and the 970 PRO is faster at sequential operations.


I too was surprised to see Optane compared with Samsung EVO. It is not the correct comparison to make.


Whoa, 575,000 IOPS for random read on a single drive.


A friend of mine tried an Optane SSD and didn't notice any performance increase in software compilation or anything else compared to a Samsung 960 SSD. So I don't expect to get much from such an upgrade and will just keep my current 981 SSD.


Compilation doesn't sound like an I/O-intensive workload; it's more CPU-intensive. Heavy DB usage or video editing, on the other hand, is I/O-intensive.


Compilation itself, no. However, linking can be very I/O-intensive, particularly if using static linking of large libraries. And packaging up of build products at the end of the process. These can often be serial rate-limiting steps in comparison with parallelised compilation, so can end up taking a disproportionate fraction of the total wall clock time.

While it can clearly vary widely depending upon the nature of what you are building, it can in some cases be worth using faster storage to shave many tens of minutes off the build time.


Compilation times didn't even improve when moving from HDD to SSD. That will just always be a CPU-bound task except for (maybe) some real edge cases.

https://www.joelonsoftware.com/2009/03/27/solid-state-disks/


The promise for XPoint was DRAM-like latency with NAND-like throughput (and...nonvolatility of course). I’d be curious to see some of these benchmarks against (say) ramfs, or see how well it works as a swap drive.


It's rather pointless to benchmark Optane SSDs against DRAM. Later this year we'll have Optane DIMMs that aren't constrained by the PCIe bus and will actually be able to demonstrate the latency capabilities of the underlying 3D XPoint media without all the overhead of NVMe and PCIe.


3D Xpoint latency is still several times slower than DRAM. 3D Xpoint also still suffers from limited endurance. 3D Xpoint NVDIMMs will be used as pmem block devices, not as primary RAM.

For DRAM-equivalent latency and endurance you need STT-MRAM, which has already been available in DDR3-compatible DIMMs. Both STT-MRAM and 3D XPoint will be available in DDR4-compatible DIMMs too, but only STT-MRAM will run fast enough and long enough to replace DRAM.


Is PCIe latency that much worse than main memory? I honestly don’t know; I was under the impression that both had latency in the 10s of nanoseconds.


NVMe devices using battery-backed DRAM or MRAM offer about 5µs access time, and current Optane SSDs are under 10µs. (NAND flash based SSDs with RAM-based write caches also have write latency below 10µs.)


So is the bottleneck really NVMe? Could there be a different protocol over PCIe that had lower latency?


NVMe was more or less designed to offer the lowest possible latency for a block storage protocol. You could beat it with direct memory mapping, but then you're limiting compatibility to systems with working 64-bit I/O addressing and storage media that doesn't require complicated management like NAND flash does.


You may be thinking of NVDIMMs and persistent memory, but those aren't available as consumer client products.


Sequential read is still outpaced by Samsung. So if you're optimizing for boot time or other reading-big-things tasks, this may be slightly worse. And certainly not worth the markup.


Samsung drives also rely on DRAM caches, which aren't power-loss safe.


There isn't anything wrong with using DRAM for buffering, as long as flushes are respected. Is that the case with Samsung?


No, Samsung respects the flushes, but it is also very slow at them. An Optane 900p will flush 4KB writes 5000x faster than a Samsung 960 Pro. https://www.servethehome.com/exploring-best-zfs-zil-slog-ssd...


Often it's not the case, which is why general consensus with SSD's for ZFS SLOG devices is to go with ones that advertise end-to-end power loss protection unless sufficient testing has been done to guarantee the drive protects against partial-writes and doesn't lie about a flush being complete.

Samsung has bigger issues with writes, though: almost all of their current drives are TLC NAND, which has extremely poor write performance. Short bursty writes can be fast (thanks to a chunk of SLC NAND cells acting as a write buffer), but sustained writes are terribly slow. Optane doesn't typically meet the same peak write performance, but it will beat most (all?) TLC drives on the market in sustained workloads.


The sustained write performance is why you might buy a Samsung PRO model. The PRO models have a lower burst write but can sustain high write speeds indefinitely, while the EVO model's speed crashes when it hits the limit.


So, single drive configurations - great.

Anyone using more advanced configurations (e.g. RAID) for these type of devices and what/how..

e.g. how to take advantage of things like this for large RDBMS installs which require N>1 devices for capacity alone (if not more throughput)...

Like: taking a GPU/PCI-slot-optimized server and throwing 8 of this style of device in there as a soft RAID 50, etc.

would love to perform some experiments, but I don't have a spare $20k lying around 'for fun', so real world battle tales would be great...


Linus Tech Tips did 4x 960s in RAID 0 and hit 7.5 GB/s sustained write: https://www.youtube.com/watch?v=lzzavO5a4OQ


Would something like this be possible to install on older motherboards that do not have an M.2 slot, or is there some limitation in the chipset that I don't know of?


You will need a PCIe slot, preferably an x4 one, and in the case of old chipsets a BIOS mod to sideload a UEFI NVMe driver in order to be able to boot from your new SSD.

https://www.win-raid.com/t871f50-Guide-How-to-get-full-NVMe-...


I am using a 512GB Toshiba XG3 mounted on a cheap paddle card. The board had no M.2 slot. I tested on Ivy Bridge and upward to Skylake and it was supported. Booting was supported on all but Ivy Bridge. So I would say any PCIe 3.0 board should be fine. Maybe next I will try Sandy Bridge; they only use PCIe 2.x.


If you want to boot from it your bios would need to support it, but it should work in any PC as a storage drive.


I believe all that is necessary is NVMe support in both the BIOS and your OS. Standard mini-PCIe NVMe drives are seen as PCIe devices and this is the same thing, just in a full size card form factor. Modern Linux based OSes will boot from it just fine, as will Windows 10 and Windows Server 2016. I don't believe macOS allows booting from NVMe PCIe devices on the legacy Mac Pro even with High Sierra, but the OS has native support for using it as a storage drive.


> Standard mini-PCIe NVMe drives

I don't believe anyone has actually made one of those. MiniPCIe is a different form factor than M.2; the latter is newer and has almost completely replaced miniPCIe. MiniPCIe only provides a single lane of PCI Express, while most M.2 variants allow for two or four lanes, and almost all NVMe SSDs support at least two lanes.


Sorry, you're right, that's what I was referring to. M.2 is still PCIe just with more lanes than mini PCIe, as you said. My point was it's all PCIe to the computer and OS.

Thanks for catching that! :-)


If you find a program takes a long time to open and you do it often, I recommend a RAM disk. Trade a bit of that RAM you don't use for blazing-fast loads.


For a read-only workload (such as opening a program), I don't see how that has any advantage over the OS's page cache.


The OS doesn't know what data is important; it can only guess. You can know that you want immediate access to data at any time even if you haven't used it in ages.

I doubt that's true in practice, especially on a desktop workload.


It can learn. If you take ZFS as an example, it has both MRU (most recently used) and MFU (most frequently used) caches. Thus it can adapt to retaining memory data which is accessed repeatedly as well as data which was just read which might get reused. It's not perfect, but it should do a good job with a sufficiently sized cache.
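That MRU+MFU split can be sketched as a toy two-list cache (heavily simplified; the real ZFS ARC also keeps "ghost" lists and adaptively resizes the two sides):

```python
from collections import OrderedDict

class TwoListCache:
    """Toy MRU+MFU cache: entries seen once live in `mru`;
    entries touched again are promoted to `mfu`."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.mru = OrderedDict()  # recently used once
        self.mfu = OrderedDict()  # used more than once

    def get(self, key):
        if key in self.mfu:
            self.mfu.move_to_end(key)
            return self.mfu[key]
        if key in self.mru:
            # a second touch promotes the entry to the frequency list
            value = self.mru.pop(key)
            self.mfu[key] = value
            return value
        return None

    def put(self, key, value):
        if key in self.mfu:
            self.mfu[key] = value
            self.mfu.move_to_end(key)
        elif key in self.mru:
            self.mru.pop(key)
            self.mfu[key] = value  # a rewrite counts as a second touch
        else:
            self.mru[key] = value
        self._evict()

    def _evict(self):
        while len(self.mru) + len(self.mfu) > self.capacity:
            # evict the oldest entry from whichever list is larger
            victim = self.mru if len(self.mru) >= len(self.mfu) else self.mfu
            victim.popitem(last=False)
```

Data read repeatedly survives in the MFU list even while a big one-off scan churns through the MRU list, which is the property the comment describes.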


You can pin files into memory using vmtouch, if you like. Not sure if there is a Windows equivalent.

It would be nice if there was a resident manager that watched for certain processes to be launched and then pinned their directories (or certain files) into memory, then unpinned them when they were shut down.
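A minimal sketch of the pinning half of that idea, in the spirit of vmtouch and assuming a POSIX system (true pinning would additionally need mlock(2) and privileges / a raised RLIMIT_MEMLOCK; this only pre-faults the pages into the page cache):

```python
import mmap
import os

def prefault_file(path):
    """Map a file and ask the kernel to fault its pages into RAM."""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        mm = mmap.mmap(fd, size, prot=mmap.PROT_READ)
    finally:
        os.close(fd)  # the mapping stays valid after the fd is closed
    if hasattr(mm, "madvise") and hasattr(mmap, "MADV_WILLNEED"):
        mm.madvise(mmap.MADV_WILLNEED)  # Python 3.8+: hint to read ahead
    for offset in range(0, size, mmap.PAGESIZE):
        mm[offset]  # touch every page so it actually lands in memory
    return mm  # hold onto this object to keep the mapping alive
```

The hypothetical watcher service would just call something like this when a tracked process launches and drop the mapping when it exits.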


There is already such a service bundled with Windows and enabled by default: Prefetcher (in Windows XP) and SuperFetch (since Vista). It was a bit too aggressive in Vista and caused minutes of HDD churning after booting up.


Last time I checked, prefetch was disabled when Windows detected it was booted from an SSD.


I'm using Windows with an SSD and I've seen the SuperFetch process using the disk aggressively sometimes. I'm not sure it's disabled, at least not completely.


MS says "Windows 8 and 10 automatically disables SuperFetch service":

https://channel9.msdn.com/Shows/The-Defrag-Show/Defrag-Disab...


What's a good spec for a server for a data science group right now?

I'm thinking a shared resource attached to a Hadoop cluster with R & Python workloads across 100s of GBs of data - so offload to Hadoop for embarrassingly parallel work over 20+ TB, but go as quick as you can for large, somewhat-parallel or sequential loads. About 10 users...

We have a couple of GPU servers for DNN's so that's a separate workload for us.


Depends on the "data science".

Start with a fat machine and see how far you can go: https://www.hetzner.com/dedicated-rootserver/matrix-ax

You don't need Hadoop if you crunch through 100GB.


> You don't need Hadoop if you crunch through 100GB.

True that, but Hadoop is for the >100TB stuff, where we need the throughput and cost efficiency for storage (we have multiple >100TB sets and we are not able to afford hundreds of Optanes!). But Hadoop is not good for some of the problems that are culled out of these data sets, and we can't afford Hadoop nodes with lots of RAM and fast disks.


Making up for slow HDD speeds was kind of the major reason Hadoop was invented: if you spread your data around 100 slow disks and then run a MapReduce job that reads from all 100 of them simultaneously, you effectively get 100x the read speed. Plus, with the rise of Spark, which processes data primarily in RAM, you could say disk speeds are not really an issue for Big Data.
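The aggregate-bandwidth effect is easy to see in miniature (a toy sketch: sleeps stand in for slow disks, and the worker count is an arbitrary assumption):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def read_chunk(disk_id, seconds=0.05):
    """Pretend to do one slow sequential read from disk `disk_id`."""
    time.sleep(seconds)
    return disk_id

disks = list(range(20))
start = time.perf_counter()
# Reading 20 chunks from 20 independent "disks" in parallel takes
# roughly the time of one chunk, not 20x one chunk.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(read_chunk, disks))
elapsed = time.perf_counter() - start
print(len(results), elapsed)
```

Serially this would take about 1 second; in parallel it finishes in roughly the time of a single read, which is the HDFS/MapReduce trick scaled down.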


So with such high IOPS, this should be similar to the jump from HDDs to SSDs, right? Finally seeing some real-world boost in apps, as between SATA-2 and M.2 PCIe SSDs the real-world difference was very small; only large file processing got a noticeable boost (when the M.2 didn't get too hot).


Comparing 100 seconds to 1 second makes things look huge (well, they are, if you really would have to wait that long). But comparing 0.1s to 0.001s or 0.01s is a lot less noticeable, even though the percentage difference is the same.


We just have to wait until the next iteration of "What Intel giveth, Microsoft taketh away" and all the current SSDs would feel like super slow drives.


What will that be? Popular VR?

My impression is that Microsoft has been falling down on the job for some time. In the '90s and aughts a computer felt slow after three years, and was nigh unusable after five, even for just word processing and web browsing.

These days? The almost-new MacBook I'm typing this on only comes with 8GB of RAM; that was a decent (but not great) loadout in 2011. And my ThinkPad, the other laptop I use? It's from 2011, also has 8GB of RAM, and runs just fine. As far as I can tell, the big difference between my ancient laptop and my new one is that the new one is way thinner (and has a keyboard that is dramatically more vulnerable to foreign matter).

I mean, I do own an Oculus Rift, and it does very much require a modern computer to run, but as far as I can tell, that's pretty niche. Hardware requirements just aren't going up the way they used to.


While we're on the topic of Samsung EVO and Pro SSDs, does anybody know if there's an 870 series of Samsung SATA SSDs coming soon?


Doubtful, the 860 isn't that old.


I wouldn't call it a consumer SSD; it beats most enterprise SSDs in mixed and QD1 workloads.


I wish something like that existed for laptops. I know there is the 800P, but I would need 250GB.


There are indications that Intel will soon release an M.2 version of the 905P, but it'll be pretty rough on your battery.


I don't mind - I am rarely in a position where I can't connect my laptop to the power source.


Anyone know the speed in relation to DDR3 market RAM?




