Raspberry Pi microSD follow-up, SD Association fools me twice? (jeffgeerling.com)
133 points by geerlingguy on July 25, 2019 | 96 comments



I really can’t understand why SD cards are considered acceptable as a main storage device for a general-purpose computer. Reliability is the primary bottleneck in everything, with IO speed second, and SD cards are the single worst experience I have had in both speed and reliability of any commodity computer hardware in many years.


The reason why rPi uses micro SD is that it's the cheapest possible storage solution. You are not buying an industrial-grade workstation for $35. (For those using the rPi in industrial settings, there is the Compute Module which has eMMC. It adds to the cost, though.)

I am not really a fan of "embedded" devices that have to be shut down properly. You can never assume that you won't lose power, and I think there are a lot of flaky/bricked rPis as a result of SD not handling power loss well. But it's cheap, and you just reimage the thing and go when your SD card gets corrupted.

When I last worked on embedded Linux, our OS image was read-only and the writable storage was literally initialized at boot with "mount /dev/mmc1 /storage || mkfs.ext4 /dev/mmc1". That is how you get a consistent state after power loss. But throwing everything away at boot doesn't fit the Raspberry Pi model of being half-embedded half-workstation. The result is flakiness.
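
Spelled out a little (a sketch; the device and mount point are taken from the comment above, with the re-mount after a fresh mkfs made explicit):

    #!/bin/sh
    # mount the scratch partition; if that fails (blank or corrupt),
    # rebuild it and mount again
    mount /dev/mmc1 /storage || {
        mkfs.ext4 -F /dev/mmc1
        mount /dev/mmc1 /storage
    }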


> I am not really a fan of "embedded" devices that have to be shut down properly.

the problem is the Pi is not an embedded device, per se.

It's running a desktop operating system that has to be shut down properly. This is the main reason SD cards get corrupted - there is an unchecked flow of writes to the SD card. This increases the chance of outstanding unsynchronized writes and SD card burnout.

I've said many times that raspi-config should allow an option for mounting the filesystems read-only, with an overlay ram based filesystem.

You can run openwrt on the pi, and the filesystem is set up this way. I run a pi like this and have never seen a corruption problem.
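
For anyone curious, the same setup can be approximated by hand; a minimal sketch (not what raspi-config or openwrt actually run, and the directory names are placeholders):

    # remount the root filesystem read-only, then overlay a tmpfs
    # on top of a directory that still needs writes
    mount -o remount,ro /
    mount -t tmpfs tmpfs /tmp
    mkdir -p /tmp/upper /tmp/work
    mount -t overlay overlay \
        -o lowerdir=/var,upperdir=/tmp/upper,workdir=/tmp/work /var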


>>I've said many times that raspi-config should allow an option for mounting the filesystems read-only, with an overlay ram based filesystem.

I've had success with this:

http://wiki.psuter.ch/doku.php?id=solve_raspbian_sd_card_cor...


Yes, this was my problem, as well as the actual SD card itself and the SD card slot sometimes going out from frequent rewriting.


> SD not handling power loss well

Is it that SD cards themselves don't "handle power loss well", or is it bad assumptions made by the filesystems chosen to be used on them? Could a filesystem with a specific design avoid SD card corruption?


My limited understanding from reading articles on the topic is that the SD card is doing things like wear leveling in a way that the OS can't control, so if it loses power while it's doing that, your data is gone. It's like a separate computer, and the integrity of your data requires running the program to completion.

It seems like a bad model if you ask me, but performance is what reviewers publish, not reliability under failing power, so this is what we get. It is obviously possible to fully journal these internal operations and recover them when power is next available (resulting in FS-level errors, instead of internal errors). But the cards do not do this.

With eMMC, you just have the raw flash cells available directly, so you can write this code. With SD, you don't have enough control to write correct code, so you are stuck with what the vendor gives you.


Why does this same problem never occur with USB flash drives? Is it just that a USB device is usually large enough to have room for some power capacitors?


I have had 2 USB flash drives fail, one definitely after being unexpectedly disconnected (about the other one I don't know).


USB is also designed so that data disconnects before power.

I'm not sure if that's enough by itself.


Not if the device loses power, right? I'm guessing USB flash drives will have the same internal mechanisms as SD cards and thus suffer from the same problems.


Oh, are you talking about flash drives used as system drives on a Pi? Because in desktop use power rarely goes out and even more rarely has anything been written recently. In the Pi use case I have no idea.


Partial out-of-order writes can happen with an SD card. Most filesystems do not handle partial writes well, particularly when they happen out-of-order. A redundant checksumming filesystem like ZFS might be better, but only because ZFS can tolerate up to N drives per vdev going totally batshit (where N is configurable when setting up the vdev), not because it is designed for SD cards.

Spinning drives can also have misdirected writes, but enterprise drives have for decades now (older than SATA) supported commands for flushing their caches, so proper barriers can be maintained.


Thank you for bringing my attention to the rPi Compute Module, I hadn't known this existed.


That's why I recommend ODROID to people who want to regularly use systems like rPis. Most of their offerings support eMMC add-ons.


If you were to describe and prioritize issues I have with SD cards, "not handling power loss well" would not make my list. Silently dropping writes while fully powered is more like it.


Would a battery acting as UPS help then? One used just to power the system long enough to properly shut down in case of main power failure?


You don't even need a battery; even a 220uF capacitor gives you a couple of milliseconds at USB power levels[0].

0: USB is supposed to draw < 0.5A @ 5VDC, a resistance of >10Ω. 1F*1Ω=1s, so at 10Ω, each 100uF gets you one millisecond. You do need to worry about voltage dropping off over time though.
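
Working through that model, with the load treated as a 10Ω resistor discharging the capacitor (illustrative numbers only):

    tau  = R*C = 10 Ω * 220 uF = 2.2 ms
    V(t) = 5 V * e^(-t/tau), so V(1 ms) = 5 V * e^(-1/2.2) ≈ 3.2 V

So how many of those milliseconds are actually usable depends on how much droop the electronics tolerate.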


eMMC is not at all better than an SD card, though. You really need a real SSD for industrial applications.


It really depends, I think. If you are accessing the eMMC at the lowest hardware level, then you can write OS/filesystem-level code to maintain consistency when power is lost. Meanwhile, an SSD, like the SD card, is doing a bunch of internal operations that the OS can't control, so it's up to the hardware to maintain data integrity. I imagine that SSDs include a capacitor large enough to finish housekeeping, or use integrity-preserving operations internally (journal, etc.), but micro SD cards don't.


There's a lot more to preserving eMMC than just handling power losses.

A lot of what proper SSDs do is the right wear leveling, to make sure that damaged flash blocks don't cause corruption. eMMC is a pretty broad term that can cover anything from what is essentially a soldered-on SD card to something with a full-blown wear-leveling controller.

That's why you still see a lot of eMMC failures in embedded systems when someone forgets not to log to the persistent storage, or runs similarly wear-heavy operations.


Thanks for the tip. I was wondering about an easy way to boot like this. Better than nothing.


Speaking from SSD firmware development experience: the reason is that those devices are so hardware-constrained (in particular SRAM and code space) that the firmware team spends a lot of time being clever to make it all work, under huge time pressure. Couple that with SD cards being a testbed for the newest flash technology, and that is why I don't use SD cards for anything where I care about reliability.


Any alternative suggestions for air-gapped home PC backup?


My primary concern is that those small form factor storage media might not be so reliable for backup. I'd be okay with mSATA, SATA, or NVMe form factors from a reputable manufacturer. I always use something one generation behind or that has been out for at least a year. Let others test the new NAND and firmware.


Boot from a read-only device, such as a DVD-ROM. Copy data through a USB data diode.

That is airgapped, but really expensive.


"USB DataDiode" is a brand name of write blocker, all it does is block write(6,10,12) SCSI-commands

https://www.os3.nl/_media/2015-2016/courses/ccf/ccf_tom_fran...


Is it possible to have a unidirectional USB connection? USB is strictly master-slave.


Yes, just like how it is possible to firewall a TCP connection.

https://www.cru-inc.com/products/wiebetech/usb-datadiode/


no, OP meant filtering write commands


Get a USB sata drive caddy and mount before backing up, unmount afterwards.

Swap the drive.


How do you airgap a backup?


One way is to have a pool of two or three removable flash devices, with only one plugged in at a time, the others in safe places elsewhere. Automate nightly date-stamped copy to the card of just your essential data. At some interval that strikes a balance between convenience and safety, cycle the cards. You'll also need a scheme for pruning older backups to recover storage.
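
A sketch of the nightly job (paths and the retention count are assumptions):

    #!/bin/sh
    # date-stamped copy of essential data to whichever card is plugged in
    DEST=/mnt/backup/$(date +%Y-%m-%d)
    mkdir -p "$DEST"
    rsync -a /home/me/essential/ "$DEST/"
    # prune: keep only the 30 newest snapshots
    ls -1d /mnt/backup/*/ | sort | head -n -30 | xargs -r rm -rf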


In that case, understand that SSDs these days don't have great retention specs when not powered. It's one of the drawbacks of using smaller transistors and multi-level-cell technology to increase capacity. When powered up, they can do a background scan to scrub the data, though.


> understand that SSDs these days don't have great retention specs when not powered.

The standard for consumer-grade SSDs is retention of at least a year after the drive is worn out. When the drive has been used only lightly (which is likely to be the case for a backup drive), retention will generally be several times longer. Flash memory is still not a good choice as an archival medium, but it's not like your bits are going to fade away as quickly as a rechargeable battery.


So what is it that makes SD cards unreliable for use-cases such as this?

Is it just the limited amount of flash available for remapping? If so, can one use the same trick as with regular SSDs and simply leave some space unpartitioned to improve reliability?
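
(With SSDs, the trick is just leaving part of the device unpartitioned so the controller has extra spare area; hypothetically, something like:

    parted -s /dev/mmcblk0 mklabel msdos
    parted -s /dev/mmcblk0 mkpart primary ext4 0% 90%

assuming the card's controller can actually make use of the untouched 10%.)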


Not OP, but in my experience it's the broad range of SD card quality on the market, and the sketchy/'temporary' physical connection between the gold contacts on the card and the gold contacts in the card reader. In this case, the reliability issues show up as the storage device sporadically dropping out entirely during reads and/or writes, or as dramatically varying speeds, sometimes as low as several KB/s.


Size, Cost, Cross Platform Support and Availability.


Those are very appealing features, for sure. My point is specifically that SD cards aren't actually admissible as a main storage device for a general-purpose computer; no cost-benefit ratio makes them admissible, because they are broken. Like a glass that is big, cheap, holds any liquid, and is right next to you: it isn't admissible if it has a hole in the bottom.

There’s another shadow factor. Probably that they are perceived as the only available thing?

Probably also it is stupid easy for a hardware designer / implementer to tack on an SD card interface.


uSD cards are not broken. You just have to not buy from unknown manufacturers that put some third-party parts together and call it done. And if you use a quality power supply and wiring, to avoid brownouts, SD cards can be quite reliable. Choosing an appropriate filesystem may also help (like f2fs).
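
For example (a sketch; the device name is an assumption):

    mkfs.f2fs -l rootfs /dev/mmcblk0p2
    mount -t f2fs -o noatime /dev/mmcblk0p2 /mnt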

I have several multigb PostgreSQL databases (with checksumming enabled) running from uSD cards for years, scraping data from the web hourly.

I find uSD cards quite reliable, actually (for rootfs use), in contrast to what's being said online all the time.

I also like that it's removable. If it breaks you just pop in another $4 32GB card, restore, and continue where you left off. Compared to soldered on eMMC, that's much more user friendly.


I did all the correct things and encountered all the failures described online :)

It’s heartening to hear that they’re really, actually workable.


In a 5 - 55 dollar device, they make a lot of sense. If your application is such that SD isn't good enough, the rpi itself probably isn't either. Both are great for the price to capability ratio. Of course they won't last forever, but you don't complain about not getting ceramic mugs at the local takeout coffee joint, either.


I wish rPI could boot from network


The RPi 3 and late revisions of the RPi 2 support PXE on the built-in Ethernet adapter: https://www.raspberrypi.org/documentation/hardware/raspberry...

The RPi 4 has a boot process with a programmable boot ROM that should allow for a more "real" bootloader like uboot in the long term, but currently PXE isn't supported.


Command queuing is such a dangerous thing.

This is the storage equivalent of 'buffer bloat'[1], but with disk data. The question you have to ask is what happens when you lose power and there are 'n' writes queued to the storage device. How has the kernel marked that data? Historically, once the disk accepted the write and returned, kernels would say "ok, data on disk", buffer clean now.

Of course if you lost power and that write never actually completed, well you now had file system corruption (possibly silent if it was just a data block).

Another problem that cropped up is "ok, we can re-use this data buffer", so it gets filled with different data, and then the storage device says "oh hey, send me that data you told me to write ..." whoops.

The third problem that crops up is that the computer on the storage device crashes (or watchdog resets) and comes back not remembering what commands it had in the queue. So some time later your kernel had better ask "hey, why haven't you acknowledged this write we sent 50ms ago?", have the disk say "What write?", and replay it.

One school of thought was "Command queuing in the storage device is always bad; the kernel knows more, has more memory, and is ultimately responsible for what is and what is not on disk." So the kernel would 'hold' writes to the disk until the write was sufficiently aged or until enough writes had accumulated to do a "streaming" write (multiple sectors on the same track). But for non-spinning media there is no seek time, so there is no advantage. Except that for flash you have to erase a page, which can be a lot of data that then has to be rewritten along with the new stuff.

I enjoyed the writeup but it really needs to start with a highly fragmented SD card (one with lots of random I/Os to it) so that it can measure the latency hit for page rewrites.

[1] https://www.bufferbloat.net/projects/bloat/wiki/Introduction...


PC drives have had write caches and the necessary barrier/flush commands for many years. This is nothing new or dangerous.

Additionally, operating systems play exactly the same "dangerous" game with writes: they keep dirty pages in memory and flush them out gradually. You are only guaranteed that things actually got to disk when you fsync them, which sets off a cascade of writes and flushes if they didn't already happen in the background.

Write performance is largely a very well-maintained illusion if you don't have something like Optane. If you open files in SYNC mode write performance craters, even on NVMe.
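
You can see the illusion with GNU dd (a sketch; absolute numbers vary wildly by device):

    # flush once at the end vs. O_SYNC on every block -- the second is far slower
    dd if=/dev/zero of=test.bin bs=4k count=10000 conv=fsync
    dd if=/dev/zero of=test.bin bs=4k count=10000 oflag=sync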

What filesystems do to maintain sanity among this madness is to order writes so that if a crash occurs you lose a few seconds of data, but you don't lose consistency, you just get an older state of whatever made it to disk.

SD cards are not special in this regard.


>The question you have to ask is what happens when you lose power and there are 'n' writes queued to the storage device. How has the kernel marked that data? Historically, once the disk accepted the write and returned, kernels would say "ok, data on disk", buffer clean now.

Is there a reason why storage devices can't accept the write, and then notify the kernel when the write is complete? I'm fairly certain that's how it works right now.

>The third problem that crops up is that the computer on the storage device crashes (or watchdog resets) and comes back not remembering what commands it had in the queue. So some time later your kernel had better ask "hey, why haven't you acknowledged this write we sent 50ms ago?", have the disk say "What write?", and replay it.

Don't you run into the same issue if you were doing writes and the user yanked out the power? What's written after the last fsync is pretty much undefined behavior and it's up to the application/filesystem's journaling mechanism to deal with it.

>One school of thought was "Command queuing in the storage device is always bad; the kernel knows more, has more memory, and is ultimately responsible for what is and what is not on disk." So the kernel would 'hold' writes to the disk until the write was sufficiently aged or until enough writes had accumulated to do a "streaming" write (multiple sectors on the same track). But for non-spinning media there is no seek time, so there is no advantage. Except that for flash you have to erase a page, which can be a lot of data that then has to be rewritten along with the new stuff.

So command queuing is bad except on hard disks and flash. Then what else is left? Those two cover 99.99% of consumer storage.


Additionally, SSDs use garbage-collection algorithms, so they are fraught with the possibility of a write taking an unbounded amount of time even when nothing is malfunctioning, which is pretty scary.

To head off replies about fixed-time garbage collection: sure, it exists, but you can't guarantee it will reclaim enough space in a fixed time slot (think of a massively pathological case where 99% of every flash chip is full, but super fragmented, and data needs to be shuffled around perhaps hundreds or even thousands of times before enough space can be reclaimed).


> To head off replies about fixed-time garbage collection: sure, it exists, but you can't guarantee it will reclaim enough space in a fixed time slot (think of a massively pathological case where 99% of every flash chip is full, but super fragmented, and data needs to be shuffled around perhaps hundreds or even thousands of times before enough space can be reclaimed)

That's why you have spare space. It's not overly hard to clamp write amplification to a number like 10x or 20x, even in the absolute worst case.

Note that the SD standard here requires a minimum number of writes per second inside a 256MB area, and even 1% spare space on a 32GB card would give you a >256MB buffer.


> (and this has become a bigger problem every generation of Pi—you need a good power supply or you'll have a lot of annoying problems)

I wish the pi just had a 12v barrel jack.

edit: it first dawned on me that the pi power situation could be improved when I read this article years ago:

http://www.bitwizard.nl/wiki/Reducing_power_consumption_of_a...


That's one of the things that disappoints me most about the Pi, to be honest. Not the power situation, but the "no way to remix it" situation.

While it's not quite as beefy as the Pi (especially the Pi 4), nor as cheap, the Beaglebone family of boards is awesome. They come with schematics, and you can order the processor in qty 1 from Digikey. I haven't remixed one yet (a 400-BGA is going to be stretching my abilities...), but I have leaned on the schematic quite a bit to figure out some clever things I can do with it.


When the Pi first came out I talked with a hardware engineer about it and he said they made some weird choices.

One thing he pointed out to me was the pin design. Everybody else would design it with female pins so that you won't accidentally bend the pins or short them out.

He loved the beaglebone though and recommended it.

That said, I think the pi has a bigger community and more software. (I might be wrong)


> That said, I think the pi has a bigger community and more software. (I might be wrong)

Most definitely. From an "ease of getting started" perspective, the Pi is absolutely amazing. NOOBS, or whatever the distribution is called, is pretty painless to get going. You flash it to the SD card, plug in a monitor and keyboard+mouse, and you boot straight into a familiar GUI with tons of tools pre-installed. Plus, the $35 price point (well, more once you include SD card, power supply, etc) is pretty hard to argue with.

Debian IoT (the "standard" distro for the BeagleBone) is cool in its own right, but definitely not as approachable for someone who's new to embedded Linux. Things like GPIO mappings need to get tweaked sometimes, you're dinking around with device trees here and there, etc. For the stuff I've been working on lately (robotics), the BeagleBone Blue has been an absolute godsend. Tons of ports that speak different protocols (UART, I2C, PPM for driving servos, GPIOs, etc), and librobotcontrol is really straightforward to start building with.

I guess a great way to distinguish between them: I pretty much always use a keyboard and mouse to get a Pi set up. I don't even know if the BBBlue has video output onboard, because I've never looked or tried. On boot it exposes itself as a network device over USB; you can SSH into it and start doing what you're trying to do.


Even apart from the issues around A1 and A2, the read speeds on SD cards are just too slow, and will probably always be too slow.

What the RPI needs is a simple M.2 slot for a "real" drive; those are getting cheaper and cheaper.


I think the goal is easy access to the storage. How many people have an M.2 dock or adapter ready for writing the Raspbian image? A memory card reader is much more common.


It seems like it would be fairly simple to give people a minimal USB image that then installs to the M.2.


The Beaglebone Blue has an onboard eMMC that comes blank from the factory. You can:

a) write an OS image onto an SD card and just run off that if you want

b) set a flag in /boot. Upon reboot, the LEDs go into Cylon/Knight Rider mode while it copies the OS image from the SD card onto the eMMC. When they're done, pop out the SD card and reboot. Done!
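
If I remember right, the "flag" is a commented-out line in /boot/uEnv.txt on the stock BeagleBone Debian images (verify the exact path against your image):

    ##enable BBB: eMMC Flasher:
    cmdline=init=/opt/scripts/tools/eMMC/init-eMMC-flasher-v3.sh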


USB live "cd" that installs itself to M.2 ?


I think that will start to present a power problem. They would probably be forced to go to some kind of power supply larger than a typical usb power brick.


You should mention very clearly which Raspberry Pi board you're using. I'm not sure if CPU throttling can affect I/O, but if it can, then it might also matter whether you use a good case, active cooling, or a heatsink.

Also, which OS version did you use? What Linux kernel version?

I'm more interested in longevity tests and industrial grade microSD card performance.


Raspberry Pi 4, latest Raspbian Lite image (2019-07-10, kernel 4.19), with apt-get update and upgrade run prior to the test, followed by a reboot; official case with a fan modded in [1] to ensure it does not get anywhere near throttling.

[1] https://www.jeffgeerling.com/blog/2019/raspberry-pi-4-needs-...

And everyone asks for longevity tests and industrial grade performance, but I always answer the same: I have had four Pis running almost continuously (99.9% uptime, occasional reboot for updates) for 3+ years, and have used the same set of 6 Evo+ cards for 3+ years now, and have never had any problems with corruption in any of those cards.

I also run a Kubernetes cluster with 4 Pis (see www.pidramble.com) from time to time (right now it's running), and I have never had a corruption issue either.

The main thing I attribute this to is the fact that I'm using reliable, quality power supplies (either Pi Foundation official supply or some other name brand or good PoE power supplies that support at least 2A).



Reading about SD cards reminds me of an idea I had. Is it possible at the moment to make a WiFi or 4G/5G transmitter small enough to fit into a chip within an SD card (or instead to bind to the card's connectors), to automatically upload the card's contents to a hub or router that can sync with the cloud? So that you could effectively have an unlimited-capacity SD card. Anyone working on this?



Love this: New (2) from $5,000.00 & FREE shipping.

Personally, I used an Eye-Fi card when I needed wifi.


What do you expect? It's a discontinued, niche product that probably didn't sell a lot to begin with, so aftermarket prices are going to be insanely high.


Looks like there are several competing options from other OEMs in the $30-$100 range. Toshiba FlashAir to name one.


The Toshiba card is identical inside.


Awesome post, thank you!


This has actually been around for many years: http://eyefi.com/


The only difficulty being how the thing is managed and such; I believe for the EyeFi at least, the camera has to have some level of support for the card so it doesn't power it off in the middle of a transmission.

Also, microSD's form factor is probably too small to effectively house a reliable antenna, much less the additional WiFi chip. SD is probably the smallest feasible form factor today, and CFExpress and other cards are larger and might afford a little more space.


IIRC this requires SDIO. Back in the day when I was into this (around the time of the Nokia N810) this didn't work (well) on Linux.


Isn't SDIO for when you want the host to get access to the resource?

These cards work by inspecting the FAT32 filesystem on the card and separately connecting to WiFi to upload the files on their own (so they work with any typical point-and-shoot or DSLR camera).


Wouldn't the read latency be insane? A quick search says the Q1D1 IOPS for SD cards is between 1000 and 2000. That's 0.5 to 1 ms of latency. A wifi/4g uplink's latency would be orders of magnitude higher, making it unusable.


You could also use some FUSE filesystem like sshfs to do the same thing in software.
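
e.g. something like (host and paths are placeholders):

    sshfs user@host:/remote/photos /mnt/photos -o reconnect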


A lot of the remote FUSE filesystems actually cache to disk and upload asynchronously to avoid noticeable latency.


NFS will give you better performance if you've got a trusted LAN.


People expecting a $35 childhood educational computer to perform like a mission critical embedded device get what they should expect from that.

Having said that, what is so hard about running regular backup images, and simply burning a new one every time the current SD card inevitably fails? That's been working for me for years. Can't reboot... burn a new SD card from the backup. Reboot. All the super-timely stuff is on github, so at most, I might lose an hour or so every 6 months to a year. And anything that isn't altered often just has a rolling updated image for me to dd if needed. It's rare enough I just don't care about it.

I use rpis a lot, and since I work with MPI, usually there are quite a few running at all times. I just back up regularly and don't get alarmed that they don't last like a desktop at 100 times the price would. Of all the SD cards and actual rpis that have died, I haven't come close to that price, and have lost nearly nothing for all that I've gained in the experience.
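
The whole backup/burn cycle is just dd; a sketch, with device names as assumptions (double-check of= before running):

    # back up the card (from another machine with the card in a reader)
    dd if=/dev/sdX bs=4M status=progress | gzip > pi-$(date +%F).img.gz
    # burn a replacement card from the backup
    gunzip -c pi-2019-07-25.img.gz | dd of=/dev/sdX bs=4M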


I read here on HN about someone who had an SD card show old data. He reinstalled the OS, only to reboot into his old OS. The same thing happened to me the next week with a camera: the SD card showed old pictures from before a reformat. Using these things now is super scary. I wondered why some cameras had dual SD slots; now I know why.


Happened to me too. It's a good failure mode, IMO. :) If the card runs out of total write capacity, it turns read-only. It might be better for it to explicitly reject writes instead of silently ignoring them, but whatever.


The rpi is nice, but I'd prefer a CPU 10x slower if it had onboard memory to store the OS. The rpi's CPU is way too powerful.

The only one I would buy would be the RPI zero.

Having something reliable can matter.


The description of A2 standard features sounds very similar to the new Linux multi-queue block device framework.

From a quick look at the latest Linux source code, there are a couple of mentions of "blk_mq" in drivers/mmc, but there seems to be no actual support for multiple queues (unlike, e.g., in drivers/nvme).

Does anyone know if those are supposed to be the same thing? Does the SD card actually need new firmware to support multiple queues, or is it something that can be implemented on the kernel side in software only? At the very least, it might be possible to use a third-party reader, as long as its kernel drivers are updated to take advantage of multiple command queues.


> The description of A2 standard features sounds very similar to the new Linux multi-queue block device framework.

Unless I missed something, it's not that fancy. It's just one queue, not multiple queues -- the only new thing is that the length of the queue is more than 1. A multi-queue device is a device with more than one independent queue. This is useful for very high performance devices (NVMe, for example) that can allow multiple CPUs to submit requests at once without interfering with each other. If there's only a single queue, then all CPUs trying to submit need to contend for access to the queue.


This is like people putting premium gas in cars that ask for regular.

Read your specs and use what is recommended, not the most expensive option. It's just engineering 101.


At this moment in time 0 cars can utilize this "premium gas".

This isn't communicated well: the author points out that it's marketed as better, but you have to dig into the technical details to find out that the host must support it as well.


I think most people would be surprised if premium gas actually made their cars run worse; it's hardly common sense.


I don't think it has been well communicated that you need to do anything special to benefit from an A2 card.


Isn’t the point of these articles to determine what is recommended, by benchmarking the cards?


tl;dr: I tested an A2 'Application Performance' class card and found that it was half as fast as it should be, based on the SD Association specs (minimum 4000 IOPS 4K random read, 2000 IOPS write).

Then some people dug deeper and found in the specs that 'Command Queue and Cache functions' needed to be supported in device firmware and/or on the kernel level for A2 specs to be reached—though I haven't yet found a way for any consumer to get their hands on devices or software with this support... so no way to achieve claimed A2 performance.

So then I bought an A1 card from the same manufacturer (SanDisk), and it is actually much faster than the A2 card (though for price/performance you're still better off buying the not-Application-Performance-class-rated Samsung Evo+ card).

And in the end, being able to use an SSD or mSATA drive would be even better (the former of which is semi-possible today with the Pi 4 and USB 3.0—though you can't fully boot without a microSD card yet).
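
For reference, the class of test behind those 4K random-I/O numbers looks something like this with fio (a sketch, not necessarily the exact invocation used in the post):

    fio --name=randread --filename=/dev/mmcblk0 --readonly \
        --rw=randread --bs=4k --iodepth=1 --direct=1 \
        --runtime=60 --time_based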


An A2 card on a host device that doesn't support Command Queue or Cache should be the same from a protocol standpoint as an A1 card. Differences probably come down to firmware issues at the moment in the A2 cards' internal controllers.


One would hope :(


> you can't fully boot without a microSD card yet

Not true! The Pi supports several different USB and network boot modes.

https://www.raspberrypi.org/documentation/hardware/raspberry...
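
On the Pi 3, for example, USB boot is enabled by a one-time OTP bit set via config.txt (per those docs; note it can't be un-set):

    # add to /boot/config.txt and boot once; the line can be removed afterwards
    program_usb_boot_mode=1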

This documentation doesn't cover the Pi 4, but USB/network boot is available there too, cf.

https://www.raspberrypi.org/documentation/hardware/raspberry...


As your parent says, the Pi 4 doesn't support USB or network booting yet. From your link:

> Support for these additional bootmodes will be added in the future via optional bootloader updates

I believe they need to write (bootloader) drivers for the PCIe host and for the new USB3 chip before USB boot can be supported by the bootloader, while netboot requires a driver for the new gigabit ethernet core.


You can use a microSD card for the bootloader and boot from a USB 3 SSD on the rpi4.

https://jamesachambers.com/raspberry-pi-4-usb-boot-config-gu...
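
The gist of that guide, as I understand it (a sketch; partition names are assumptions):

    # /boot stays on the SD card; point the kernel at the SSD in /boot/cmdline.txt:
    root=/dev/sda1 rootfstype=ext4 rootwait
    # ...then fix up /etc/fstab on the SSD so / mounts from /dev/sda1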


That's not _quite_ booting _without_ an SD card though ;)


> This documentation doesn't cover the Pi 4, but USB/network boot is available there too, cf.

No, they're not. Network booting is a couple of weeks out and USB over a month. It's a topic of discussion on the Pi forums.



