Raspberry Pi 4 V3DV graphics driver achieves Vulkan 1.0 conformance (cnx-software.com)
319 points by pfrog on Nov 25, 2020 | 103 comments


This is what sets the Raspberry Pi apart from all other SBCs in my experience: software support. Kudos, this is fantastic stuff! Would love to see the ecosystem that springs up to take advantage of this.


It's important to note that competitive x86 SBCs do exist, and have the typical, full x86 support. Of course, the downside is that the price is higher (around twice as much for a full system).

A downside of ARM SBCs is that they pretty much all have an expiry date. Due to their closed nature, when the community pulls the plug, they're gone (SW-wise). x86 boards last virtually forever. While ARM SBCs are somewhat compatible with standard Linux distros, in order to have full support, one needs to use the ad hoc ARM distros.


There's quite a bit more to it than that, though.

First, the "closed nature" thing is just as true of x86 as it is of ARM; you don't get full specs for any machine these days, Broadcom or not. Rather, from the perspective of "Linux developers who write code on and deploy to Linux" — or maybe "Linux Desktop Users" — the difference is in peripheral discovery and setup: x86 has ACPI and UEFI to dynamically discover and configure devices on every boot, while ARM boards use a device tree, which is static and requires up-front, BSP-specific descriptions.
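To illustrate the static nature of device tree: peripherals are described up front in a board-specific .dts file rather than discovered at boot. A hypothetical fragment (the node and pin-group names here are made up for illustration):

```dts
/* Hypothetical board .dts fragment: enable a UART and bind its pins.
 * Nothing here is discovered at runtime - the kernel trusts
 * whatever this file says about the hardware. */
&uart0 {
    status = "okay";
    pinctrl-names = "default";
    pinctrl-0 = <&uart0_pins>;
};
```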

Userspace binaries for AArch64 Linux will work across most devices. A RPi4 running Ubuntu and a RockPro64 running Ubuntu will use the same binaries just fine. The problem is when vendors have kernel patches or workarounds that make booting harder. But it's not like literally all software stops working on a specific date. If you have the kernel and dtb, you can make it work and keep your system booting. It's definitely less than ideal, if you think of it like a desktop, where timely updates are common and easy. But it's doable. And more devices seem to be moving towards an upstream-first model, so this seems like less of a problem anyway. I'll also note it's not actually hard to package a device tree blob with a userspace+kernel, it's just that most "desktop" distros make spinning custom variants unreasonably hard, IMO, so you need Buildroot/Yocto or whatever to build custom images. And again, "Linux users" like, say, most people reading this comment, don't actually want Yocto — they want Arch or Debian or whatever. A bit chicken and egg.

Furthermore, the RPi4 also supports ACPI and UEFI (upstream TianoCore), which is continuously improving, so it can support generic Linux distros. I have a "generic" Fedora 33 install working on my RPi4, with UEFI boot, all from a USB3 stick. It even installed from an ISO, and I configured the install using Anaconda, identically to x86. The Pi4 is also committed to remain in production until at least 2026, and almost the entire software stack is completely upstreamed now, including kernel, graphics drivers, and peripherals. So from the perspective of someone who treats the RPi as a kind of "mini Linux Desktop", most of your complaints don't really apply at all. But...

---

The actual biggest problem with the lack of specs/dtb shit/"closed stuff" isn't when you want to treat it like a normal Linux machine with keyboard/USB/Ethernet. That's easy and works today. You can get most of the desktop Raspberry Pi experience, including generic distro boot, today! It's when you want to treat it like an actual "embedded device", where you write custom drivers or poke GPIOs or use hardware features, or whatever. That's where the closedness sucks, but in that case, even if it sucks or there are no specs, there are normally no realistic x86 alternatives.

Sure, you can buy a $100 x86 device in an RPi form factor with RAM and plug a keyboard into it, and it's OK if it uses 2x as much power at load, if you just want a mini desktop that is cool. But where's the $20 x86 device that has a shitload of I/Os I can use to interface with various peripherals of my choosing, with all the accompanying good stuff like good DAC/ADCs, GPIOs, camera support, eMMC? And where can I buy them? Where's the $5 RPi Zero alternative? Because the ODROID-H2+ is $120 USD, and doesn't even have GPIOs!


With Pis, that last point is at least somewhat mitigated. The Foundation has committed to supporting specific hardware for several years.


Heck, they still support the original Pis and A+; I just installed the latest version of Raspberry Pi OS on my A+ with 128 MB of RAM, and while it is slow, it works just the same.


It helps that the pi zero hasn't been superseded yet, and probably won't be for a long time. (They've surprised me, heck, everybody, before though)


The problem with the Pi Zero w/o WiFi or the $10 Pi Zero w/WiFi is that you don't seem to be able to buy them in quantity. They are simply not 'available' for use in any sort of 'product', it seems.


Which x86 boards would you recommend?


If you're looking for something in the Raspberry Pi's price range, the Rock Pi X is only $20 more than the equivalent 4GB Pi4.

It has 32GB of eMMC onboard too, so unlike the Pi4 there's no need to deal with external USB storage or flaky SD cards.


I've used UP boards at a previous job for resource constrained systems that needed to be x86 and they're actually quite nice: https://up-board.org


odroid h2+


> While they're somewhat compatible with standard Linux distro, in order to have full support, one needs to use the adhoc ARM distros.

Maybe you mean something specific by "adhoc" or "full support" that isn't apparent to me, but Ubuntu has ARM distros today and I expect that ARM support is only going to get better as (1) ARM continues to make headway in the server space and (2) Apple Silicon pressures desktop/laptop OEMs to adopt ARM.


Just because Ubuntu has "ARM distros" doesn't mean those "ARM distros" have the necessary drivers to actually use all features of your SBC. The fact that Vulkan support on the Raspberry Pi landed in 2020 should tell you something.

Even the most well-known SBC series is getting driver support at a glacial pace. When was the last time you waited for your x86 iGPU to get Vulkan drivers?


And yet one of the reasons it achieved such great software support is that it had so much success, despite the non-standard hardware (armv6 at launch, broadcom, no EFI boot, broadcom, no GIC, broadcom, no armv8, broadcom, gpu boots before CPU, etc.)


I think you are missing broadcom on that list. But for real, why is Broadcom so bad?


Broadcom is notorious for hiding everything behind very restrictive NDAs. You want a CPU from them? Don't bother contacting them unless you plan on buying six figures' worth, is the common sentiment on HN.

On the other hand, there aren't many competitors that are better in terms of accessibility. Sigh. Implementing embedded devices with any sort of "smarts" beyond some Atmel uC from scratch is a pain - one has to go with ready-made modules (ESPxx, Raspberry Pi Compute Module, COM Express) to not go insane.

And it's not just the datasheets where you will run into issues (at least you can usually find those somewhere, pirated, to get started). Design guides, layout rules and certifications are a way bigger pain point... looking at you, Thunderbolt.

And when you finally have your first PCB version ready, you'll soon find out that any components more intelligent than a couple of transistors sometimes have ridiculous minimum order quantities. Let's take an SI2494 56k modem chip... $45 apiece at a MOQ of 43 - which means if you want one or two, you'll have to shell out about $2k! And to add insult to injury, it's apparently still profitable to sell them for under $9 at high quantities. This rip-off is the reason why many hobbyist electronics projects are restricted to using dumbass components.
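For what it's worth, the MOQ math quoted above checks out:

```python
# MOQ math for the SI2494 example: $45 apiece, minimum order of 43 units.
price_each_usd = 45
moq = 43
total_usd = price_each_usd * moq
print(total_usd)  # 1935, i.e. "about $2k" for parts you wanted one or two of
```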

edit: I totally forgot about the software side of embedded SoCs. Basically, to boot Linux or anything else on a SoC you need firmware blobs (e.g. for the GPU or wireless radio parts) and lots of custom code (e.g. to bootstrap clocks, memory, IO). Usually the SoC vendors have an ancient fork of the Linux kernel and U-Boot which, with each new chipset, accumulates ever more custom cruft and (often shoddily) backported stuff from newer kernels. This conglomerate is called a "board support package" or BSP - and despite consisting of open source code, it is often guarded as heavily by NDAs as the datasheets.

Sometimes, especially for MediaTek stuff, these BSPs get leaked on GitHub... and it's always a wild ride looking into them. It's no surprise that upstream Linux doesn't have support for a given CPU... because the quality of the code is more often than not so hardcore rotten that it's a wonder you don't hear every day about some compromised IoT device.


Thanks for the detailed answer. That is indeed meh. I suppose when RPi started and the quantity they ordered it still made sense.

Hope that more open hardware will come out over time.


We need something to drive that change though.


The way to change this is either regulation or market pressure.

Regulation is out of the question, both in the US and the EU. While there may be some hard-won progress in the "right to repair" fights recently, the needs of hobbyist electronics people aren't on any politician's radar and I doubt that will change soon (even if easier access to high-quality electronics could unleash so much potential for innovation!).

Market pressure is out of the question too, simply because the volumes that us hobbyists order are too small. For what it's worth, I'd pay a couple hundred bucks for a 1:1 Q&A/consulting session with an expert from a semiconductor or other company for my hobby projects, and I'd also accept a reasonable "small volume handling/shipping fee" for getting my hands on ten chips... but I'm in the utter minority of hobbyists who can actually afford that for a project that won't bring any profit. Corporate entities who are going to sell thousands or orders of magnitude more units of some random gizmo can afford to put up six figures upfront anyway... so they have no incentive either to pressure vendors into providing better service.


Is in-house CPU printing a (future) possibility? I don't know a lot about manufacturing, but is an etching machine a possibility for making custom boards or chips? Even if it's not up to today's tech.


Not with current fab / material processes. Silicon Valley hosts the most Superfund sites in the US for a reason - there's an awful lot of really nasty chemicals involved in semiconductor manufacturing, some of which can probably be used to make drugs or explosives (so subject to various control laws), and the feature sizes are so small that the fabs require expensive air filters...

Custom boards are a thing already; you can go up to three layers in an amateur/hobbyist setting AFAIK, but that won't help you much when dealing with high-frequency stuff or very fine pitches, as the thickness and other parameters will vary across the board. Many countries already have somewhat cheap-ish "rapid prototype PCB" shops, and some even do assembly for you. That's good enough even for most HF stuff.


Are there any group buys that could be set up? Surely there are at least 43 people in the world who want to buy that part but don't want to buy the full MOQ.


I'd be glad if the big shops (Mouser, Digikey, Conrad) could set up some pool solution... everything one can do as a private person opens up a hellhole of liabilities: international shipping (and tracking), return rights, warranties, import/export tariffs, import/export arms control (GPS receivers come to my mind), CE/RoHS compliance, international taxes (VAT, sales taxes, the US with their mind-boggling state/county/city additional taxes), supply chain integrity (people don't want their "pool buy" for 20 bucks turn out a counterfeit, and it's hard enough for big companies to secure that one).


It's almost like companies value their employees' time and don't want them dealing with non-profitable sales to hobbyists or buying small quantities to poke around and copy IP.


> don't want them dealing with non-profitable sales to hobbyists

Have said hobbyists pay an appropriate amount of money. If an expert of TI can help me get started I'd find something like 100-200€/h reasonable.

> or buying small quantities to poke around and copy IP

Anyone wanting to steal IP for profit can already do so, most datasheets are available on pirate sites.


It would cost more than that.


200€/h results in a gross yearly income of ~380k €. That should be way more than enough, even accounting for overhead costs.
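A quick sketch of where that figure comes from (the billable-hours number is my assumption, not stated in the comment):

```python
# Rough gross-income check for the quoted rate. 1900 billable hours/year
# is an assumed figure (~40 h/week minus holidays and downtime).
rate_eur_per_hour = 200
billable_hours_per_year = 1900
gross_yearly_eur = rate_eur_per_hour * billable_hours_per_year
print(gross_yearly_eur)  # 380000, i.e. "~380k €"
```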


Overhead is typically 40-50% for this kind of corporation. So not really enough.

However, the calculation has nothing to do with guessing a reasonable hourly rate.

The issue is whether someone with the expertise to do this would benefit the company and their own career more doing something other than a tech support role.


> Overhead is typically 40-50% for this kind of corporation. So not really enough.

Even assuming 50% overhead something just short of 200k remains. I agree, for the US it may be on the lower end of the scale (given the ridiculous costs of housing and healthcare), but Europe or Asia? Way more than enough.

> The issue is whether someone with the expertise to do this would benefit the company and their own career more doing something other than a tech support role.

At least in my experience it is definitely good for people to spend time directly with customers. I agree it may not be worthwhile for a chip developer or a documentation writer to spend their full time on doing support - but a set amount of time, say four or eight hours a week? That's direct, unfiltered input from the customers on where the documentation is either missing, unclear or buggy.

Generic questions ("which combination of chips to choose if I have a USB-C female receptacle and want to provide bidirectional PD, bidirectional DisplayPort and USB 3.2 over it") / 1:1 tutoring like "how do I best route a PCIe differential lane pair", "how to properly calculate trace widths for said lanes given fab house X's stackup specifications" can be directed to less-specialist/sales staff or to a partner PCB design house.


“At least in my experience it is definitely good for people to spend time directly with customers. I agree it may not be worthwile for a chip developer or a documentation writer to spend their full time on doing support - but a set amount of time, say four or eight hours a week? That's direct, unfiltered input from the customers where the documentation is either missing, unclear or buggy.”

I agree that direct experience with customers is good for many engineers. It is definitely not good for all engineers.

However there is a huge difference between professional customers and hobbyists.

You are now talking about a multi-tiered support organization.

You personally may be willing to spend 200 euros per hour for access to such an organization.

The question is, are there enough customers like you to justify millions of Euros in investment to build such an organization.

Starting from the hourly rate you personally think engineers should be paid gives exactly zero information about the demand, and therefore zero information about whether such a rate is meaningful.


For the first year(s?), the Broadcom RPi components (GPU) were closed source (an opaque binary blob) and needed RE work. For hardware supposed to be "open to hack", it was considered nonsense. I suspect things have changed regarding the RPi situation.


Is there an "open" SBC by your definition?

Intel has ME/AMT, and requires an opaque, PK-signed-and-only-intel-and-NSA-have-the-key blob. AMD is the same with PSP.

I'm not up to date, there might be a RISC-V core available today that is blob free, but in 2012 when RasPi was introduced I wasn't aware of anything blobless.


RISC-V is the best example of open hardware, but it also has a lot to do with the software. Take the GPU space, for example - NVIDIA has closed source blobs, but AMD has open source drivers which makes it easier to tinker with.


Freescale's i.MX SoCs are fully open. No blobs at all. Check them out. They are sweet. Boards exist.


i.MX8 requires blobs for HDMI. The controller will only load an NXP signed blob. No blob, no HDMI (or displayport).

https://forums.puri.sm/t/the-i-mx8-cannot-be-deblobbed-nxp-s...


But it will boot and run. It has other display outputs you may use.


That's definitely better.

But why do you trust that the silicon doesn't have any backdoors more than you trust the blob? We know for a fact that Intel puts one into the silicon, and that it takes significant reverse engineering to turn most of it off (it's not clear if all of it can be turned off at all) see e.g. https://hackaday.com/2020/06/16/disable-intels-backdoor-on-m...


I wish there was a multi CPU RPI board with M2 option. Sort of like this one:

https://www.solid-run.com/nxp-lx2160a-family/clearfog-cx-lx2...


we're pretty much there. The new pi 4 compute module breaks out the pcie bus. The official pi4 compute base board routes that to a standard pcie x1 slot instead of USB 3.0 which is a massive win in my book (hello more ethernet/sata/fpga/nvme/etc). It's a baby Arm mobo with pci and I'm a little excited :-)

I have a pi4 compute + base board on pre-order plus I just bought a pcie x1 m.2 M-key card off amazon. I plan to use the SD card (emmc on my model) to boot a kernel and use the m.2 card for storage.

I think the pi foundation finally realizes that the pi has the potential of becoming an affordable and accessible SoM (system on Module) with a huge community behind it.


Almost! :) Not sure about the IO performance of this solution.

> The official pi4 compute base board routes that to a standard pcie x1 slot instead of USB 3.0

Do you have more info on this? I am not sure how it works.




It would be nice if it wasn't perpetually on backorder though.



Dumb question - if I want to do simple image processing on a Pi4 (2D FFTs, small kernels, summing 2D arrays in one dimension, finding maxima), and I care about performance, is this a reasonable stack to use, with decent prospects, or is it faster/safer to stick with the ARM CPU, despite the GPU? 1k x 1k monochrome images, at 3-10 fps (or more)? Jetson Nano seems to be the obvious commodity but pricier alternative with GPU access, but with a smaller ecosystem.


An extremely rough estimate: FFT (probably the most expensive of what you mentioned) needs 5N*log2(N) operations.

If you have 1M source floats and want 60 FPS, that translates to only 6 GFlops.

On the Pi4, on paper the GPU can do 32 GFlops. Again on paper, the CPU can do 8 FLOPs/cycle, which (4 cores at 1.5 GHz) translates to 48 GFlops. That's assuming you know what you're doing: writing manually-vectorized C++ http://const.me/articles/simd/NEON.pdf abusing FMA, using OpenMP or similar for parallelism, and having a heat sink and ideally a fan.
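The arithmetic above can be checked in a few lines (the 5N*log2(N) FFT cost is the rule of thumb quoted here, not an exact operation count):

```python
import math

# FFT rule of thumb: ~5*N*log2(N) floating-point ops per transform.
n = 1024 * 1024                          # one 1k x 1k monochrome frame
flops_per_frame = 5 * n * math.log2(n)   # log2(2**20) == 20
fps = 60
required_gflops = flops_per_frame * fps / 1e9
print(round(required_gflops, 2))         # 6.29 -> "only 6 GFlops"

# Paper peak figures quoted above:
cpu_gflops = 8 * 4 * 1.5                 # 8 FLOPs/cycle * 4 cores * 1.5 GHz
gpu_gflops = 32
print(cpu_gflops, gpu_gflops)            # 48.0 32
```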

So you’re probably good with both CPU and GPU. Personally, I would have started with NEON for that. C++ compilers have really good support for a decade now. These Vulkan drivers are brand new, and GLES 3.1 which added GPGPU is not much older, I would expect bugs in both compiler and runtime, these can get very expensive to workaround.

While I don’t have any experience with Jetson, on paper it’s awesome, with 472 GFlops. Despite the community is way smaller, nVidia is doing better job supplying libraries, CUDA toolkit has lots of good stuff, see e.g. cuFFT piece (I did use CUDA, cuFFT and other parts, just not on Jetson).


It still gives me a giggle that the flops numbers you're talking about were supercomputer-level when I was a kid, and now I can buy that kind of power with beer money and lose it in the back of a drawer.


On the other hand, it’s sad how we failed the software.

We have devices capable of many GFlops in our pockets and many TFlops on our desks, yet we pay by hours to use computers operated by companies like Amazon or Microsoft.


Just try it out, says it works with Vulkan 1.0:

https://github.com/DTolm/VkFFT


If you're willing to experiment there's this thread on pixls.us[0] that might be interesting to follow:

> frustrated with heavy dependencies and slow libraries, i’ve been experimenting with some game technology to render raw image pipelines. in particular, i’m using SDL2 and vulkan. to spur some discussion, here is a random collection of bits you may find interesting or not.

> also please note this is just a rough prototype bashed together with very little care and lots of hardcoded things just to demonstrate what’s overall possible or not.

Since that is SDL2/Vulkan based, you might get something out of the discussion there.

[0] https://discuss.pixls.us/t/processing-that-sucks-less/13016


Vulkan can be used for compute, although I would guess that few applications support that.

For pre Pi4 boards, there is an OpenCL implementation.

https://github.com/doe300/VC4CL


I would start with whatever is simplest and use the GPU if you actually need to optimize performance.


Great! Now it just needs to support the extension salad up to 1.2.161.

https://vulkan.gpuinfo.org/


Noob question - is there a quick summary somewhere of why Vulkan is better than e.g. OpenGL? What's different? And what kind of performance enhancements should one expect when using Vulkan?


OpenGL is single-threaded with a single state machine (OK, there is some atomic support via extensions). Vulkan allows you to manage the command stream yourself so you can multi-thread. This allows you to update more than one state at a time; for example, in OpenGL, loading textures and mesh data at the same time can't happen easily. In Vulkan you can tune this yourself (this is a very simplistic example). I usually say that Vulkan is more akin to you writing the graphics driver, whereas OpenGL gives you the driver.


OpenGL supports multi-threading via creation of multiple contexts, sharing resources (textures, shaders etc) between them.


Vertex array objects are not shared between contexts, though, which can be a bit of a pain.


As I understand it, Vulkan allows the application much more control over the rendering pipeline, whereas OpenGL has a lot of internal state you control indirectly.


One issue with OpenGL is that it was originally designed in the 90s, when modern GPUs didn't exist. Whilst the API has evolved since then, there was a reticence to throw it all away and start again with a clean slate, or at least do a radical redesign (something DirectX didn't shy away from).

Vulkan is built around modern GPU architecture as well as being lower level than OpenGL. This makes it easier to get the best out of the hardware.


Answer from another noob: AFAIK it is not about being better, but lower level.

Being closer to how GPUs work underneath gives developers more flexibility for optimizing.


Often Vulkan is closer to how GPUs work simply because GPU manufacturers had to change their GPUs to work well with Vulkan - for example, adding a co-processor to the GPU so that it can do job scheduling in a more Vulkan-ish way.

Vulkan was, after all, a spec from one GPU manufacturer (AMD); there are a handful of others with their own archs developed under their own assumptions.


Mantle was the single GPU manufacturer API, Vulkan was based on Mantle, but its creation had input from all the major vendors including Nvidia.


'Had input' means suggestions could be accepted, not that it 'is also based on' their designs.


Isn't Vulkan just the new name for the efforts that historically went into OpenGL?


It’s a clean slate API by the same industry group that’s lower level than OpenGL.


Does anyone have a good, solid, modern guide or pointers on making easy 3D stuff in text/CLI mode (not X Windows, etc.), hopefully using a convenient library? I'm trying to make a simple fast-booting instrumentation display.

I picked up a Pi Zero and muddled my way through what I could in C/C++ from piecemeal guides, and from text boot mode managed to initiate a graphics mode and draw a spinning triangle. I think I threw darts at the board for linking libraries in the compilation.

I have no idea what I'm doing, and of course, the level of scaffolding that had to be put into a simple spinning triangle was just astounding. My prior experience was using three.js and BabylonJS, and poorly. Is there something convenient to use library-wise in text mode, or is a safer bet doing windows -> firefox -> three.js?


If you’re OK with .NET, you can try my library: https://github.com/Const-me/Vrmac/ The library supports both X windows, and bare OS kernel DRM/KMS.

Spinning textured cube: https://github.com/Const-me/Vrmac/tree/master/RenderSamples/...

Spinning teapot with lighting, inertia, mouse and keyboard input, etc.: https://github.com/Const-me/Vrmac/tree/master/RenderSamples/...

Dependencies: https://github.com/Const-me/Vrmac/blob/master/Installation.m...


Thanks kindly, will take a look.


BTW, that thing also supports Windows 10. The same .NET binary runs on top of Direct3D 12 there, instead of GLES 3.1.

Way easier to debug on Windows, because Visual Studio and RenderDoc. There are a few incompatibilities, but these are minor. Here's one example, in a vertex shader: https://github.com/Const-me/Vrmac/blob/master/RenderSamples/...


ImGui [0]

Immediate-mode UIs are by far the easiest for quick and dirty work. Near trivial to get stuff up on screen you can interact with.

[0] https://github.com/ocornut/imgui


Thanks for the pointer, will check it out.


What software already supports Vulkan (so that in theory it can just be "switched on" for the RPi)?


This GPU abstraction library can do that: https://diligentgraphics.com/diligent-engine/

Haven’t tested their Vulkan support, but I did GLES 3.1, D3D 11 and D3D 12, all 3 worked great for me.


> Vulkan 1.0 conformance means the V3DV Mesa driver has passed all tests from Khronos CTS and should be compatible with most applications using this version of the API.


I've played around with the pi and pygame.

Is there a way this will help with accelerated pygame graphics?

(last I tried, all blits were in software)


Pygame still has no GPU/3D API support. But it can leverage NumPy arrays IIRC, so that's accelerated (compared to pure Python code at least).

At some point we'll have to migrate to new terms for "in hardware" for offloaded operations since we are no longer using fixed function hardware :)


Yes, pygame can use the Pi graphics hardware. It supports Vulkan, OpenGL, OpenGL ES, and some other modes through SDL2.

Note: the CPUs on the Pi are faster than the gfx hardware, and gfx is usually memory-bandwidth limited.


What games can I play on it this way?


Good question, does this mean my family's holy grail of performant Minecraft on a Pi is coming closer?


I actually wish there were a more powerful Raspberry Pi.

Whenever you say you want more power, the answer is that its primary goal is to be affordable and its primary target audience is schools etc.

E.g. the standard version of the Pi 400 with 4 GB covers most (consumer / school children / students) application purposes. 8 GB is rather needed in the area of video editing / prosumer / server, and would bring the price significantly above the "magic" 100 € limit.[1]

The vendor just doesn't want to acknowledge the real role of Raspberry Pi is not limited to being a cheap tinkerer board anymore, it has become a standard (for an ARM PC and a hackable set-top-box/console in particular) and a vibrant ecosystem has grown around it - there are plenty of reasons to still want a Raspberry Pi original (rather than something the competitors offer) when you don't need it to be so cheap (or even so small) but actually need more power, faster IO, more ports, more GPIO pins etc.

[1] https://pi3g.com/2020/11/04/will-the-raspberry-pi-400-be-ava...


> The vendor just doesn't want to acknowledge the real role of Raspberry Pi is not limited to being a cheap tinkerer board anymore

The vendor is a charity with a mission that they have chosen [1]. They can target the Pi however they want to meet that mission. The fact that it doesn't happen to do something that you want it for is your problem not theirs.

There has been a virtuous cycle that something originally aimed at education has been of use to so many hackers, resulting in high volumes and all the benefits that brings. But that doesn't mean that they also need to focus on other sectors.

[1] https://www.raspberrypi.org/about/


>> The vendor just doesn't want to acknowledge the real role of Raspberry Pi is not limited to being a cheap tinkerer board anymore

> The vendor is a charity with a mission that they have chosen. They can target the Pi however they want to meet that mission.

I'm not saying they can't; I'm saying that doesn't really reflect the actual reality. Many people buy it just because it's THE thing and a somewhat uniform standard, so it's easier to target and to benefit from the existing ecosystem around it.

This also is a real role among those the Raspberry Pi plays in the real world. It is a fact the vendor marketing seemingly ignores - such is the meaning of my statement.

I don't judge them or demand anything.


"The vendor" is a foundation that has education at the center of its charter.

https://en.wikipedia.org/wiki/Raspberry_Pi_Foundation

They are not doing this for profit, although they are self-financing at this point (from what I've read).


Ostensibly, yes. But consider that Eben Upton is an ex-Broadcom guy that helped build videocore and that they get everything at cost, and it looks like an incredibly powerful PR move for Broadcom with side benefits.


Well, I like to think the better of people. I don't know Eben, but he comes across (in what he wrote and I've seen him speak) as honest, unassuming and set on the mission (even if he is now running the commercial side).

I'd give some minor portion of my anatomy to be part of something like the Raspberry Pi project and make some sort of difference (even if it might be seen as pandering to geeky tinkerers rather than helping schoolchildren, it's still one of the most fun and rewarding things I can envision as an engineer).


I typically do, too, but the Pi4 compute module and board are all too eerily similar to a product my team demoed to Eben at CES... right down to the MCO for the port layouts, form factor, and the high-density module interconnects.


For about 2x the price of the Raspberry Pi 4 8GB model, I can buy a significantly more powerful x86 solution, in terms of compute and I/O.

Compared to that, what would make you go for a Pi at that price point?


What would you recommend in the x86 space?


I haven't actually purchased it myself so can't recommend as such, but for example the Biostar A10N-8800E looks[2] interesting.

It has a quad-core AMD APU, 2x DDR4 DIMM sockets, a PCIe 3.0 x16 slot, one M-key M.2 slot, 2x SATA, 2x USB 3.1 Gen 1.

Board itself with integrated CPU costs just ~20% more than the Raspberry Pi 4 8GB, at least here in Norway. Add in 8GB of value memory and a small/spare PSU and you should be not far off the 2x mark.

[1]: http://www.biostar-usa.com/app/en-us/mb/introduction.php?S_I...

[2]: https://www.techpowerup.com/review/biostar-a10n-8800e/


No GPIOs there, though.


Sure, but the post I replied to specifically said

> The vendor just doesn't want to acknowledge the real role of Raspberry Pi is not limited to being a cheap tinkerer board anymore

To me, tinkering means GPIO.

That's why I was asking why the want for a Raspberry Pi, why not get something else that already exists. I'm curious what they're looking for. Of course, if they still want GPIO then that's a valid point.


> not limited to being a cheap tinkerer board anymore

Not being limited to being a cheap tinkerer board doesn't mean not being a [slightly less cheap] tinkerer board among the rest of its roles. I didn't mean I don't need GPIO. I actually want more GPIO so I could connect a "hat", an infrared port, and a cooler and still have spare pins for actual tinkering.

Another cool thing available exclusively with Raspberry Pi is polished Raspbian OS coming with free Mathematica and other goodies.

Hardware codecs also feel nice and, AFAIK, you don't get them with x86.

Last but not least (this is arguably the most valuable part, actually), it is THE SBC. This means it's easy and efficient to target for a developer (incl. hardware developers) and easy and efficient to share problems and solutions with the community.


Ah, fair enough.

> At last but not at least (this arguably is the most valuable part actually) it is THE SBC.

I'm not sure a significantly more expensive, but more powerful, Raspberry Pi would have the same market appeal. Then again, what do I know :)


> 2x USB 3.1 Gen 1.

USB 3.1 Gen 1 means simple USB 3.0, right?


Well, yes, but they couldn't just call it that, now could they?


Any of the Udoo boards, most specifically the x86 Advanced Plus II. Plenty of GPIOs from the Braswell core, and another chunk from an integrated Arduino.

It's a bit more than 2x the price, but the flexibility and the standard boot and peripherals it offers are worth the cost for hobbyist hacks.


Well... no one is stopping you from pulling the Beowulf maneuver, i.e., duct-taping a couple together via UART/USB/I2C/Ethernet.

It isn't as sexy or prone to looking cool, but technically it is possible. I'm actually planning to tinker with a traditionally networked cluster that I'm going to evolve into an attempt at an SSI'd cluster.

That's really all most integrated circuits are nowadays. Just take a few ALUs, clock circuits, register files, memory, some caches, a few MMUs, connect it all with buses, fab x gangbusters, and you've got a new system. The toughest part seems to be getting someone to be frank with you and just give you an accurate datasheet / not screw you with locked-down firmware and rent-extraction arrangements.


You could get an already-packaged solution:

https://www.picocluster.com/collections/raspberry-pi


If only there were a bunch of other SBCs with different performance and price levels available...

Oh wait, just buy a Khadas VIM3 or the new Tinker Board etc etc


Do these support Raspberry Pi hats (e.g. the TV hat) and OSes (e.g. Raspbian)? I've read the software/compatibility part of Raspberry Pi alternatives (incl. those from big brands) is very bad. Perhaps this is not the case with Khadas, I hardly know anything about this brand.


Depends on the 40-pin peripheral layout. The Tinker Board doesn't run Raspbian, but instead runs ASUS' own OS.


That's great!

Is the driver also performant?


Quake 3 runs about 40% faster on this Vulkan driver vs. the OpenGL driver. Not sure if that's particularly impressive for Vulkan or not, but it does seem more performant than previous options on the Pi 4.



