> The primary reason someone would buy this thing is for automotive development. Since most cars and infotainment systems run on Arm chips, building software with an Arm workstation makes sense—it's a lot faster compiling Arm software native on a monster Arm CPU than even the fastest AMD or Intel CPU with emulation.
I think this is a misunderstanding on Jeff's part, or maybe he was given some bad marketing material.
Emulating ARM software is slow, but cross-compiling for ARM is not. Cross-compilers do not rely on emulation to compile.
The reason this machine compiles quickly is that it has 128 cores and a high TDP.
> Note that you can buy every part of this system—aside from System76's case and support—from a retailer like NewEgg (e.g. the motherboard + CPU combo for $2349) should you wish to build a custom arm64 workstation PC.
This link is to a different CPU that is not quite as fast as the one he tested (though still fast), and the board only has 10 GbE, not 25 GbE. Still a good bargain, but not the same part used in the system.

> I think this is a misunderstanding on Jeff's part, or maybe he was given some bad marketing material.
It was more the former, though mostly miswritten... what was in my head and what I wrote at 11pm last night were two different things :) (I've just updated the post)
The point is, this machine was built at the request (IIRC) of an automotive manufacturer who was tired of the delay in running a suite of (I believe) hundreds of ECU tests. They needed a machine that could quickly compile the arm64 software, then run it in dozens of VMs to execute their test suite. It was expensive in the cloud, and took much longer on a local workstation, so the Thelio Astra was born.
Ampere has a synthetic benchmark to kind of illustrate this point (though of course the benchmark is slanted completely in Ampere's favor, as Coremark is one of the few benchmarks where Ampere does better core-for-core than AMD/Intel): https://github.com/AmpereComputing/qemu-coremark?tab=readme-...
> This link is to a different CPU that is not quite as fast as the one he tested (though still fast) and only has 10Gbe, not 25Gbe. Still a good bargain, but not the same part used in the system.
That's a good point, again I had only checked late last night and thought it was the same M128-30, but alas, it's the 2.6 GHz version. To be honest, I'm not sure how to get the dual 25 GbE version of the board either, maybe ASRock Rack is only selling direct to partners. I will ask about whether it could be put into a bundle on NewEgg, I have no idea how they set up that arrangement, but IMO it should be available!
And having the CPU's easily purchasable without a motherboard bundle would be nice, too. They're probably not a high-volume item, but they have their place.
Cross-compiling can be a pain to set up, though. Sometimes it's easy, sometimes it isn't. I've definitely compiled under emulators in the past just to not have to deal with cross-compilers.
I definitely wouldn't spend $7000 to avoid the pain though!
Isn't it typically a flag, nowadays? Even for Python, getting an install that works on ARM from x86 is largely `pip install --platform manylinux2014_aarch64 --only-binary=:all:`, if I have it right.
I confess it took me longer to find that flag than I'd care to admit. But AWS Lambda using ARM made this something worth finding. And many of the alternatives on how to get the artifacts to deploy did focus on containers for emulation, if I'm remembering correctly.
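For reference, the platform tag that `--platform` filters on is simply the final component of the wheel filename (per the PEP 427 naming convention); the wheel name below is an illustrative example, not a specific release:

```shell
# A wheel's filename encodes its target platform as the last tag;
# pip's --platform flag selects wheels whose tag matches.
wheel="numpy-2.1.0-cp312-cp312-manylinux2014_aarch64.whl"
stem="${wheel%.whl}"               # strip the .whl extension
platform_tag="${stem##*-}"         # take everything after the last dash
echo "$platform_tag"               # prints: manylinux2014_aarch64
```

This is why `--only-binary=:all:` matters too: it forbids pip from falling back to a source build, which would run on (and target) the host machine instead.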
In theory it is a flag. In practice, however, it almost never works. I cross-compile all the time and there is nearly always something the developers didn't think of (probably didn't know about) that makes their code not cross-compile. Automake never works right. Even Cargo often fails to get something right. (I don't know Rust/Cargo well enough to figure out what went wrong, much less how to fix it, though I suspect the Cargo team would fix my bug reports if I could describe the issue in enough detail.) CMake always works in my experience.
Note that I didn't mention Python above. If Python's performance is acceptable, you probably have enough power (including memory) on the target to just compile everything on that system, thus avoiding cross-compiling.
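For what it's worth, the CMake path mentioned above usually comes down to a small toolchain file. A sketch, assuming Debian/Ubuntu's gcc-aarch64-linux-gnu package (compiler names and paths will differ on other distros):

```shell
# Write a minimal aarch64 toolchain file; CMake then drives the
# cross toolchain directly, with no emulation involved.
cat > aarch64-toolchain.cmake <<'EOF'
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
# Search the target sysroot for libraries/headers, but never for
# build-time programs (those must run on the host):
set(CMAKE_FIND_ROOT_PATH /usr/aarch64-linux-gnu)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
EOF

# Usage: cmake -B build -DCMAKE_TOOLCHAIN_FILE=aarch64-toolchain.cmake
```

The `FIND_ROOT_PATH_MODE` lines are where most hand-rolled cross builds go wrong: without them, CMake happily finds host-arch libraries and the link step fails.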
Fair that there are often problems. My expectations would be that nothing is flawless, with how often native builds can be difficult. Still, I thought for the common targets with minimal dependencies, it was largely a flag.
And apologies for the confusion of mentioning python. I did that largely because it was frustratingly hard to get a cross target python for a long time.
Python is doing a lot of abstracting for you. In the realm of building C/C++ software using an odd menagerie of system packages and from-source dependencies one might need to figure out the cross-build process of each dependency, and make a prefix for the cross-arch system libraries… it’s doable but the paved path is definitely narrow and there’s lots of thorns when you stray into the jungle. And then you still need to run the tests.
Apologies, my point was that for a while, getting Python to go cross-platform was surprisingly difficult. Specifically if you had any native code included, of course.
And agreed that you needed a cross-compiler with all of the necessary libraries. For common targets, I don't remember this ever being particularly difficult with gcc?
I don't understand the value. Why not build for your native CPU and run the tests there? Eventually you need to cross-compile for the target and run integration tests as well, but the vast majority of your tests should be portable enough to run on anything you compile for. You probably want to write some custom test fixtures/emulators so that you don't have to test everything in a full car - dangerous error conditions in particular should be tested in emulation as much as possible before you find a lab to run a real-world test (if you test them at all - is it worth exploding an engine just to verify you show the correct RPM?)
In the 1980s everything was written in assembly and you depended on the clock speed. These days, though, embedded developers should have learned that the CPU you write for will go out of production, so you should write portable code except for the small shim layers that talk to the real hardware.
> In the 1980s everything was written in assembly and you depended on the clock speed. These days, though, embedded developers should have learned that the CPU you write for will go out of production, so you should write portable code except for the small shim layers that talk to the real hardware.
Ah... the illusions of "portability". A lot of the reason you want to test your code is that there may be behaviors specific to the target platform (either in the hardware itself, in the compiler, in the runtime, or in support libraries/frameworks). You may well write with the intent to be portable, but you test for the reality that you never really are.
Testing on a desktop machine with many cores is not a good simulation of that environment. You run your tests on your local machine as much as possible. Then you run on the real production hardware enough tests to be confident.
I don't work automotive, but my application is very similar - an embedded display running ARM CPUs. I do a lot of testing on x86-64 and only rarely encounter portability issues. When I do discover such things they have always been things that I need real hardware to debug on not something an ARM desktop would help with.
I find they are rare though. Once every 10 years for a good-sized project seems right. When they happen, I wouldn't expect anything other than the exact hardware to reproduce them, so I'm still not wanting a workstation like the target.
I've run into tons of optimizer bugs. While it's possible they would require the exact hardware to reproduce them, that has never been the case. Generally you just need something to run the actual output of the optimizer. You can even reproduce the bugs in emulators, it just takes way longer.
Most optimizer bugs are about the optimizer making assumptions about properties of the source code, not assumptions about properties of the hardware. Certainly it can go both ways, but of course you want to test the actual output of the optimizer. Doing otherwise is not at all what I'd call standard practice.
Yeah, I'm not entirely sure who this system is for. A lot of automotive infotainment is AMD now (if you are gonna sell to console makers..) and ARM64 is a perfectly "sane" architecture that has very decent software support in everything you would need to make a cross-compiling toolchain. Ironically the kind of systems that don't have the latter generally can't be very well emulated (or bought as a workstation) either.
The last time I wished for a native high performance ARM64 system was using PyInstaller to deploy something not very important to an ARM64 based embedded system which needs the native environment. But that's the reason it was "not very important"; I won't live to see them make a sane Python deployment story.
I cannot wait until they get into Mobile ARM and PopOS with Cosmic is a thing. Apple's quality on the hardware has [finally] returned after the War On Ports was declared over, but the software is still getting jammed with useless BS, parlor tricks, and gizmos that will be forgotten, while ignoring core stability, QOL issues, and further becoming a nanny-state. I'm ready to jump ship.
I wonder if Linux/ARM (or specifically System76) has figured out how to do hybrid sleep-suspend out of the box. “closed laptop” performance in AMD/Intel land seems to have regressed substantially in the last 15 years; Microsoft is even encouraging vendors to drop support for regular S3 sleep https://www.spacebar.news/windows-pc-sleep-broken/ and my Steam Deck AMD machine makes no attempt to hibernate and will drain the battery in sleep in about a day and a half.
I personally had an okay time with Linux sleep in 2019 but for the life of me couldn’t get sleep-to-hibernate working and a quick google does not instill confidence in the current state of the world here. At least the process to set it up in 2024 is more straightforward on PopOS compared to my 2019 Ubuntu experience https://www.adlrocha.com/til/2024-02-13-linux-hibernate/
Power management is the one area Apple’s nanny state is a substantial advantage. Android took a long time to get good at battery compared to iOS (on my Nexus in 2013 I remember doing a LOT of fiddling to make sure random apps weren’t waking the CPU too much) but it seems great today.
Hopefully mainline Linux laptops get there too. Until then I’ll keep updating my mac-setup.sh script with ever more work-arounds…
I agree with you on the sleep-hibernate thing. Unfortunate that Linux sleep is noticeably worse. Switched from Mac to AMD Framework 16 with Fedora and I discovered it quickly.
My experience with the 13" AMD Framework is that a lot of the sleep-related jank seems to come from user space, i.e. the desktop implementation.
First I had Fedora/KDE and had lots of trouble. Now I'm on GNOME and everything just works, nearly as well as on macOS. Battery life sucks of course, but that's AMD for you.
> MacOS is far buggier than either Windows or Linux.
On the UI/UX side of things, definitely a worse experience than even stock Windows or Linux with Gnome, and that's coming from someone who generally likes it but got so tired of recent (at the time) bugs and jankiness that I had to migrate elsewhere. This was probably around 2016 or so, around the time when it seemed like there was no QA department at all at Apple.
> On the UI/UX side of things, definitely a worse experience than even stock Windows or Linux
There's definitely been a bit of jank and a few poor design decisions made, but to compare it to Windows unfavorably is a bit overblown, considering Windows doesn't seem to have ever deleted code. Version to version on Windows just adds thin veneer over the old one, and leaves me scratching my head; many "They still have THIS!? wtf" moments, like putting carpet over a crack in the foundation
> Version to version on Windows just adds thin veneer over the old one
This + keeping the old stuff around is what makes Windows more fitting than macOS in a professional environment. Still, it's hard for me to understand using either if you're a developer who favors stability.
Just the other week I was unable to power off or reboot Windows without it forcing an update on the next boot, which pisses me off. Of course, the next time I wanted to use Windows I just needed to check on something quickly, but then this forced update came back to bite me, and what should have taken 2 minutes at most took 15.
> This + keeping the old stuff around is what makes Windows more fitting than macOS in a professional environment
Of course, particularly in professional industries like engineering that rely on expensive and complex Windows-only software, running old games, etc. I don't claim it's not worthwhile, but it's also a lot of baggage weighing it down. I'd be fine with running it for that purpose, since my job would not necessarily be to have opinions on my operating system. Having been in a supporting role for general office Windows PCs recently, I did find its stack of even first-party quirks very frustrating to deal with, even though I use it regularly for gaming. IIRC I even had to dip into the registry editor to fix a value that was left unset by the imperfect configuration of the bizarre Outlook profile system.
Most of the time I just install the game I want, run the game, shut it down, leave it at that, and rarely run updates. Relying on it for anything more complex would drive me crazy.
macOS could be better in some ways: the Settings app lately is miserable, the mouse tracking controls suck, and SwiftUI has performance/documentation issues and should be open source. But it's reliable and the UI is pretty consistent, and while I'm sure there are some deeper quirks once you dig, I'd really have to be looking for them.
Haven't tried Gnome in a while, it's always seemed fine enough, don't have much to say about it atm
1. Don't like Gatekeeper? Run `sudo spctl --master-disable`. Afterwards, go to System Settings, Privacy & Security, and you'll have the option to allow applications from "Anywhere." You can now run unsigned code whenever you want. No notarization required. No disabling SIP required. Although, if you don't like SIP, `csrutil disable` from Recovery Mode will do the trick. Don't like needing to "bless" your boot image to ensure a lack of tampering? `csrutil authenticated-root disable` from Recovery Mode, and we're mostly back to the older days of macOS. Don't like code signing anywhere, even for system services? `nvram boot-args="amfi_get_out_of_my_way=0x1"` from Recovery Mode is all it takes.
I would never use a broken signed system volume on a work machine. However, good news, macOS lets you have completely isolated secure boot practices for each volume. As a result, you could have a fully-secure stock macOS with Apple Pay and all the features; and a macOS with no SSV, apps from Anywhere, and a homebuilt XNU kernel side-by-side without any loss in functionality.
2. "useless BS, parlor tricks, and gizmos" is equally applicable to Windows; and frankly many Linux distributions at this point. Only KDE was shameless enough to ship a rubber ball for my desktop (https://www.omgubuntu.co.uk/2017/03/kde-bouncing-ball-plasma...). I actually do appreciate some of macOS's "parlor tricks" and use them regularly (such as "desk view" with my iPhone), so YMMV.
3. Don't like the in-box apps? Put them all in a folder, disable notifications, never think about them again. At least they won't nag you to turn them back on, like Windows begging you for a Game Pass trial. Or, if you're really hardcore, disable SIP, mount the root disk, delete them all, "bless" your disk image (unless you killed authenticated root), and reboot.
Sure … it’s possible to turn off security. But what I want is a working and secure system that isn’t annoying. macOS today feels as obnoxious as Windows Vista did in 2007. I still weep for everything we’ve lost since Snow Leopard (greatest OS release of all time)
Like yesterday double click title bar to maximize windows mysteriously stopped doing anything on my M4 MBP. Why??? Guess I’ll try a restart and see if it helps…
By "release" I'm referring to the whole 10.6.x series; I have no illusion that 10.6.0 was some perfect bug-free DVD. And by "everything we've lost," I mean how the overall macOS experience degraded from the high that Snow Leopard represented:
- grid Expose (Mission Control returning to the jumble in Lion… whyyyyy)
- persistent grid of virtual desktops
- one button access to widgets
- Fun, visually distinct and obviously interactive icons/buttons
- window title bars and proxy icons
- bundled apps like Pages and iPhoto before they were simplified/rewritten/dumbed down
- System Preferences (RIP) before they buried display settings and network things
- green traffic light maximizes window instead of annoying full screen thing
ok, how do I uninstall siri, screentime, alert center, apple intelligence, spotlight, tips, get back 32 bit support, remove the OS level advertising tracking support, and uninstall game center?
We were piloting the new Snapdragon X Elite Surface Laptops to roll them out to our org about 3 months ago. Loved everything about it, but when it came to software... things became messy. The MSFT Office suite was great, as well as any browser of course. Then we had to make sure some older Excel plugins worked for financial users; they didn't. Had to use some workarounds. We tried our front-end app Node build pipeline... we gave up and had to put it in a container to get it to build properly. Tried a few things through the CPU emulator; it was nowhere near Apple's at the launch of the M1. Ultimately, we had to go back to the Intel Surface Laptops. It felt like the ARM laptops were rushed out about a year too early...
The really sad thing is the complete absence of support for WinARM when shit goes wrong. I've got one of Dell's Qualcomm 8CX Gen2 based inspirons that broke its windows install when it crashed on boot one day.
It's been sitting on a shelf for almost 8 months and I periodically take it down to see if shit's been fixed for restoring its install.
Win11's built-in self-restore tools attempt to wipe and reinstall Windows but ultimately fail, and it reverts to the broken state. But this is actually better than 4 months ago, when the online reinstall function was ENTIRELY broken.
It took until a month or so ago for Microsoft to even offer Win11 ARM ISOs on their website that weren't virtual machine files. But these images don't work on my Inspiron; I'm guessing they don't bake in 8cx Gen2 support. I can get it to boot into the installer, but USB is broken entirely, so it can't see anything on the installer USB once it's booted. Keep in mind WinARM products have been on the market for years at this point. It's insane vs restoring an x86 laptop.
Dell's tools for building a system restore USB are completely broken. I had to go into their text logs to get the web link and authkey to download the Win11 ISO. Even after building the boot drive, it installs a broken version of Win11 that can't successfully update and has a constantly crashing explorer.exe.
I contacted Dell support about this and got strung along for 3 months of back and forth emails before they told me I needed to upgrade to a paid support tier to have a functional laptop again.
And then we get to linux where 8cx Gen2 support is a fucking void no one wants to touch because it has a different device tree than Gen1 or Gen3 which seemingly have some manner of linux support.
I really hope any IT squad that gets suckered into buying SnapdragonX laptops is ready for a world of pain when something goes wrong.
Announcing Windows 11 Insider Preview Build 27744 (Canary Channel)
Hello Windows Insiders, today we are releasing Windows 11 Insider Preview Build 27744 to the Canary Channel.
We are also not planning to release SDKs for 27xxx series builds for the time being.
What’s new with Build 27744
New Processor Feature Support in Prism
In today’s Canary Channel Insider Preview build, we’re previewing a major feature update to Prism, our emulator for Windows on Arm, that will make it possible for more 64-bit x86 (x64) applications to run under emulation by adding support for more CPU features under emulation.
This new support in Prism is already in limited use today in the retail version of Windows 11, version 24H2, where it enables the ability to run Adobe Premiere Pro 25 on Arm. Starting with Build 27744, the support is being opened to any x64 application under emulation. You may find some games or creative apps that were blocked due to CPU requirements before will be able to run using Prism on this build of Windows.
At a technical level, the virtual CPU used by x64 emulated applications through Prism will now have support for additional extensions to the x86 instruction set architecture. These extensions include AVX and AVX2, as well as BMI, FMA, F16C, and others, that are not required to run Windows but have become sufficiently commonplace that some apps expect them to be present. You can see some of the new features in the output of a tool like Coreinfo64.exe.
Edited the OP to clarify that it's compiling + running test suites, which I believe was the main motivation for this machine's existence (an automotive developer was frustrated with the speed of running hundreds of arm64 ECU VMs).
I would love to know how efficient it is through those ports and translations, vs running smoothly through sheer horsepower (Xbox 360 is nearly two decades old).
Based on the presented multi-core benchmarks, the single-core performance is 40-50% of a single M2 Ultra core: 40% on the HPL Linpack benchmark, and 50% on Cinebench. That is compared to the average M2 Ultra core, counting both the fast "performance" cores and the slow "efficiency" cores. So the M2 performance cores are even faster than that, never mind the M3 or M4. Then again, you cannot buy a 128-core computer from Apple, so it's an apples to oranges comparison.
>The nice thing about System76 selling this machine is you get their excellent support (you can choose from 1-3 years of support at purchase). This is a huge boon to businesses who elect to run open source software, because they get a fully-supported hardware configuration instead of having to figure out compatibility themselves.
This entire thing reads like an advertisement.
>Note that you can buy every part of this system—aside from System76's case and support—from a retailer like NewEgg (e.g. the motherboard + CPU combo for $2349) should you wish to build a custom arm64 workstation PC.
$7000 for a computer you can build for around $3500.
You can literally just take their part list and install the OS yourself.
I understand companies need to make a profit, but this is nuts.
Likewise in the laptop market, you're almost always better off installing Linux on a Windows laptop vs paying 2x for a System76 version.
Linux can't simultaneously be easy to install and justify a 100% markup.
For my own computers, I prefer to build myself, as I save a ton, and usually end up with a machine I'm happier with.
But as someone who's helped with many IT purchasing decisions, having a supported configuration and a company to call (instead of me being front line IT support, PC support, hardware support, etc.) is a huge benefit.
And right now there aren't a ton of Linux-first hardware vendors, so System76 charges a premium. It's worth it to some, but not to many.
Note that the system starts at $3299; $2,000 of the as-configured workstation's price was the 512 GB of ECC RAM. Those sticks cost $160/ea on Amazon.
There's a markup, but it's not $7000 for a computer that costs $3500 DIY. The DIY cost would be somewhere around $4000-4500 and would have a 2.6 GHz CPU and Dual 10 GbE instead of 25 GbE.
I'd like to see Ampere get all their CPU models out to retail availability, though. Right now, if you bought the base model for $3299, you could buy an M128-30 CPU for $2299, but only through NewEgg's "Request a Quote" system, which is a crapshoot.
>$7000 for a computer that costs $3500 DIY. The DIY cost would be somewhere around $4000-4500 and would have a 2.6 GHz CPU and Dual 10 GbE instead of 25 GbE.
I'm seeing some deals on the RAM which can get it down to $1200 or less for the 512 GB.
So instead of a 100% markup, a modest 75%.
I just can't imagine the person who: A) needs this setup, and B) can't fit some computer parts together and install an OS.
Even then eventually issues will emerge. I have more faith in being able to fix my own PC vs waiting a full week for tech support. All the parts should be under warranty.
As Jeff said, professionals and businesses. Imagine asking 300 engineers to furnish their own workstation with a budget. Everywhere I worked this would be nuts, maybe 2% of engineers would be excited to build a machine, 98% would just buy OEM. And what happens when something goes wrong for those two percent? If someone at my team meeting would say “sorry I can’t help with the project this week I’m arguing with MSI support so I can RMA my motherboard” like… what?? Losing a few days of work is already $2000. Having an unbounded worst case remediation time from failure is just unthinkable.
When I have a hardware problem with an Apple device, I can drive to an Apple Store and exchange the device for a new copy the same day, errand will take 2 hours in total. Big OEMs like Lenovo will send a guy to your location to fix the system within a day or two. System76 will mail you a new copy as soon as they can build it.
I used to build my own gaming PCs and home servers but the next one I buy will probably be from a high-end OEM because I will gladly pay a 75% markup to spend my time on other things. I used to find scouring the Internet for good deals, researching fan ducts, and zip-tying cables fun, but I’ve hit my lifetime limit doing it - now it’s just busy work.
System76 has their own UEFI BIOS and EC firmware, but that doesn't even make it better since you can hypothetically buy a Clevo laptop and mod it to be a brother from another mother.
Just buy a Thinkpad or another laptop that has good support for Linux.
I definitely get the argument that you're supporting Linux by buying a System76 computer, but the last time I checked, IBM Red Hat isn't exactly a charity.
I know there are a bunch of laptops that have good support for Linux, but I'm talking one more level deep than that. Good support for Coreboot UEFI is hard to come by, and FOSS EC firmware (this makes laptop specific features possible) to my knowledge is unique to System76.
Those two components are the unique selling propositions System76 has over your standard Linux-ready laptop. And they aren't quite unique when you realize System76 is commissioning laptops from the likes of Clevo.
E: I know I just rehashed my last comment a little bit, but it felt like the point didn't come across properly. I was supporting the mention that the Linux-first USP of S76 wasn't strong enough, and building on top of it.
Something to do with free software everything, not features that Coreboot provides that a standard InsydeH2O UEFI BIOS doesn't have. I don't quite get it myself, and it is quite a niche, but some people care about it more than others. A very small number of people.
There's great risk in installing such a thing yourself, as you can end up bricking your laptop with little recourse other than to break out a soldering iron. Or, if you don't do your homework carefully, you can end up with a half-functioning PC, because things like display brightness and audio apparently require driver binaries within UEFI to work.
That is a Clevo laptop, but many vendors use their models and don't advertise as such. Gigabyte, for example, uses Clevo for their bottom-tier models with no special marketing name. I wouldn't doubt that their other models are Clevo too, but those ones I can tell are.
I'm looking on Newegg and it looks like the motherboard/Ampere CPU bundle is $1434 (ASRock Rack Ampere Altra Bundle ALTRAD8UD-1L2T Deep Micro-ATX Server Motherboard, Single Socket (LGA 4926), with Ampere Altra Q64-22, 64 cores). Case and power supply: $300. 64 GB RAM: $65. 1 TB M.2 SSD: $55. A400 graphics card: $180 (not a great card). That comes out to $2034. So yeah, they've got about a $1250 markup.
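Sanity-checking that parts math (prices as quoted in the thread; they'll drift over time):

```shell
# Parts list from the comment above, in dollars.
bundle=1434      # ASRock Rack board + Ampere Altra Q64-22 bundle
case_psu=300     # case and power supply
ram=65           # 64 GB
ssd=55           # 1 TB M.2
gpu=180          # A400 graphics card
diy_total=$((bundle + case_psu + ram + ssd + gpu))
echo "DIY total: \$$diy_total"                        # prints: DIY total: $2034
echo "Markup vs \$3299 base: \$$((3299 - diy_total))" # prints: Markup vs $3299 base: $1265
```

Note this configuration has the slower 64-core Q64-22 and far less RAM than the reviewed machine, so it's a floor on the DIY price rather than a like-for-like comparison.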
I'd love to be able to get a specced down, "low"-power, version of this to use as a home server:
- lower-end cpu (the 32-core version would be just fine for me)
- keeping the possibility of having a lot of RAM around (128 GB? 256 GB? 512 GB?)
- low-end gpu or no gpu at all (it would be headless anyway)
- keep the ipmi around
- maybe 4 hot-swappable 3.5" drive bays?
I could see myself blowing $2000-3000 on something like that, as long as I can keep it working for ten years. Maybe update it a bit over the years.
And I'm not joking about the ten years: I'm still using a 2014-era HP MicroServer Gen8 and it works great. It shows its age, but it does still work great.
Yeah I was reading the article and started fast-forwarding through the description of how the machine was built because I wanted to get to all the Windows/Linux stuff, then was like wait this is System76 man! Their laptops are off the shelf stuff but their desktop hardware is just so classy (I worked for them for a year so I'm a bit biased).
I'm waiting for their in-house laptop they keep saying they're working towards. They say their workstation cases and keyboards are learning steps toward creating the laptop, but so far nothing is here.
I sure wish Jeff would learn some Nix or NixOS. Not only does cross-compilation not need to be hard, there is a cached flake that supplies Cosmic for aarch64-linux.