> Note that since the server program exposes an HTTP server you can actually navigate to the IP address of your PI from any device connected to the same wifi and control your pedal chain from there.
That's a killer feature for me, hiding at the end of the README. I have a Fractal Audio FM3[0] at home, and the only way I edit my patches is using their editing software over a USB connection to the device. Adding the ability to program (and even control) my patches live over any wifi-enabled device is even cooler!

[0] - https://www.fractalaudio.com/fm3/
I had sketched an idea for a web UI that would talk to a VCV Rack instance, outputting signals to a real Eurorack device with an Expert Sleepers module… need to keep hacking on that.
The Raspberry Pi 4 has separate USB hubs: one for WiFi and Ethernet (an internal hub), and another for the external USB ports. The USB port service loop will run with higher priority, so there doesn't seem to be any serious adverse effect.
The same isn't true for SD card access, which does cause dropouts. I've seen a video that suggests that disabling power management for the SD card hardware will correct the problem -- specifically that changing power state causes a 3ms delay. But I'm not quite sure how to go about disabling that on Raspberry Pi OS.
WiFi doesn't seem to affect audio latency. That's not true for the Raspberry Pi 3, where WiFi and USB ports do run on the same USB hub.
Lol ok. Meanwhile the rest of the industry is moving on. I mean, how do you even do a wired connection to a drone? A lot of times guitarists will perform on a stage and the client wants aerial views without the cost of a crane. Do you just tell them you can't do it and have them go with a different company?
I'm a little confused by this thread - it seems like you suggest that the drone should be controlled by the SBC in an effects pedal, that the guitarist is using to provide low-latency audio processing? As a techie and perpetual-intermediate musician, that seems a bit odd...
But, incidentally, at my day job I'm working on an embedded Linux system where audio latency matters, and which may well wind up with a WiFi radio (where latency probably doesn't matter so much). So, I'd like to understand issues in this space.
Special nonstandard proprietary wireless protocols are the answer you're looking for. The same shit wireless stage mics use. Or control over cellular data. I can guarantee you they aren't pairing them to an 802.11 router. Lol ok.
There are a bunch of USB over IP boxes you can buy, so it depends on what you're looking for, port-wise. That plus a WiFi router gets you what you're looking for.
Always hear about the Fractal Axe-Fx III -- seems like the gold standard in guitar FX. Didn't realize they had smaller, non-rack-mount form factor devices. Very interested in trying out the FM3 now that you brought it to my attention. Thanks!!
Simplifying a bit: It's a pedalboard with a Linux SBC inside where you can load LV2 plugins and chain and route them as you wish. It has a sleek web interface for management and some sort of "pedal store". Like an advanced multi-effects pedalboard. Originally the MOD devices were crowdfunded.
And MODEP (as @tcrenshaw wrote in another comment) is a MOD emulator for the Raspberry Pi, just in case you want to play with it: https://blokas.io/modep/
The QC is my go-to pedal these days. I have a Helix (rack) and a Kemper (rack). While there have been some hiccups with the software, it really is a solid little box.
The Axe-Fx is absolutely insane in the amount of control it gives you over your tone. I would say far too much control. Probably the best tool for a tone tinkerer.
Also Pisound (https://blokas.io/pisound/), which has the benefit of built-in DIN MIDI, but without the active community sharing software (you kind of have to build everything yourself with PD or SuperCollider). Some people have gotten Norns running on Pisound, but I could never get it to work.
This is a neat little box, but pre-soldered ones seem to be only available on the used market. There are bare PCBs out there, but I’m not very confident with SMD parts.
This is the thing that bums me out with DIY audio: people come up with extraordinary designs, do a limited run and then never (or rarely) make any more.
Norns is made and supported by Monome, and it is very much supposed to be a Product That You Can Buy... except unfortunately it's based on the Compute Module 3, which has been unavailable for a good while. They have been available in small batches occasionally over the last year, and hopefully will be more available soon.
Surface mount soldering is not too hard. I can't view the BoM on mobile, but from the photos the soldering looks achievable for someone with through-hole soldering experience. Take a look at the document here https://github.com/monome/norns-shield/tree/main/bom - if it's mostly 0805-sized components you should be fine. Even a few 0603 would be okay if you have good vision and a steady hand.
> people come up with extraordinary designs, do a limited run and then never (or rarely) make any more
I never bought a Milkymist. There are no more being produced. The design has never been updated for modern formats, such as HDMI. I have no clue how to design hardware.
Sweet, was just looking for something like this the other day. My use-case is not for guitar, but just to offload some fx processing out of a DAW - a friend and I have gotten into jamming recently, and while you can do just about any musical production task these days with a budget laptop and enough patience, where you quickly run into limitations is running multiple realtime effects. I'm not yet serious enough about it to start spending $ on effects pedals (which typically cost hundreds of $ each), or even to have a clear enough idea of what pedals I would want, but I know enough about electronics to realize most modern ones are just a glorified Arduino with a 500% markup, so a budget-friendly programmable Swiss Army knife pedal would be a dream.

For my use-case the touchscreen is entirely unnecessary (programming it via a WebUI sounds more convenient anyway if you don't need to use it sans PC), which is inflating the BOM by about 500%, and of course the RPi 4 is a uniquely poor choice of target platform at this particular moment in time, so seeing if it can run on a headless Pi Zero is definitely going on my endless to-do list. ;)
I was thinking about something similar, but wondering if you could do it over Ethernet and the VST API with low enough latency to be useful for a DAW. Or if you can figure out a way to make latency less of an issue altogether and enable remote VSTs.
The 500% markup should also get you reliability and a decent chance that your tech is familiar with it were you ever to be on stage. Look around, everyone uses the same stuff, for the most part.
1. I'd have a hard time seeing that small screen onstage, and my big foot would likely mash the wrong effects button. Others might find it easier.
2. There are tons of good, cheap effects boxes out there, and easy to find used. I like Pi boxes, but this seems like a homebrew replication of what's on the market.
3. All good boxes are low-latency, in my experience. It's a fundamental thing I think most players need.
You’re definitely right. I think the draw of all of this is to make it yourself. The same could be said of people who make their own DIY home weather stations or web servers. You could always outsource for the same thing that’s better, more frequently updated and probably cheaper when you factor in the time it takes to make. It’s just neat to make the tools that you ordinarily have to dish out cash to get. :)
As others have mentioned, I think the interesting thing here would be understanding the latency for processing the signal. Anything in the single digit milliseconds would be fantastic! I know at one point I was looking into Raspberry Pi and ended up on Pedal Pi[0], though I couldn't get the parts I needed to make it work.
I ended up using Teensy[1] and related audio shields[2] to get things working from a sound/acceptable delay perspective. But being able to get things going on a Pi would probably make the more advanced input controls much simpler to implement from an OS support perspective (like in this project with the WebUI). The UI I'm seeing in this project looks great and it would be cool to potentially see something like kits/preinstalled images roll out for this!
IME, if you can't find an important quantity like that front and center in an advertisement, then its value is going to be terrible.
Applies to many things related to an advertised product. Things like price, quantity, material, country of origin, standards met, certifications, scores, etc.
I'm still patiently waiting for future digital mixing consoles to do all processing in software on inexpensive x86 or ARM processors. Currently due to latency and reliability requirements all DSP work is done on dedicated chips or FPGA which brings up the BOM and engineering cost. They often have a small ARM/Linux module which is used for the displays and network control.
The CPU tech is here today, and modern general purpose processors do a good job of handling low-latency audio. Someone just needs to put all that together in a unified and stable package...
Not exactly what you're describing, but I've been running my band directly into Logic Pro on an M1 (both for recording, and live shows). Dry signals go through amp sims and effects processors, and then route to both a FOH mix and an in-ear mix all on the laptop.
Wish I had seen the OP's project months ago, but one bonus of the setup I describe is the ability to swap effects after the fact (by virtue of having the dry signal) and the ability to automate effects (so I can engage distortion etc as soon as we hit measure XYZ instead of having to click a pedal)
I've done live mixing with a laptop, DAW, and interface in the past and it does work, but it's not something I would be comfortable with for an important show. Even with something like SAC (which is specifically designed for the task) the chances of hangs, crashes, etc. go up, as at the end of the day it's just a program running on top of your OS. The setup and config also get a bit hacky and you'll be the only one able to use it. As far as I know the only specialized system that does this is the Waves LV1, which has a dedicated OS running on top of x86 hardware for processing. While I haven't tried it, apparently it works quite well.
However I was more thinking of mixers like the QSC Touchmix/X32/etc. where the DSP probably eats up quite a bit of the unit cost, and how the price could be significantly brought down if the innards merely contained analog I/O and converters all tying into a powerful SoC.
Modern CPUs are surprisingly good at low-latency video as well. SIMD on something like a Zen4 core is a really big deal if used properly.
I've got some prototypes in C# that can draw a 1080p bitmap and encode to JPEG in under 10ms. Using single threading, socket mux servers and aggressive multimedia timers means my network delay is usually right at 1ms.
I feel like if you are just worried about audio, there is definitely enough bandwidth here to do what you need to per unit time.
Spot on. For context: 60fps gives you ~17ms of wiggle room before you start dropping/delaying frames. With 96kHz audio you have 0.01ms between samples. Drift above 0.05ms and you'll start introducing time domain issues in the human hearing range.
In other words, 'realtime' audio processing needs to happen ~1700x faster than 'realtime' image work. Bandwidth isn't the limiting factor; deterministic and uniform latency is the challenge for any signal as the sample rate goes up.
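To make the arithmetic concrete, a quick sketch in C++ (exact numbers; the ~1700x above comes from the rounded 17 ms / 0.01 ms figures):

    #include <cstdio>

    int main() {
        const double frame_ms  = 1000.0 / 60.0;     // ~16.7 ms per video frame at 60 fps
        const double sample_ms = 1000.0 / 96000.0;  // ~0.0104 ms per sample at 96 kHz
        std::printf("frame %.2f ms, sample %.4f ms, ratio %.0fx\n",
                    frame_ms, sample_ms, frame_ms / sample_ms);  // ratio ~1600x exact
        return 0;
    }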
Poor word choice on my behalf. Drift was meant in terms of sync between different channels or parallel processing paths and in reference to GP. Guitar effects (generally single channel, sequentially processed) bypass that.
That 10ms benchmark is a good one though. At that time window you've reached a full wavelength at 100Hz and it's right about the point where well practiced humans (e.g. musicians) will begin to perceive delay. It's a fascinating intersection between physics/engineering and psychology as signal latencies make the jump from being perceived as timbre to delay.
Fair point. Audio also has a tendency to require more serialized throughput in complex signal chains. Video is more trivial to chunk out and process in parallel.
> I'm still patiently waiting for future digital mixing consoles to do all processing in software on inexpensive x86 or ARM processors.
Harrison Consoles have done this for more than 10 years.
I cannot confirm this in the same way, but I think it also likely that both Lawo and Studer digital consoles do this, and also possibly Allen & Heath. All 4 companies run Linux internally on their consoles.
Most processing of audio isn't CPU performance limited... For most realtime audio mixing, it doesn't matter if you use 1 watt or 10 watts for your CPU - the big speakers will easily be drawing far more, and the performers' time will be costing far more than the electric bill anyway.
Modern consoles are plugged into the wall and are large enough for active cooling. For a 1U rack unit they can be cooled with small server fans; noise is not an issue as the amp fans are just as loud and the music will drown them all out anyways.
Does this pose a problem for digital mixing consoles? As far as I know, these already have fairly beefy fans for heat exhaust, so I don't know why they couldn't just add more airflow?
Not especially. I'm more thinking of things that have a potential of being near a live mic, where a cooling fan is a no go, at least without a defeat switch.
A better alternative to the Raspberry Pi which is more suited for musical applications (and currently much cheaper) is the Daisy Seed by Electro-Smith[0]. You can program it in C++ / Pure Data / Arduino and Max/MSP Gen~. The community is very helpful and there are plenty of examples to start with. They also provide a few options to get started with some knobs/buttons. I'm not affiliated with them, just admire the whole ecosystem.
I've had some amazing ( horrible ) adventures in low latency music stuff lately. It has made me think about going back to the hardware side of music production. Previously I was an ableton-only dude.
All of the vst plugins are CPU bound and even though i have a top of the line i7 and 32 gigs of ram, my computer slows to a crawl when editing even moderate sized songs.
Specifically, there is an nvidia bug that introduces latency to real time audio, making guitar and other live performance unplayable.
I don't know how this compares to your i7, but I dabble around using Pro Tools w/ about 15 or 20 tracks at a time with several effects running in unison, on an M1 Pro processor w/ only 16 gigs RAM, and I typically stay under 20% usage according to Activity Monitor.
I mostly play guitar and don't notice the latency in most effect chains that I use.
Very cool, I've ordered a Raspberry Pi touchscreen this weekend and it should arrive today. I want to make a MIDI sequencer with it, or at least play with the idea. I hope my old Raspberry Pi can work with MIDI (over USB) without too much latency..
Depends on how you want to utilize it. If you want to connect it to a computer as a MIDI device, you'll have to use an RPi 4 or one of the Pi Zeros. The Pis before the 4 can't go into "gadget mode". I bought a new v4 for this exact reason.
The RtMidi library is probably your best bet for getting started. I found the ALSA library to suit my workflow a bit better, but the setup is pretty obtuse. RtMidi is much more user friendly in that regard.
Also, look into implementing the Ableton Link library. It runs over your network, and it's honestly astounding how well it syncs devices.
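If an example helps, here's about the smallest useful RtMidi program in C++ (port 0 and the note message are just placeholders):

    #include "RtMidi.h"
    #include <vector>

    int main() {
        RtMidiOut midiout;                              // throws RtMidiError if no backend
        if (midiout.getPortCount() == 0) return 1;      // no MIDI outputs available
        midiout.openPort(0);                            // open the first output port

        std::vector<unsigned char> msg{0x90, 60, 100};  // note-on, middle C, velocity 100
        midiout.sendMessage(&msg);                      // fire it off
        return 0;
    }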
Ideally I'd sync it with Ableton's clock. I'm using Ableton only for tempo and multitrack recording. Maybe this is achievable without the RPi being a MIDI device? Sending messages over the network, like GuitarEffects which uses WebSockets. My RPi 2 doesn't support gadget mode (I should probably start looking for a RPi 4).
I'm currently using an Elektron Digitakt to sequence my analog gear - it's great - but unfortunately the DT is limited to 8 MIDI tracks with only 1 LFO per MIDI track to automate MIDI CC data. I'd love a Sequentix Cirklon but the waitlist is just too long (3 years atm, arghh).
I wonder if I can hook up something like a Midiface 16x16 (https://miditech.de/en/portfolio/midiface-16x16/) to the RPi. The Midiface is Class Compliant so maybe the RPi can use it natively..(?) I'm a bit worried about performance though.
Thanks for your suggestions, I'm going to look into them!
The Odroid C4, which you can actually get, is an alternative with 4 USB ports and an OTG port that is gadget capable. Unlike older models, its mainline Linux support is decent.
I’d say you can most definitely do that. You can get USB MIDI conversion cables for next to nothing and send an out signal to a MIDI hub or daisy-chain it. MIDI is a seriously slow protocol, so as long as the actual audio processing is happening on other devices and you’re just sending MIDI to them, you’re definitely good to go.
The Pi Zero hardware supports gadget mode, and from there it's a bit more work to get it to enumerate as a MIDI device. The Pi 4 supports this via the USB-C port.
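For anyone searching later, the usual recipe (a sketch; exact steps vary by OS image) is the dwc2 overlay plus the stock g_midi gadget module:

    # /boot/config.txt
    dtoverlay=dwc2

    # after a reboot, load the MIDI gadget:
    sudo modprobe g_midi   # Pi enumerates as a class-compliant USB MIDI device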
I've found the RPi4B to be somewhat awful for low-latency audio usage.
My particular use case is simply playing MP3s read from mmc through an MBox1 on USB.
No matter how much irqbalance, isolcpus, taskset magic, it never gets absolutely perfect. It gets better, but there's always spurious delays exhibited as occasional pops and clicks in the audio output.
I'm hopeful that [0] will improve the situation, but haven't had time to really dig into it let alone build a custom bleeding-edge mainline kernel - which I'm not even sure supports all the Pi4B hardware.
It's asinine that an otherwise idle 4Cx1.8Ghz machine can't even play MP3s on a USB MBox1 flawlessly with zero special effort...
This should really be at the top of the thread, since it is fundamentally a commercial version of the project in TFA (and one that substantially predates it). Note that most of the technology in a MOD is FLOSS (if not all of it).
From my understanding the Line 6 Helix uses two 450MHz SHARC processors, ADSP-21469. Other effect/amp modelers use more or newer of the same family[0].
Can anyone comment about the relative processing power of a RPi vs the market solutions? Is the RPi theoretically good enough that a pedalboard could be completely modelled?
[0] Interestingly, it sounds like SHARC chips were designed to be low cost processors for single use applications in guided artillery shells.
According to Analog's product page [1] for the ADSP-21469, it delivers 2.7 GFLOPS at 450 MHz.
And Wikipedia's page of all the RPi models says that the latest (4B) manages 8 GFLOPS at around 1.8 GHz.
Whether that means the answer to your question is "yes" or "no" is unfortunately a lot harder to Google and/or figure out. I would assume that the SHARC-based devices run on bare metal, whereas most applications for the Raspberry Pi run under Linux, for instance.
Most of the real-time Linux kernel has been ported into mainline. Mainline is perfectly capable of producing stable low-latency audio, as long as you're running on threads with real-time priority.
The real-time kernel provides additional improvements. But it's incredibly difficult to find up-to-date real-time kernels these days.
Also worth mentioning that there are enormous USB audio improvements in kernel version 5.10.0, for performance, stability, and compatibility. As far as I know, there are no real-time kernels available for the Pi 4 with a kernel version greater than 5.10.0, and building one is painfully difficult.
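To illustrate what running "with real-time priority" means in code, a minimal C++ sketch (the priority value 80 is arbitrary, and your user needs an rtprio limit, e.g. via /etc/security/limits.conf, or root; wrapping the process with chrt -f works too):

    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        sched_param sp{};
        sp.sched_priority = 80;  // SCHED_FIFO priorities run 1..99
        int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
        if (rc != 0) {
            std::fprintf(stderr, "SCHED_FIFO failed: %s\n", std::strerror(rc));
            return 1;  // usually EPERM without rtprio rights
        }
        // ... run the audio callback loop on this thread ...
        return 0;
    }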
Generally this type of work is done on DSPs and FPGAs, because they can get much lower latency than a CPU. While this can get latency low enough that nobody will notice, it is still there, and there isn't room for anything else in your signal chain to also have latency and still be unnoticed.
They will still have a CPU on devices like this, but all it does is run the UI, the sound processing is not done on a CPU.
Why would a DSP be able to get lower latency than a modern general purpose processor? DSPs might have been a thing 15, 20, or 30 years ago; but I cannot imagine anything that a DSP would do better than a modern ARM processor.
> The root of it is the Transform function which takes an input signal, performs any kind of transformation, and returns an output. All input and output values should remain in the range [-1, 1] otherwise you'll produce some really gnarly popping and cracking.
So you have to make all your own effects in code? It would be cool if it connected to something like Guitarix (open source) so you could use existing guitar effects. (Disclaimer: I've never used Guitarix, so it might sound shit)
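For a flavor of what the quoted Transform contract implies, here's a hypothetical hard-clip distortion in C++ (the signature is illustrative, not the project's actual API):

    #include <algorithm>
    #include <cstdio>

    // Hypothetical effect: boost the signal, then hard-clip back into [-1, 1]
    // so downstream stages never see out-of-range samples.
    double Transform(double in) {
        const double gain = 4.0;                  // drive amount, made up for the example
        return std::clamp(in * gain, -1.0, 1.0);  // C++17 std::clamp
    }

    int main() {
        std::printf("%f %f\n", Transform(0.1), Transform(0.5));  // 0.4, then clipped to 1.0
        return 0;
    }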
Btw, for anyone who doesn't play guitar, but is interested... Gone are the days of those huge pedal boards and having to buy 30 different pedals. Emulation is getting really, really good. You can buy a multi-effects pedal and use that onstage, or, if you're in the studio, Neural DSP makes software that can emulate basically any sound you're after. It's expensive, but it sounds better than free alternatives like Amplitube.
Nice, I just installed this on a Raspberry Pi 3 with a touch screen I had around. Save for an issue when installing rtaudio (requires libasound2-dev in Debian), it went almost flawlessly. I'm guessing the RPi 3 can't handle realtime audio and graphics, as popping was frequent (although not annoying for the purposes of experimenting). The buttons did nothing, I'm afraid, but the web interface is ace!
I'd love to write about this soon. Kudos to the coder!
Nah, low latency audio is absolutely possible, but not necessarily using PulseAudio. Using Jack for the low latency stuff has been the advice for years, but with the advent of Pipewire you can mix and match without too much difficulty.
A quick skim didn't turn up what audio backend this project is using, but I'm using patchbox with modep on my Pi4 as a bass pedal board and it's pretty much flawless low latency. I do need to add a fan to my pi since it's mounted underneath a pedal board and doesn't get quite enough air though.
I also wonder about the latency of the Behringer interface. I've had one before; the sound quality is pretty good for entry level hi-fi audio, but when I've run the audio into a DAW and back out post-processing there has always been an audible delay.
Granted, this was in windows, but from what I understand there is always going to be some audible processing delay with a USB 2.0 interface.
Wow, what fx are you running -- and does that or could that include convolution reverb?

(Convolution reverb is utterly awesome but more processor intensive than almost anything else)

Edit: and sorry, to be helpful, by low latency, could you mention how many ms that is -- because while I'm very happy for you, and really interested, 15ms is very different to, say, 5ms or lower
My normal chain is a noise gate, followed by a comp, a little emulated tube drive, an SVT amp model, an impulse response modeled cab simulator (of course running an 8x12 SVT cab), a plate reverb, and a limiter. I've got some crazier ones, but as a beginner bassist, that's all I really need. There are literally hundreds of effects to choose from though.
Convolution reverb is an option, but it ended up being a little more than the Pi4 could handle iirc. I didn't tinker with it much though.
Latency is under 10ms, I'd have to go back and check my settings to confirm exactly though.
Just double checked my jack settings; I'm running my interface at 48000 Hz with a buffer size of 128 and 2 periods, for a theoretical latency of 5.33ms. I'm sure USB adds a fraction of a millisecond as well.
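(For anyone checking the math, that's just the usual period formula:)

    #include <cstdio>

    int main() {
        // JACK's theoretical latency: periods * frames / sample rate
        std::printf("%.2f ms\n", 2.0 * 128.0 / 48000.0 * 1000.0);  // prints 5.33
        return 0;
    }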
Thanks! Musically that's a really viable and fun setup
A really helpful data point :)
--
You're doing great! You don't need my help, but I'll just share my own path, because it's fun :)

Yeah, play the bass; if I start talking about computing I'm not helping...

Err, why am I still here?

I switched to BSD and the OSS driver and went from massive-effort ~5ms to zero-effort ~2ms

Then I made a two button controller and pedal->USB, and started programming an arbitrary effects controller: a looper + buttons to switch the pedal to control any fx parameter
I've begun tinkering with GhostBSD lately just to learn more about BSD. Pleasantly surprised to find a lot of familiar Linux audio tools were ported or can be easily made to work. Also happy that my ancient PreSonus AudioBox USB works fine in BSD. So much so that I get fewer random audio glitches than I do under Windows 10.
Pipewire runs neck-and-neck with Jack and Alsa. They get benchmarked pretty regularly, and the difference is +/- nothing. Pulseaudio isn't appropriate for this application.
PulseAudio is an audio server aimed at desktop applications. It has nothing to do with (or offer for) low latency audio work, and would never be used for it. ALSA is the driver layer on Linux, and in latency terms it can go as low as or lower than the audio driver layer on any other platform.
What is he using for an actual audio interface? The RPi has something awful onboard and none of the pictures show anything external. USB would add 0.25ms latency just for being USB, best case.
Nice project, but no way on earth am I taking anything with a breadboard and jumper wires on stage -- except maybe if I plan to make it part of the performance
I'm assuming that the Web UI is just for changing settings.
The low latency would refer to the latency of the audio input (guitar) being processed and producing a sound. Ideally you want no discernible delay between when you hit a string and when a sound is produced.
As for how much delay is considered "acceptable", I'm not going to open that can of worms...