
I have high-end Sennheisers and AirPods. The only time I pull out my Sennheisers is when I'm doing recording or live sampling. Bluetooth is an open standard; it's not overly complicated, it's just new to you. This technology was already being used in live performances with no one complaining.



> it's not overly complicated, it's just new to you

Download the Bluetooth specification and try to read it.

How is that not "overly complicated" compared to three copper wires?


This. Bluetooth is 24 years old, but it's still a crapshoot whether two brand-new devices will talk to each other.


Specifications are meant to be referenced in conjunction with an implementation. There are many specifications I could point to that would overwhelm most engineers, yet they use those standards just fine.


Bluetooth is definitely not one of those "uses those standards just fine" cases... if a user has just a dozen peripherals, even a 99% success rate starts failing on you pretty often.
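Back-of-the-envelope (toy Python, assuming each device works independently with the same odds, which is a simplification):

    # Illustrative only: chance that at least one of n peripherals
    # acts up, assuming independent per-device success rates.
    def any_failure(n_devices: int, success_rate: float) -> float:
        return 1 - success_rate ** n_devices

    print(any_failure(12, 0.99))  # ~0.11, i.e. ~11% chance of at least one dud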

On a more personal note, I have to keep a mental map of which Bluetooth devices will work with which Bluetooth receivers... the tech is not NEARLY as universal as people like to think it is. Generally speaking, devices of the same vintage (or make) will talk to each other, but it is a crapshoot as to whether my v2 headset will talk to a v4 USB receiver, or be seen by a new laptop/iPad/whatever.


The point I'm making is that it is a very complex standard whose use-case goes far beyond carrying audio from a device on a person to the headphones in his/her ears.

To give another example, GSM is also a very complicated standard (or more precisely, set of standards) but no one would really advocate for all landlines in buildings to be replaced with non-mobile GSM phones attached to the wall.


ISDN and SIP are complicated standards, but telecom engineers do advocate, whenever possible, that their customers drop POTS in favour of VoIP softphone service run over the same lines.

VoIP-over-Ethernet solutions are far easier to wire a large building for (it's just the existing Ethernet drops, coming from the same switches the building already has, now carrying an additional VLAN); it removes many potential sources of interference; it increases voice quality; and it's just plain easier to deliver.

All that despite being, in pretty much every way, "more complicated."


"Three copper wires" understates the complexity of physical audio standards a lot. Most modern audio components can (still) deliver/accept SPDIF, for example, over the same wires that they deliver/accept analog audio. Many modern audio reproduction components can recognize TRRRS control signals. The headphone-jack audio path is a loose de-facto standard with a mess of variably-supported extensions.

Also, especially in consumer hi-fi systems, those "three copper wires" found on phono connectors can create ground loops, which is why the favoured standard for audio for most of the reproduction chain isn't the plain-jane analog audio path, but rather either digital, or analog-over-optical, or just analog with opto-isolation (and thus separate reference domains, requiring separately-powered active components); and why the audio production chain uses balanced analog. (And nobody has ever stuck a mini-XLR connector onto a smartphone, so they're not what we're talking about by "losing the audio jack" here. That'd be cool, though.)

To put all that another way: the classical analog-audio model, where you have a single current running all the way from a high-impedance microphone through an amp to a loudspeaker, is just not how modern audio chains look. There's a lot more active components, a lot more digital logic, a lot more circuitry. It's complex, just like Bluetooth is. (Which is not to say Bluetooth is good or better somehow; just that your smartphone's headphone connector doesn't win when measured on the axis of "simplicity of implementing the signalling standard interoperably with all devices the user would expect to plug it into." There are a lot of active components in a smartphone's headphone-jack audio path—just as many as in its Bluetooth audio path!)

Also, regardless of Bluetooth, headphones themselves have been gaining features requiring active components, like noise-cancelling, for years now. For such headphones, making them into wireless Bluetooth headphones comes at a cost of just one additional radio chip, connected to what was already a pretty complex on-board microcontroller. (Hell, many modern headphones have firmware. Wireless or not.)


Ground loops have never been a problem for headphones. And SPDIF over optical (presumably you mean ADAT) is considered vintage now.

The favoured standard for professional digital audio these days is some variant of Thunderbolt or USB, with FireWire for the old-timers.

At the high end you'll see Audio-over-Ethernet (e.g. Dante), although it's not used much for low-end amateur or semi-pro production because it's not a cheap technology, and it only starts to make sense when you have tens or hundreds of channels going from one place to another - e.g. from a stage box to a front-of-house mixer.

https://www.audinate.com/node/128
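For scale, some rough payload math (toy figures, raw sample data only, ignoring packetization and clocking overhead) on why it only starts to make sense at high channel counts:

    # Illustrative only: raw audio payload for n channels at 48 kHz / 24-bit.
    def payload_mbps(channels: int, sample_rate: int = 48_000, bits: int = 24) -> float:
        return channels * sample_rate * bits / 1e6

    print(payload_mbps(2))    # ~2.3 Mbit/s: trivial, not worth the gear
    print(payload_mbps(128))  # ~147 Mbit/s: a whole stage box on one cat5e run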

Analog copper wire between microphones, guitars, synthesizers, FX pedals, and digital interfaces is still absolutely standard equipment in studios at every level.

Hum is avoided by using balanced three-wire connections, which are almost as old a technology as mini-jacks.
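A minimal numeric sketch of why balanced connections kill hum: interference hits both signal legs equally, and the receiving end subtracts them (function name is just for illustration):

    # Why balanced (three-wire) connections reject hum: the noise is
    # common to both legs, so differencing cancels it.
    def balanced_receive(signal: float, hum: float) -> float:
        hot = signal + hum    # signal plus induced noise
        cold = -signal + hum  # inverted signal plus the same noise
        return hot - cold     # = 2 * signal; the hum cancels

    print(balanced_receive(0.5, 0.3))  # 1.0 -> hum gone, signal doubled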

Analog has zero latency compared to digital; it's at least as reliable, and it "just works".
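The latency digital adds comes mostly from buffering, which a quick sketch makes concrete (typical sample rate and buffer sizes assumed):

    # Each buffer in a digital chain adds buffer_size / sample_rate
    # of one-way delay.
    def buffer_latency_ms(buffer_samples: int, sample_rate: int = 48_000) -> float:
        return 1000 * buffer_samples / sample_rate

    print(buffer_latency_ms(64))    # ~1.3 ms: a tight ASIO-style setting
    print(buffer_latency_ms(1024))  # ~21.3 ms: a lazy default, audible live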

Bluetooth is simply not a professional audio technology. It's not enough of a standard, and not reliable enough even when it's implemented correctly.


> Ground loops have never been a problem for headphones.

No, but they're a problem for consumer hi-fi systems, which use the same sort of unbalanced analog connections over RCA phono jacks.

> And SPDIF over optical (presumably you mean ADAT) is considered vintage now.

Yes, but it still exists, and you still have to consider its existence in the design of a modern component that could be hooked up as part of an audio reproduction chain, especially a component of a consumer hi-fi system.

My point was that the signalling standard which modern devices connected via a headphone jack have to obey if they want to be able to talk to "anything else that has a headphone jack", is complex, in the same way that e.g. USB-C is complex. Everything "needs" active circuitry just in case the other side isn't playing/expecting plain analog audio over the jack. ("Needs" in quotes because you wouldn't expect a $10 pair of headphones to work if you plug them into your CD player's SPDIF-out port, but you generally would expect the same of a pair of Bluetooth headphones that have their own internal DAC.)

> Bluetooth is simply not a professional audio technology. It's not enough of a standard, and not reliable enough even when it's implemented correctly.

Sorry, didn't mean to imply that Bluetooth was good, or that it was suitable for professional usage.

What I meant to assert, specifically, was that when comparing the signalling standard of the Bluetooth audio profile to the de-facto signalling standard required on e.g. a smartphone's audio connector, the Bluetooth standard requires around the same number of active components at the [antenna] connector terminal as the headphone-jack de-facto standard requires at its physical connector terminal, to achieve the same level of user-expected interoperability. (And something like Audio-over-Ethernet is even more complex in terms of the number of active components required!)

So, while Bluetooth might not be good, I wouldn't describe the thing being mourned here (the de-facto signalling standard backing the 1/8-inch headphone jack) as being very good either. It's also complex, it's also "not enough of a standard" (because it is extended with things like SPDIF and TRRS connectors), and it's also frail, coming with a host of problems of its own, like signal degradation over short distances and wire frailty, when implemented at the wire gauges consumers [rather than professionals] gravitate toward.


In the non-professional underground party scene, it's still all analog quarter-inch hookups.

Do they make mixers now with channel strips of SPDIF connectors? I don't pay that much attention to audio hardware these days, but I don't recall ever seeing a bank of optical connectors on the back of anything.


Optical SPDIF was never meant for professional applications. In fact, the use of optical cable for that is completely unnecessary snake oil, there to make it seem "more digital" or something like that.

Professional applications usually carry the same logical data stream over RS-422-style differential pairs (this is AES3, a.k.a. AES/EBU), with cat5 for fixed installations and plain balanced mic cables used otherwise.

For consumer use there is also a third variant with a TTL-level signal over notionally 75R coaxial cable (i.e. the same sort of cable used for line-level audio with RCA connectors), which seems to be common on cheap whitebox AV tech and almost unheard of on anything brand-name.
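All three transports carry essentially the same biphase-mark-coded bitstream; a toy encoder (illustrative sketch only, not a full IEC 60958 framer) shows how simple the line code itself is:

    # Toy sketch of biphase-mark coding (BMC), the line code the
    # SPDIF/AES3 family shares: the level flips at every bit boundary,
    # and flips again mid-bit to encode a 1.
    def bmc_encode(bits, level=0):
        cells = []
        for bit in bits:
            level ^= 1      # transition at the start of every bit
            cells.append(level)
            if bit:
                level ^= 1  # extra mid-bit transition encodes a 1
            cells.append(level)
        return cells

    print(bmc_encode([1, 0, 1, 1]))  # [1, 0, 1, 1, 0, 1, 0, 1]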

Another thing at play is that most professional audio users outside of the broadcasting industry have exactly zero use case for a standardized digital audio interface. In a typical PA application you will either find some semi-proprietary "digital snake" or no external digital interfaces at all (a particularly hilarious example being Pioneer's DJ mixers and players, which use gigabit Ethernet for file sharing and automation and then pass the audio over two RCA jacks, not even as balanced XLR). In a modern recording studio you just record almost directly into the computer and do everything in software.


I would beg to differ; the live music scene has a strong interest in standardization. Most of my sound-reinforcement friends will wax poetic on the virtues of proper 1/4" coaxial cabling, and a lot of these folks would probably shit bricks at the thought of having to replace their hardware.

Funny you mention Pioneer; I always found their use of RJ45 for comms to be kinda funny/ingenious. (Note that using RJ45 for other purposes seems to be a pretty common thing in the embedded world, from my own observations.) If you're talking CDJs, they're usually hooked straight into a DJ-style mixer before the signal touches anything else; I thought balanced/unbalanced was a consideration more for microphones and guitar amps?


> This technology was already being used in live performances with no one complaining.

nah, in live situations it's 99% FM radio used for transmission


Yeah, BT is literally impossible to use live. Every ms counts, and here we're on the order of 50-150 ms. Even native drivers often aren't good enough for wired equipment, forcing users to upgrade to ASIO drivers! I know I couldn't use my guitar interface or keyboard without them during my Windows days, though the situation is admittedly much improved on the macOS side.
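One way to build intuition for those numbers (a rough sketch, taking the speed of sound in air as roughly 343 m/s) is to convert latency into equivalent distance:

    # Rule of thumb: convert latency into the distance sound travels
    # in air, i.e. how far away your amp "feels".
    SPEED_OF_SOUND_M_S = 343.0

    def latency_as_meters(latency_ms: float) -> float:
        return SPEED_OF_SOUND_M_S * latency_ms / 1000

    print(latency_as_meters(10))   # ~3.4 m: like standing across the stage
    print(latency_as_meters(150))  # ~51 m: like playing from the parking lot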


Another thing is that the primary things you want to do wirelessly in a live situation are mics and maybe instruments. That implies a relatively large dynamic range on input (certainly more than 16-bit samples), which is hard to do digitally in a sane way.
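The usual rule of thumb is that an ideal N-bit converter gives about 6.02 * N + 1.76 dB of dynamic range; a quick sketch of what that means here:

    # Ideal dynamic range of an N-bit converter: DR = 6.02 * N + 1.76 dB.
    def dynamic_range_db(bits: int) -> float:
        return 6.02 * bits + 1.76

    print(dynamic_range_db(16))  # ~98 dB: tight for an un-gain-staged live mic
    print(dynamic_range_db(24))  # ~146 dB: why pro converters run at 24 bits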


Bluetooth's implementation complexity contributes to the high cost of its peripherals.

Also (at least for Sennheiser), the non-Bluetooth (2.4 GHz) wireless headphones are pretty stellar. Personally, I prefer that approach over Bluetooth.


I bought a decent set of Bluetooth Anker earbuds for £14. Better sound than the last set of wired earbuds that I got for £20.

I still use an iPhone 6S, but I’m not going back to wired headphones.


Can you cite examples of people using Bluetooth in live performances? 20 ms is intolerable for a band playing live together. Any hiccups in signal for a major performance would also obviously be a problem.


"not overly complicated" seems like the security aspect is since most companies glance over it.



