Before finding a way to use DDC on Apple Silicon [1] for Lunar (https://lunar.fyi), I contemplated creating a universal DDC/CI device that can change brightness, contrast, volume, input and colors of a monitor, by either receiving commands through HTTP, or acting as an Adaptive Brightness standalone device.
In my mind, it would have been an HDMI dongle with an ESP32 that gets power through the 5V line of the HDMI port, and which has an ambient light sensor to adapt brightness by itself.
In the end, I found the I2C APIs on M1 so this was not so sorely needed anymore, but given the limitations of the M1 Display Coprocessor, I still think it might be a good idea. I just have no idea where I would start with hardware distribution and mass production; this domain seems so intangible from the software development side.
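If anyone wants to experiment with the idea, the firmware side is surprisingly small. Here's a rough sketch of what I had in mind, assuming ESP-IDF's legacy I2C driver, with a placeholder read_lux() standing in for whatever ambient light sensor the dongle would carry; the DDC/CI framing (address 0x37, "Set VCP Feature" opcode 0x03, VCP code 0x10 for brightness) is my reading of the public MCCS docs, so treat it as a sketch rather than tested firmware:

    // Rough sketch of the imagined dongle firmware (ESP-IDF, legacy I2C driver):
    // map ambient lux to a brightness percentage and push it to the monitor
    // over the HDMI DDC lines, which are just I2C at 100 kHz.
    #include "driver/i2c.h"
    #include "freertos/FreeRTOS.h"
    #include "freertos/task.h"

    #define DDC_ADDR       0x37   // 7-bit DDC/CI address of the monitor
    #define VCP_BRIGHTNESS 0x10   // MCCS "Luminance" code

    // Placeholder for whatever ambient light sensor the dongle would carry.
    static uint16_t read_lux(void) { return 300; }

    // DDC/CI "Set VCP Feature": 0x51, length, opcode 0x03, VCP code, value hi/lo, checksum.
    static esp_err_t ddc_set_vcp(uint8_t vcp, uint16_t value)
    {
        uint8_t msg[7] = { 0x51, 0x84, 0x03, vcp, value >> 8, value & 0xFF, 0x00 };
        uint8_t chk = DDC_ADDR << 1;              // checksum also covers the 0x6E address byte
        for (int i = 0; i < 6; i++) chk ^= msg[i];
        msg[6] = chk;
        return i2c_master_write_to_device(I2C_NUM_0, DDC_ADDR, msg, sizeof(msg),
                                          pdMS_TO_TICKS(100));
    }

    void app_main(void)
    {
        i2c_config_t cfg = {
            .mode = I2C_MODE_MASTER,
            .sda_io_num = 21,                     // wired to the HDMI DDC SDA pin
            .scl_io_num = 22,                     // wired to the HDMI DDC SCL pin
            .master.clk_speed = 100000,           // DDC is specified at 100 kHz
        };
        i2c_param_config(I2C_NUM_0, &cfg);
        i2c_driver_install(I2C_NUM_0, I2C_MODE_MASTER, 0, 0, 0);

        while (1) {
            uint16_t lux = read_lux();
            uint16_t pct = lux > 500 ? 100 : lux / 5;   // crude lux -> percent curve
            ddc_set_vcp(VCP_BRIGHTNESS, pct);
            vTaskDelay(pdMS_TO_TICKS(2000));
        }
    }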
I also started working on something similar a while back, unfortunately got sidetracked with other things before I got many of the features (color adjustment, contrast, etc) implemented.
Both of the monitors on my desk have ESP32 based HDMI dongles plugged into a spare HDMI port. I have 2 USB 3.0 switches that have been retrofitted with ESP32s as well, one for my keyboard and mouse, and one for a hub on my desk. A short press of the button on the switch on my desk will switch my keyboard/mouse between my desktop and a Thunderbolt 4 dock, which will then trigger both of the monitors to switch to the inputs that are also connected to the dock. A long press of the button will do the same, but will also switch the secondary hub over (speakers, mic, webcam, and a few free ports for flash drives and the like).
I was looking for an alternative option to an expensive KVM that would do the job I needed. As a 'poor man's KVM' it works pretty well, though I do hope to flesh out the rest of DDC/CI to get some of these other features implemented as well.
My own implementation. I actually started writing a more full featured DDC library with things like capability scanning on the roadmap. I should have some time next month to potentially get back to it.
A lot of what was available seemed to be Arduino based with Arduino specific calls in the libraries. I'm using standard ESP32 libs/build system and am trying to keep this one generic.
I hope I’m not stepping over the line, but if it’s not already open source, would you be willing to send me a source code tarball? I’d love to prototype a DIY solution with it for the people having trouble with DDC on DisplayLink connected monitors.
My email is on my HN user page in the About section.
I wouldn't mind if there was a bit more there. There are only the beginnings of the library (headers w/ DDC register map, function prototypes, etc), and I was still in the process of figuring out the architecture of the thing when I ran out of time to work on it. That being said, I would be more than happy to get back to you if I am indeed able to get some more of the groundwork done on it next month as my schedule frees up.
I maybe could have been clearer on this, but the DDC dongle I made isn't using this library, just hardcoded i2c calls for input switching. I started working on the library to eventually integrate with that dongle, but I'm still using the 'hacky' firmware on the dongles hooked to my monitors.
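For anyone curious, the general shape of such a hardcoded input switch is tiny. This isn't my actual firmware, just a sketch of the DDC/CI "Set VCP Feature" write for VCP code 0x60 (Input Select); the input values are monitor-specific (0x11 is HDMI 1 on many monitors, but check yours):

    // Sketch of a hardcoded DDC/CI input switch on ESP32 (ESP-IDF legacy I2C driver).
    // Assumes the I2C master driver is already configured for the HDMI DDC pins.
    // Bytes on the wire: 0x6E 0x51 0x84 0x03 0x60 0x00 <input> <checksum>
    #include "driver/i2c.h"
    #include "freertos/FreeRTOS.h"

    void ddc_switch_input(uint8_t input)    // e.g. 0x11 = HDMI 1 on many monitors
    {
        uint8_t msg[7] = { 0x51, 0x84, 0x03, 0x60, 0x00, input, 0x00 };
        uint8_t chk = 0x6E;                 // checksum covers the destination byte too
        for (int i = 0; i < 6; i++) chk ^= msg[i];
        msg[6] = chk;
        i2c_master_write_to_device(I2C_NUM_0, 0x37, msg, sizeof(msg), pdMS_TO_TICKS(100));
    }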
I have a repo for the switch firmware, which has some pics and other details. Keep in mind though that was pretty quick and dirty, and I think there is a fix for the secondary switch that I still need to push. I should hopefully have some free time to get back to it soon though.
The HDMI dongle repo is still private, and I really want to do some cleanup before making it public. But, as for parts, I just used a breakout off amazon, and some cheap perfboard to make a hat containing the ESP32.
Wow, this looks fantastic! No one ever mentions that Apple displays integrate with the brightness controls on Macs. This can be a major quality-of-life feature if you work during the day and night and don't want to keep changing brightness on multiple monitors every day.
However, I have non-Apple displays as well. Just tried https://lunar.fyi/ via USB-C and it's worked great so far!
I see a lot of questions on the internet on “how do I control monitor brightness on macOS?” and a lot of “just use Control+Brightness Up/Down” answers from people not realizing that only Apple vendored monitors support that.
And that’s only possible because Apple implements a proprietary USB protocol inside their monitors for changing brightness, so they can have smooth transitions and be sure it works every time.
I’m pretty sure they’ll never touch DDC/CI for that, and rightfully so as it has very spotty support, especially with these new Thunderbolt hubs/adapters and smart monitors.
I'm using MonitorControl [1] and it works great. Plus, it is really open source, in contrast to Lunar which is "open source" but you can't build it as per its repo's README:
> Lunar can't be built from this repo yet as the source code for the paid features is hidden.
Thank you for the snark, but what you probably don’t know is that MonitorControl for M1 wouldn’t have been possible if I hadn’t open sourced Lunar’s code and my research on that.
All the work on MC after Apple Silicon was done by a previous Lunar user (@waydabber), whom I helped for months on Discord until he got the implementation right, and who then eventually went on to do his own paid implementation called BetterDisplay.
Lunar is my full time job, I can’t have it fully open source and still earn money from it. I would still be working at some corp on gluing web APIs if I hadn’t done this.
I think it's perfectly fine to do non open source work as long as one does not pretend to be open source.
I had a quick look at your website and, in all fairness, it's pretty clear that the product is proprietary and paid (at least it doesn't claim to be "the de facto open source app to ...").
In your comparison table I would suggest to put at least a yellow tick (rather than the green one currently) in the open source line though, because that is slightly misleading.
Does Open Source require the source to be compilable?
The free parts of the app are fully open source with an MIT license, you’re free to modify it and add mocks for the paid parts. It’s gonna be hard, but infinitely easier than having to rewrite such an app from scratch.
For me, the source code is there and open, call it whatever you want, they’re just vague terms anyway. I’m gonna keep calling it open source.
> Does Open Source require the source to be compilable?
No, but, the actual code clearly is compilable (because you've compiled it). So this is only part of the code. So it's partially open source. So it shouldn't have a green tick and you shouldn't call it "open source".
Well yes, the full Lunar Pro app is compilable. I'm only open sourcing Lunar Free which is not compilable without some mocks, not even on my own machine.
But whatever, since all I've had is attacks and flat out stealing from the "open source community", I'll only release closed source binaries from now on so that I'm staying away as much as possible from this drama.
I'm not gaining anything from calling it "open source", it's just what regular people understand more easily than "source available" or "semi-closed source" or however OSS people would like to call it, just for the sake of conforming to an arbitrary Open Source Definition like it's the law.
Lunar started as open source until v4 and Lunar v3 is still in the lunar3 branch, as compilable and usable and open as it ever was. I wanted to contribute my knowledge just how I got help from reading and using part of the ddcctl source code.
And people did profit from that: all current M1 DDC solutions (MonitorControl, DisplayBuddy, m1ddc and BetterDisplay) are based on my code from the DDC.c and DDC.swift files. I've created competitors for the only business that's keeping me afloat financially, so I'm definitely not liking when people attack even this gesture.
I'm probably sounding like an old man shouting at the sky, these words are not a direct reply to your comment anymore so hopefully you won't take it personally.
Don't worry, I was just providing suggestions to be helpful, not demanding or expecting anything (I don't use Apple products, so I won't be one of your customers no matter whether it's OSS or not) and I'm not taking things personally.
People each have their own reasons for wanting open source software. Some like it for knowledge sharing as you mentioned, but other use cases include making it easier to package in Linux distributions (I don't know the macOS equivalent, maybe that one doesn't make sense), or making it possible to customize the software for my own needs (and sharing it with the community in case it's helpful for someone else). For those use cases, it's very useful to have enough code to build the software. Maybe you should suggest that whoever would be interested in this contribute the missing pieces / mocks.
Anyway, I wish you luck with your business, personally it makes me happy to see people build software independently, OSS or not :)
The part that is open source is open source, but the part that is proprietary is not open source. Which is why I think a yellow tick at least would be fairer, because it's only partially open source.
> MonitorControl for M1 wouldn’t have been possible if I hadn’t open sourced Lunar’s code and my research on that.
Thank you for your open source contributions.
> I can’t have it fully open source and still earn money from it.
I understand. No one asked you to make it 100% open source, but it would be great if the open source parts of the app are decoupled and can be independently built.
Actually, I know of several partially open source apps similar to Lunar. For example, Rectangle [1] has both an open source version and a pro (paid) version. However, unlike Lunar, the open source code of Rectangle is buildable. In fact, Lunar might be the first partially open source app I know of that is not buildable.
Someone determined enough could add adaptive brightness to it. You can even use Lunar’s wireless ambient light sensor [1] for that as it’s simply sending lux values through Server Sent Events [2]
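If it helps anyone prototyping: SSE is just a long-lived HTTP response where each event arrives as a "data: ..." line followed by a blank line. A minimal consumer with libcurl looks roughly like this; the URL is a placeholder (check the sensor docs for the real endpoint), and the line parsing is deliberately naive:

    // Minimal Server-Sent Events consumer with libcurl: prints each "data:" line,
    // which for the light sensor would presumably be a lux value.
    // The URL below is a placeholder, not the sensor's real endpoint.
    #include <curl/curl.h>
    #include <stdio.h>
    #include <string.h>

    static size_t on_chunk(char *buf, size_t size, size_t nmemb, void *userdata)
    {
        (void)userdata;
        size_t len = size * nmemb;
        // Naive parsing: copy the chunk, NUL-terminate, and scan for "data:" lines.
        // A robust client would buffer partial lines across chunks.
        char local[4096];
        size_t n = len < sizeof(local) - 1 ? len : sizeof(local) - 1;
        memcpy(local, buf, n);
        local[n] = '\0';
        for (char *line = strtok(local, "\r\n"); line; line = strtok(NULL, "\r\n")) {
            if (strncmp(line, "data:", 5) == 0)
                printf("lux reading:%s\n", line + 5);
        }
        return len;
    }

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;
        curl_easy_setopt(curl, CURLOPT_URL, "http://lightsensor.local/events"); // placeholder
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_chunk);
        curl_easy_perform(curl);   // blocks, streaming events as they arrive
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }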
And for a more advanced feature set, ClickMonitorDDC.
Sadly it is no longer hosted by the company where the developer worked. I mailed them, asking where the dev went or if they could put me in contact with him so he could release the source code. Regretfully I never got a response.
It doesn't have ambient light sensing, but I use Twinkle Tray [1] for controlling multiple monitors at once.
It has a time schedule feature, and my hacky-but-functional setup is to have it reduce the brightness a bit every 30 minutes between 5:30 and 9pm -- on my setup it goes 80% to 20%. I have it reset back to 80% before I start using it in the morning.
Looks like lunar.fyi is just for Mac, right? Any idea about an equivalent on Linux? I want to add an input switch shortcut on my keyboard, but I want it to work on both machines.
This just solved a problem I have that I never knew there was a solution for! It seems really weird to me that every OS doesn't have a native solution for changing external monitor brightness; I thought it was down to the monitor but clearly it isn't. Thanks for the link and thanks for your work on Lunar!
Well it kinda is up to the monitor. Given how many monitors don’t implement the DDC/CI standard correctly and flash and crash and blackout on simple brightness change commands, it would turn into a PR nightmare at the scale of an operating system.
I’m overwhelmed by the daily support I have to provide for Lunar users and I barely have 20k users. I can’t even imagine how bad it would be if 100k users started having their monitor crash or lose color on “such a simple thing as changing brightness”.
Thank you! It was built incrementally over the past year to reach this state, so I thought it might be a tad too long and in-depth :) happy to hear you find it just right!
[OffTopic] Thank you for Lunar.fyi. I tried Lunar and then stumbled on MonitorControl[1]. Went back to Lunar and bought the pro. I'm not aware of the details but Lunar just works with the LG external monitor and syncing with the iMac.
No worries! The same sentiment is what keeps me enthusiastic about programming day after day :)
So computer monitors have support for a communication protocol called Display Data Channel which is normally used by the host (Mac, PC) to get info about supported resolutions, frame rates, signal timing etc.
On top of that, a command interface has been created called MCCS or Monitor Control Command Set [1] which allows changing brightness, volume, input and a ton of other aspects of the monitor, by sending specific bytes through the cable. That cable can be HDMI, DisplayPort, Thunderbolt, VGA, DVI. It doesn’t matter, as long as it has dedicated wires to carry the I2C signal.
I2C is the 2-wire communication protocol used by DDC, and it basically defines things like “a pulse of 5V (volts) of x milliseconds followed by 0V of y milliseconds means the 0 bit. The 1 bit is represented by a pulse of 5V of 2x milliseconds”. It’s a bit more complex than that, also defining TCP-like features with data frames and ACK packets, but you get the idea. It’s something that both devices agree on so that they can send raw bytes using 5 volt pulses.
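To make that concrete, here's roughly what "set brightness to 50%" looks like on the wire, as a tiny C snippet that just builds and prints the DDC/CI "Set VCP Feature" packet. This is my reading of the public MCCS/DDC-CI docs, so double check it before relying on it:

    // Rough sketch of the bytes behind "set brightness to 50%" over DDC/CI.
    // Everything goes to the monitor's DDC/CI address 0x37 (0x6E as an 8-bit write).
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t pkt[] = {
            0x6E,        // destination: 0x37 << 1 (monitor, write)
            0x51,        // source: "host" address used by DDC/CI
            0x84,        // 0x80 | length of the 4 data bytes that follow
            0x03,        // opcode: Set VCP Feature
            0x10,        // VCP code: Luminance (brightness)
            0x00, 0x32,  // value, big-endian: 0x0032 = 50
            0x00,        // checksum placeholder
        };
        uint8_t chk = 0;
        for (int i = 0; i + 1 < (int)sizeof(pkt); i++) chk ^= pkt[i];  // XOR of all preceding bytes
        pkt[sizeof(pkt) - 1] = chk;

        for (size_t i = 0; i < sizeof(pkt); i++) printf("%02X ", pkt[i]);
        printf("\n");   // prints: 6E 51 84 03 10 00 32 9A
        return 0;
    }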
I’ve created Lunar as an adaptive brightness app for macOS after finding out about a little CLI called ddcctl: https://github.com/kfix/ddcctl
That’s where I learned what DDC packets look like, where to place the payload (brightness value between 0 and 100, input ID, etc) and how to write that to the monitor using the macOS I2C APIs.
When Apple Silicon came out, none of that was possible anymore so I had to go looking around kernel assembly and private macOS frameworks for “the Apple Silicon way” of writing data through I2C.
If you’re also curious how I learned that, it’s a very cool domain called “reverse engineering” and I learned it while working as a Malware Researcher at Bitdefender. A bit hard to get started, but so many gems to discover once you know how to open binaries in IDA/Hopper and look around their disassembled code.
These HDMI ports use an internal MCDP29xx chip to convert the HDMI signal to DisplayPort. That chip supports DDC, but the method we use to write the DDC commands to the monitor (IOAVServiceWriteI2C) fails when used through it.
There are other WriteI2C methods specifically for this chip inside the DCPAVFamilyProxy.kext but they’re not available to be used inside an app like Lunar.
Asahi Linux shouldn’t be limited by the API, but the problem might be in the MCDP chip firmware itself. 1 year later, I’m still investigating new methods to fix this but so far I have no results.
Thank you as well! It’s nice to see I’m really solving problems and saving people time and frustration. That’s the only thing I like to create software for.
> For example, let's imagine you invite an external guest for a presentation inside your company. You offer to connect to a video-projector so he can show his slides. This is the perfect opportunity for the guest to hack the video-projector. Next time an employee connects to this projector, his laptop is hacked back.
Is there a single proof of concept of this out there? This seems wildly theoretical to me.
Seems like you have to know:
* the projector/TV model/firmware
* the employee laptop and firmware
* a bug in the projector/TV that would let you insert arbitrary data in a way that will stay around and be sent to other devices that connect to HDMI
* a bug in the laptop that would let you install something malicious with the data could get the projector to send
* your hack won't be detected by software on the laptop or the IDS/whatever on their network
That’s pretty heavy lifting. I mean is any of that reasonable outside of nation-state level attacks?
Check out trmml's Thunderstrike for an example of a working hardware worm.
It's not exactly what you are outlining, but close enough for me to think 'yeah, this does not need a nation-state level actor to do'.
It would be a different exploit chain for HDMI, and I am not enough of an expert in either HDMI or Thunderbolt to claim that it's possible, but I've been surprised by the amount of complexity HDMI offers.
I mean, if part of what you're trying to do is deliver a persistent payload to the target machine then the T1/T2/SIP with the M1/2 will make your life more difficult after the machine reboots, but that is probably the least of your worries trying to chain off an attack like this.
The point is there's no sense in spending man-years building an HDMI hack and implementing it by going onsite when you can bribe an employee instead. It's low likelihood.
I guess the "bug in the laptop" part would be the hardest at the time of auto-updates.
Assuming someone builds a database of vulnerable displays and related exploits, your software can choose the correct exploit because HDMI sends the display model. But if the display is not connected to the network, it would have to exploit an unpatched vulnerability over HDMI in a laptop that connects to it. Not entirely impossible but the time window might be slim if it's not a zero-day.
> Is there a single proof of concept of this out there?
At one point malicious USB charging cables that owned your machine were just the fever dreams of paranoid people thinking the NSA was out to get them. Now you can buy them for less than $200.
Will nation states launch this type of attack? They probably already are.
Do you have to worry about this happening in your meeting rooms? Probably in 3-4 years.
With televisions having all sorts of questionable software installed that wants to phone home, it's probably best not to connect them directly, but rather stream things to the display from either your mobile device or a box (e.g., Apple TV).
Of course if you accidentally do connect it online via a box that does have Internet access, who knows what kind of Lovecraftian horrors will be enabled on the television.
In most cases, streaming from a mobile device doesn't actually stream from the device. Instead, the mobile device is just used to initiate the stream on the TV, which requires its own Internet connection to get the video directly from the source.
Wouldn't be surprised if it muxes ethernet and ARC on the same pins.
And, just because it is "dead" commercially does not mean there are not drivers in your OS watching for any traffic claiming to be ethernet, and eager to respond to it.
> And, just because it is "dead" commercially does not mean there are not drivers in your OS watching for any traffic claiming to be ethernet, and eager to respond to it.
It's not dead in that it never caught on, it's dead in that as far as I'm aware no one ever produced hardware that supported it. So no, there's no concern about it being used as a stealthy attack vector.
Also it's not exactly rocket surgery to see what network hardware your system sees available. All major general purpose OSes make it really easy to see what drivers are loaded, what hardware is attached, etc. Unless you want to propose that GPU and/or SoC vendors are hiding the fact that the hardware even exists until it's activated and rootkitting the OS to prevent the driver from being detected there is absolutely no reason to believe HEC is a threat to anyone anywhere.
Now, there could certainly hypothetically be exploits discovered against the CEC or ARC/eARC implementation but AFAIK ARC is generally implemented in dedicated hardware and CEC is simple enough to present a very limited attack surface.
Yes? That's exactly the attack that was being discussed -- plugging into a device that did that by default/without telling you/without giving the option to disable it, and possibly not even knowing it was happening.
It sounds from other discussions here that maybe that isn't very likely, given that the network support sounds like it was seldom if ever implemented, but it's certainly a valid concern and the type of scenario I've come to expect from adversarial consumer device vendors, who now seem focused less on their products than on how their products can be a conduit for data mining their 'customers'.
No, the HEC pins are probably disconnected in any modern computer/dGPU.
Background: HEC uses 3 pins. Those pins can be used for 3 different protocols: HEC (100 Mbps differential signal slightly modified from the Ethernet standard)(1), ARC (1 Mbps SPDIF signal over a single wire)(2), and eARC (~40 Mbps SPDIF differential signal)(3).
Modern computers with integrated graphics send the HDMI signal from the CPU pins. But in all the examples I could find, those pins are all DisplayPort (4)(5)! This is because DisplayPort supports HDMI video data within its signal. This allows manufacturers to convert that to HDMI with a cheap IC (that is mostly a redriver and level shifter). So they just have a DisplayPort signal exit the CPU/GPU and convert it to HDMI if necessary.
At this point I should note that DisplayPort does not support high-bandwidth auxiliary channels. So higher bandwidth features like HEC would require the converter IC to have a separate data channel to the CPU/GPU. But I couldn't find a DP-to-HDMI IC that supports HEC or eARC. I guess there's just not enough interest.
In theory, some computer/GPU manufacturer could connect the HEC pins to an FPGA that has a high-bandwidth connection to the CPU/GPU to support eARC. But even with that, your hacker would need to reprogram the FPGA to stop being a SPDIF encoder and start being an ethernet bridge. And this hypothetical FPGA can't support just ARC - it would be way too slow to be reprogrammed for Ethernet.
Of course, this leaves out home theater devices. They support eARC, and if their ICs are flexible enough, might be able to speak Ethernet with a lot of programming. But since no devices support HEC, you would have to reprogram devices on both sides of the HDMI cable. And at that point, you could just transmit data over SPDIF.
In short, there's a 99.9% chance that HEC pins are physically disconnected in any computer's HDMI port. Even if they were connected, a hacker wouldn't be able to establish an Ethernet connection unless the device under attack is using a very obscure IC or is ridiculously over-engineered.
Neat idea, but lack of HDCP support could be deal breaker for a lot of applications - I'd even consider listing this limitation nearer the top, it's currently at the end of a long README. HDCP is used in a huge number of HDMI devices.
Is it used outside of playing DVDs and other copy protected content? It seems like the use case presented in the link (malicious presenter connecting a laptop that hacks the projector) would not need HDCP.
> It seems like the use case presented in the link (malicious presenter connecting a laptop that hacks the projector) would not need HDCP.
Unfortunately people often look at the projector and want to do something else with it, such as play video from one of the numerous sites that enforce DRM now, and will quickly become annoyed if they can't.
It can go various different ways from here, but it usually ends up with the firewall being removed one way or another because someone in the chain doesn't agree with or understand the importance of it.
If you're not concerned about the risk of prosecution under the anti-circumvention provisions of the DMCA, you should be able to emulate HDCP 2.2 by connecting the computer to an HDCP 2.2-to-1.4 converter[1], connecting the converter to one of the myriad cheap HDCP 1.4 splitters or audio extractors that "accidentally" strips HDCP as a side effect, connecting the splitter to the firewall, and, finally, connecting the firewall to the monitor.
in similar scenarios, and both worked. Come to think of it, though, while I love my Vertex dearly, it's exactly the sort of highly programmable[2] device this firewall exists to protect against.
[2] Over (out-of-band) USB, RS-232, and, using an optional external adapter, Bluetooth. AFAIK, it's at least not intentionally configurable or reprogrammable over HDMI.
DisplayPort has its own thing called DPCP, but as of DisplayPort 1.1 it literally has HDCP too, which it has been tracking (so e.g. DP 1.3 has HDCP 2.2).
OK but the analog hole on a PowerPoint slide is large enough that it's never going to get plugged. DRM kind of makes sense when there are 60 high-resolution images a second, not when everyone has 5 minutes to jot down a single slide of text.
You could combine with an HDFury device such as https://www.hdfury.com/product/dr-hdmi/ to fix that. Of course the HDFury device would have to be before the HDMI Firewall (closer to the source), so you have to trust the HDFury not to be hackable on its own.
Wouldn't dip switches be better than breaking circuit boards and cutting traces? After all, this can't protect against physically present attackers anyway, since they could just unplug it and plug directly in.
As for a physically present attacker, many projector systems are out of reach and have a cable running through the wall or table to produce a socket for users to plug in to. Or they are mounted inside a cabinet with a similar cable reaching to the outside. If the "firewall" device is present at the (inaccessible) projector, this would not be a problem.
I could see this also becoming useful with Smart TVs. I don't want my TV to have an Ethernet-over-HDMI or other non-video/audio connection to any hardware that I connect to it (e.g., laptops). I'm not sure how much of a concern this is, but I don't trust the security of them, nor do I trust the manufacturers' intent.
Nothing uses Ethernet-over-HDMI. It doesn't actually exist outside of a spec proposal. Even though in theory it'd be really cool. Imagine if your receiver could also be the network switch for all the devices. I'd actually pay money for such a thing. But, yeah, not actually a thing it turns out. So you definitely don't need to worry about your smart TV using it any time soon.
It would be super silly and probably against every spec ever written, but I’d love to be able to plug a digital signage Raspberry Pi into a TV and not need any other cables thanks to Power-over-Ethernet-over-HDMI.
On the flip side, USB-C does all 3, so the future is already here!
The RPi4 should support power and network through its USB-C plug; however, you'll still need another cable for video currently.
the frame.work "SBC" would allow all 3 through the same connector, but, you'll be looking at ~400+ per SBC just to achieve this, but the key thing is, its 100% doable if you get the hardware combination right
The RPi might not quite manage it, but any USB-C Mac and many Windows machines can. Maybe. Just make sure you know whether you're expecting to use Thunderbolt over USB-C or USB-C alt modes, and make sure you're within the power budget of a PoE -> USB-PD adapter. And I suspect you'll need to fudge the monitor power supply somehow too.
The different bits will locally need a number of wires to join them together, but you might manage to run the whole thing off a single ethernet cable.
It would be super convenient to avoid having a mess of cables and a switch behind your TV, or having to use Wi-Fi, which has its limitations, especially these days with higher and higher bitrates.
Oh I see, you're worried that the TV manufacturer could hack your laptop if you connect it via HDMI. This is the point at which I have to wonder: what on earth can HDMI do?? What is the vector? I assume you have to break out of the target driver, which is probably OS/version specific. Then I suppose normal hackery applies, in terms of planting something on the disk. I assume that the HDMI driver does NOT have any legit API for writing to arbitrary locations on disk. I assume.
I know nothing about the HDMI spec, but according to other comments, Ethernet over HDMI is a thing. So in theory perhaps the TV could present itself as a network device and then MITM your traffic. Not sure if that's actually possible though.
In the UK, BBC vans drive around with unsecured WiFi networks to see if any houses have smart TVs connected, and then fine them if they don't have a TV licence.
It was a joke, based on the alleged unlicensed TV detector vans that have roamed for decades and which are nothing more than a scare tactic to get people to pay the TV license.
As of 2016, the BBC still claimed they work, though, even for non-TVs:
> 1.37 The BBC’s final detection and enforcement option is its fleet of detection vans. Where the BBC still suspects that an occupier is watching live television but not paying for a licence, it can send a detection van to check whether this is the case. TVL detection vans can identify viewing on a non‐TV device in the same way that they can detect viewing on a television set. BBC staff were able to demonstrate this to my staff in controlled conditions sufficient for us to be confident that they could detect viewing on a range of non‐TV devices.
"TV Detection" is actually just a civilian use of Radiation Intelligence, the kind of RF emanation that the USA has the entire TEMPEST hardening requirements https://en.wikipedia.org/wiki/Tempest_(codename) in order to prevent nation state attackers from being able to snoop data from their electronic equipment. This is a very real security principle and plenty of demonstrations out there to show how much information can be leaked from unshielded systems. You can check out gr-tempest which uses modern software defined radio hardware https://github.com/git-artes/gr-tempest. You can see pretty good demo of it here https://old.reddit.com/r/RTLSDR/comments/q59ofn/i_was_finall...
The basic truth is that over time it got harder and harder to build "simple" detectors to work out if people were using their TVs to watch the BBC (and this is the tricky part; a valid argument is "I don't watch the BBC", so they need to detect BBC channels being displayed on the TV and not other channels), and so it gradually became a less and less directly useful tool for the license enforcement teams to use. It has sort of transformed from a genuine, relatively accurate tool that doesn't need too much equipment into a sort of mythical boogeyman that gets used to scare people into paying for the license, potentially backed up by cutting-edge signals intelligence type equipment to occasionally prove it can be done and maintain the story. The Wikipedia article is actually pretty good for explaining how the older detection mechanisms worked: https://en.wikipedia.org/wiki/TV_detector_van#Detection_tech...
Thanks for catching that! Not living in the UK, my familiarity is mostly with the topic as an example of civilian ELINT and the modern “anti-myth” that this stuff never worked/existed and was a scam by the government since the 1950s. It’s past the edit window so I can’t fix that up in my comment, unfortunately.
There was never an operational TV detector van (although there are dummy vans deployed). It was not an implementation of any RF technology. It was purely a PR campaign to improve compliance - obviously an effective one given your response.
I really can't agree with that view. While the modern versions are (and I did try to acknowledge this with my point about it being mostly a scare tactic now) extremely implausible, there exists ample evidence about how the old ones, particularly the first few generations, worked, all of which is rock-solid electronic/radio technology theory.
The Wikipedia article I linked to references three separate old government documents over the span of nearly 20 years, each detailing the method used for the first, second and third generations of TV detection equipment.
I'm not going to bat for the notion that they have vans roaming the streets today with some sort of magically cutting-edge NSA/CIA/GCHQ grade ELINT suite packed into a van that can sniff the digital signals leaking from the wiring of a flatscreen TV and run it through some sort of BBC-only content ID while compensating for whatever distortions would obviously be introduced (and can be seen in the demo video in the Reddit thread I linked to)... this notion is basically absurd; it's clearly a scare tactic. No one is practically using the sort of technique documented by this 2013 research paper (also linked from the Wikipedia article, and showing with attached pictures that you can still theoretically do this sort of thing to a "modern" TV) https://www.cl.cam.ac.uk/~mgk25/temc2013-tv.pdf to "detect" people watching the BBC on flatscreen TVs bought in the last 2 decades.
What's not absurd is that this clearly used to be pretty easy to do: the circuitry for the first-gen detector could be built by any reasonably confident radio equipment person and definitely would work to detect the sort of TVs and TV broadcast technology used back in the early 50s. You can see it get more complex over time in each of the linked documents, and obviously as it gets more complex it gets more expensive, so you would probably see fewer of them built and operated... if they had a version that could actually do it in the 90s or early 00s, it probably cost them millions per van, so why would they ever risk it leaving the depot on the off chance it gets T-boned in an accident? This is classic mythos progression stuff, where the slightly extraordinary gets told and retold with each version seeming more implausible, turning ordinary events into mythic tales of heroes larger than life battling the gods... Except in this case it's progressively more complicated and implausible-sounding technology transforming into something genuinely impossible for them to pay for with their budget, transforming into a lie, into a myth, into a complete fabrication and conspiracy.
I agree that a myth has progressively grown, but maybe not the same one!
I’m not questioning the feasibility of the concept, but it is undeniable that no prosecution has ever arisen from evidence gathered by a TV detection van and the BBC refuses FOI requests for any details of investigations instigated by a detection. There were no detections.
The vans are and always have been a hollow deterrent.
Amusingly enough, I originally opened with a half paragraph about how the lack of prosecutions shouldn’t be viewed as evidence of anything about the existence of the TV detector vans. But I cut it while drafting. You’re completely right about the vans never being a source of evidence for a prosecution.
The complete lack of direct evidence tying a van to a prosecution reminds me of the way the FBI are now treating Stingray cell site hijack tools… But it likely comes down to a combination of the enforcement agency’s limited powers and inherent ambiguities that couldn’t be eliminated from the technology. If they couldn’t prove that the signal wasn’t coming from a TV in the neighbouring row house, with perhaps a meter of separation and mirrored layouts placing rooms like master bedrooms and lounge rooms commonly back to back on the adjoining wall, and they lacked the power to force someone to let them in, they effectively had their hands tied… but I can definitely see how it would be useful as a tool to quickly cull down lists of hundreds of potential unlicensed premises and to provide the sort of inadmissible hearsay that would provide internal evidence/justification for investigations using other means that could then actually be used for prosecution.
I suppose it was a hollow threat in the sense that they could never just lock you up because some detector van snooped you out driving down the street. But in the sense that it was likely an efficient tool to help the agents doing the enforcement work over a large area, the side effect of selling it to the public was probably a bit of a clever PR scam pulled by the enforcement agency, who needed all the help they could get since, famously, you didn’t have to let them in, which sort of cuts off a lot of ways for them to prove/investigate anything.
It was a hollow threat in that the vans deployed could not and did not make detections. There is no evidence that even a single operational van made a single detection. All known prosecutions and investigations were the result of human inspectors visiting residences without TV licenses. There’s no need for supposition or mental gymnastics about their secretive technological nature.
>For example, let's imagine you invite an external guest for a presentation inside your company. You offer to connect to a video-projector so he can show his slides. This is the perfect opportunity for the guest to hack the video-projector. Next time an employee connects to this projector, his laptop is hacked back. And voila, the innocent guest managed to infiltrate your company network, and can exfiltrate confidential information.
This seems like a very unlikely attack vector. For one, you have to identify the projector used, develop an exploit for the projector (assuming it's even vulnerable), and also develop an exploit for Windows/Mac/Linux that allows the payload to spread. This seems like a lot of trouble to go through compared to visiting the office, "accidentally" leaving a USB drive/dongle, and hoping someone ends up using it. Bonus: USB dongles allow you to spoof input devices, so you don't even need to develop an exploit.
Defense in depth - after you're done gluing USB and ethernet ports closed, this might be something to consider. I agree it's unlikely but if it's a low cost mitigation then why not? And if you're at a shop that doesn't do anything about open physical ports in the office then yes, this is way down on your risk assessment checklist.
It's good education though. Much like there's danger in plugging your phone into random USB chargers, there's danger in plugging your PC into random HDMI cables. I didn't know that.
It would be nice if there were a way it could preserve the EDID information without needing to manually clone it. It looks like this device is good for protecting the display, but I'd be more interested in a device that can protect my laptop when I travel.
Unless you are defending from the NSA or something like that, I don't think anyone has the resources to do that kind of attack, even if in theory it's possible.
Also, if you have that level of paranoia, you probably don't invite people to present in your office.
Yes, leaving around a USB drive would be easier; it would also be easier to attach something to the first network port you can find, or attack some weak target in the network (for example, ask to use the printer to print something, and install something on it; printers are super vulnerable and the firmware never gets updated, also because the company that manages the printers is usually an external company with little knowledge about IT in general).
You might be able to get a persistent hack by attacking the HDMI controller chip directly instead of having to develop one specific to each projector; that's a more generic component. That said, this does feel a bit like a very hypothetical attack, because you'd have to package both an exploit for the projector and one for the target.
If your organization often gets outsiders in to present, it could slurp off the interesting contents of anything plugged in. Or, if you can get access to the projectors (or just the HDMI cables) at e.g. DEFCON.
For certain selected targets, it might install something for later. Or maybe just everybody, and you only use the one on the target's machine, and ignore the rest.
Note, you don't need to hack the projector. You just interpose between the computer and the projector, just like this thing does.
The readme says there is also an additional device that can be used for copying the EEPROM, instead of doing it via the computer. It would be possible to integrate the copier in the firewall dongle, but that would mean having a microcontroller in each device that is used just once, driving the unit cost and the size up.
The firewall dongle as it is just holds an EEPROM, ancillary components, and the video passthrough traces.
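For reference, the "via the computer" route is just a read from I2C address 0x50 on the display connector, which is where the EDID EEPROM answers. Here's a minimal sketch using Linux's i2c-dev interface; the bus number is an example and you'd replace it with whatever your HDMI port shows up as:

    // Minimal EDID dump over Linux i2c-dev: the EDID EEPROM answers at 0x50 on
    // the display connector's I2C bus. The bus number below is an example;
    // check which /dev/i2c-* belongs to your HDMI port first (e.g. with i2cdetect -l).
    #include <fcntl.h>
    #include <linux/i2c-dev.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/i2c-3", O_RDWR);           // example bus number
        if (fd < 0) { perror("open"); return 1; }
        if (ioctl(fd, I2C_SLAVE, 0x50) < 0) { perror("ioctl"); return 1; }

        unsigned char offset = 0x00;                   // start reading at byte 0
        if (write(fd, &offset, 1) != 1) { perror("write"); return 1; }

        unsigned char edid[256];                       // base block + one extension block
        if (read(fd, edid, sizeof(edid)) != (ssize_t)sizeof(edid)) { perror("read"); return 1; }

        for (int i = 0; i < 256; i++)
            printf("%02x%c", edid[i], (i % 16 == 15) ? '\n' : ' ');
        close(fd);
        return 0;
    }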
Oh, good. Glad to see this getting some attention.
Malicious HDMI (and VGA, and DisplayPort) devices can feed bogus i2c data into the video drivers, which are probably never security audited. With VCP / DDC-CI, the attack surface grows.
You can in theory hijack the projector/monitor, okay. But the chances of reverse-hijacking are really REALLY slim - virtually all HDMI devices employ a retimer/redriver chip in front of the output, which means the only way "in" would be to exploit the CEC stack. HEC (Ethernet over HDMI) support is extremely rare.
> But the chances of reverse-hijacking are really REALLY slim - virtually all HDMI devices employ a retimer/redriver chip in front of the output
So what? The retimer chip doesn't touch the i2c lines, or even if it provides some ESD protection and buffering, it doesn't alter their data content. Software running in the PC is still talking to software running in the monitor's scaler chip.
You can play with i2c-tools under linux to verify this. I've been using HDMI's i2c port to talk to the i2c EEPROMs on SFP transceivers: https://i.imgur.com/T7cY01m.png
The EDID i2c eeprom is 256 bytes in size. Even if a malicious laptop overwrites the content, there is utterly no way that a piece of malware can embed itself in there that can cause an adverse reaction on a computer that attaches later on.
Nearly everything is DDC-CI now, where the scaler chip _pretends_ to be an EEPROM in the initial negotiations but there's a few magic "memory locations" that change each time you "read" or "write" to them, and a full bidirectional protocol layered on top of that. This allows arbitrary sized message transport, reflashing of the monitor firmware, and all sorts of goodies. And baddies.
Let's start with the theory: if you have access to the television, you should be able to install your own custom firmware.
Then there's the HDMI constraint; HDMI is a bidirectional interface by design, and while it looks nice, it's built for packet streams rather than data blocks.
From a user's perspective, this is a dead end; however, for a hacker, there must be a way to wrap data blocks into packet streams. Especially since the HDMI port/cable does transfer data from PC to TV, such as information about the movie you're watching, time left, and so on, in addition to commands and selections from TV to PC; this is known as HDMI-CEC (Consumer Electronics Control). There is also HDMI-HEC (HDMI Ethernet Channel), which provides internet sharing for devices connected with the HDMI cable.
So yes, it's a possible threat vector as is everything nowadays. Does it fit inside your cybersecurity threat scenario/risk scheme? Up to you to decide.
I see comments here expressing skepticism that any but a nation state threat actor would try to pull off an exploit taking advantage of this, but considering the possible targets, that is exactly who you might be looking to defend against. The clearest target, to me, is air-gapped systems that also disable USB ports, but still allow workers to use external monitors. That is effectively any classified computing environment, and clearly the threat actors targeting state secrets are overwhelmingly going to be other states.
I was of the mind of the more skeptical commenters, but this is a great point! In the most classified environments, an esoteric vulnerability like this may be the only nonhuman thing left to exploit.
HEC isn't a concern because it's disconnected in any computer, and unsupported by home theater devices that support eARC. Here's my comment explaining more: https://news.ycombinator.com/item?id=31830551
I guess I just wonder, why give any extra trust to a device just because it’s in your company network? Why not have specific authorizations for specific operations? A device on your company network could be pwned by anyone.
Tip for solder jumpers: the ones in this PCB would allow you to use a zero ohm resistor easily, but it would actually be quite difficult to get solder alone to bridge the 2 pads due to the solder mask in between. I’ve had good luck omitting (or scraping off) the solder mask in that area, and recently came across a product with a better design. This product, an upgrade for the original NES[1], has very nice solder jumpers.
I suppose the more interesting question is whether the device allows a researcher to inject I2C/DDC/etc. packets in order to simulate a hostile display and test the drivers.
Seems odd to focus on HDMI, the DDC/i2c mechanism described has been in every display connector from VGA on forward. DisplayPort did away with dedicated pins but the mechanism is still there encapsulated in the AUX channel.
FYI: It seems strange to use branches to differentiate between entirely different projects. Normally different projects have different repositories; a branch is used to make a change in isolation and is then merged back into master.
This is kind of a neat hack, if you want to use the same build setup for all of these projects: just check out whichever project you're working on. Shared stuff can go into a separate repo. You could still have feature branches, it's just not master that they're merged into.
This is wishful thinking, but I really hope the HDMI protocol is replaced by something more reliable one day. I’ve never had this many problems with the simple task of having device A connect to device B over a short physical cable.
Having to flash firmware on a TV just so that it can successfully connect to a DVR is peak techno dystopia.
"Andy Davis
HDMI - Hacking Displays Made Interesting
Picture this scene, which happens thousands of times every day all around the world: Someone walks into a meeting room, sees a video cable and plugs it into their laptop. The other end of the cable is out of sight – it just disappears through a hole in the table. What is it connected to? Presumably the video projector bolted to the ceiling, but can it be trusted to just display their PowerPoint presentation?...
This presentation discusses the security of video drivers which interpret and process data supplied to them by external displays, projectors and KVM switches. It covers all the main video standards, including VGA, DVI, HDMI and DisplayPort. It also details the construction of a hardware-based EDID fuzzer using an Arduino Microcontroller and a discussion of some of its findings."
I don't. The ads and tracking enabled by digital offset the cost of manufacturing; sans that, products will cost more, and price is a major factor in purchasing decisions.
It would not be surprising if spooks rely mainly on plugging into HDMI to suborn machines now. Of course this is no help with that unless spooks have substituted a doctored monitor. Or cable. The doctored presentation-room projector would be a plausible place for that.
I wonder which of the things that happen when an HDMI cable is plugged in are negotiated in hardware or laptop firmware, before the host system driver gets to talk to it. It's short odds a binary blob for that chip is loaded at startup. So, turning off services that respond at the OS level likely would not suffice. And, does code in that binary blob have full DMA access to the main memory bus, and PCI devices?
Anyway this gadget should protect against the projector.
A TV taking a terrestrial signal, connected via an HDMI cable to a set-top box that is connected to the internet, is just another attack vector. Satellite signals are harder, but any SDR can be used on a TV using terrestrial signals.
[1] https://alinpanaitiu.com/blog/journey-to-ddc-on-m1-macs/