I have been using ChatGPT to control my light colours for about a month now. It's too tedious to set the colours and temperatures of our lights properly by hand, and too complex to consider all the factors: activity, weather, music, time of day and season.
ChatGPT is now our personal lighting DJ, giving us dynamic and interesting light combinations that respect our circadian rhythm.
Here's my prompt - the output of which feeds Home Assistant:
Set the hue for my home's lights using the HSL/HSB scale from 0-360 by providing a primary and complementary colour which considers the current situation. The HSL color spectrum ranges from 0 (red), 120 (green), to 240 (blue) and back to 360 (red). Lower values (0-60) represent warmer colors, while higher values (180-240) represent cooler colors. Middle values (60-180) are neutral.
Consider these factors in setting the primary hue (in order of importance):
1. Preferences throughout the day:
- When about to wake: Reds, oranges or hot pinks
- Approaching bedtime: Hot pinks or reds
- During worktime: Blues, greens or yellows
- Other times: Greens, yellows or oranges
2. Current activity: Bedtime
3. Sleep schedule: Bedtime 23:00, Wake-up time 07:00
4. Date & time: Sunday May 21, 05:40
5. Current primary hue: 10
6. Current complementary hue: 190
7. Weather: 13°C, wind speed 9 km/h, autumn
Respond in this format and provide a reason in <250 characters:
{"primary_hue": PRIMARY_HUE, "complementary_hue": COMPLEMENTARY_HUE, "reason": REASON}
The output looks like this:
{"primary_hue": 10, "complementary_hue": 190, "reason": "Approaching bedtime and early hours of morning, so a warm and calming hue is needed. Complementary hue adjusted slightly to 195 to maintain balance."}
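If you want to try the same loop outside Node-RED, a minimal Python sketch could look like this (the function name, model and error handling are my own choices, not the author's flow; note the model's reply isn't always strict JSON):

    import json
    import openai  # pip install openai (pre-1.0 API style)

    openai.api_key = "sk-..."  # placeholder

    def ask_for_hues(prompt: str) -> dict:
        # Send the rendered prompt and parse the JSON-ish reply
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        text = resp.choices[0].message.content
        # The model sometimes returns loosely formatted JSON, so a more
        # lenient parser or a retry may be needed in practice.
        return json.loads(text)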
FWIW, I just tried the same prompt with Open Assistant:
> {"primary_hue": 345, "complementary_hue": 55, "reason": "For bedtime, warm orange tones at 345 create relaxation while paired with cool green at 55 helps prepare your body for sleep."}
You can absolutely replace this with something self-hosted; this was using the `oasst-sft-6-llama-30b` model on https://open-assistant.io
For stuff like this I like to make it write out a lot of "reasoning" before the final output that I'll parse.
Like so:
Write three thoughts on how the primary and complementary hue should change and what value they should change to, along with your reasoning.
Pick one, summarize the reason for it in less than 50 words.
Then write FINAL CHOICE: followed by output that looks like this {"primary_hue": PRIMARY_HUE, "complementary_hue": COMPLEMENTARY_HUE, "reason": REASON}
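The parsing side can then be as simple as splitting on the marker. A sketch (the function name and demo string are mine):

    import json
    import re

    def parse_final_choice(completion: str) -> dict:
        # Keep only what follows the last "FINAL CHOICE:" marker
        _, _, tail = completion.rpartition("FINAL CHOICE:")
        # Grab the first {...} blob in the tail
        match = re.search(r"\{.*?\}", tail, re.DOTALL)
        if match is None:
            raise ValueError("no FINAL CHOICE JSON found")
        return json.loads(match.group(0))

    demo = """Thought 1: ...
    Thought 2: ...
    FINAL CHOICE: {"primary_hue": 345, "complementary_hue": 55, "reason": "bedtime"}"""
    print(parse_final_choice(demo))  # {'primary_hue': 345, ...}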
For what it's worth, I've been using something similar in my prompts and felt the completions did a poor job of honoring this, but they do a better job when asked to use words instead of characters.
> For what it's worth, I've been using something similar in my prompts and felt the completions did a poor job of honoring this, but they do a better job when asked to use words instead of characters.
Yes, restricting by characters is hard for GPT-style LLMs because they work in tokens, not characters.
It can understand word boundaries, though. A space is its own token, and there are special tokens for common words (and common word prefixes) with a leading space, e.g. " a".
Imagine asking a person to give a verbal response in 250 characters or less. They could do it, but it would be a lot of work. Even saying less than 50 words is hard.
If you actually have a hard cap, you'll have to give feedback. If it's just that you don't want an essay, it works great to say something like "a few sentences". And as always, examples help a ton.
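You can see the token boundaries yourself with OpenAI's tiktoken library. A quick sketch (exact token ids depend on the encoding):

    import tiktoken  # pip install tiktoken

    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    print(enc.encode(" a"))                 # a single token, space included
    print(enc.encode("complementary_hue"))  # multiple tokens, not 17 characters
    print(len(enc.encode("a few sentences")))  # counts tokens, not words or chars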
A video of these conditions and the resulting colors would be nice.
I wonder if the limited number of variables means one could just ask ChatGPT to generate a one-time lookup table of colors and store it locally. But it's interesting to see that an LLM can be a "color designer".
I realize now that, instead of a lookup table, ChatGPT could probably generate a piece of code that considers those inputs (e.g. temperature, time of day/weekday, cloud cover) and outputs colors.
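Something like this hypothetical generated snippet (all thresholds and hues are illustrative, not from the OP's prompt):

    def pick_hues(hour: int, temp_c: float, cloud_cover: float) -> tuple[int, int]:
        if 5 <= hour < 8:             # waking up: warm reds/oranges
            primary = 20
        elif 9 <= hour < 17:          # work hours: cooler blues/greens
            primary = 200 if temp_c > 20 else 140
        elif hour >= 22 or hour < 5:  # bedtime: hot pinks/reds
            primary = 340
        else:                         # other times: greens/yellows/oranges
            primary = 60
        if cloud_cover > 0.7:         # nudge warmer on gloomy days
            primary = (primary - 15) % 360
        complementary = (primary + 180) % 360
        return primary, complementary

    print(pick_hues(hour=5, temp_c=13, cloud_cover=0.2))  # (20, 200)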
That's fantastic. I might have to hook up my lights.
I also keep complaining that music recommendations fail because they don't take into account enough factors like this - would be great to control music choices this way too
Could you elaborate more on the setup? Like, if I were a technologically competent person but unfamiliar with how to set up a system that keeps GPT live and then feeds its output into Home Assistant.
The flow is pretty simple in Node-RED - render the prompt from Home Assistant and pass it to ChatGPT. The response from ChatGPT is parsed and sent back to Home Assistant in some "Helper" variables known as "input_number" and "input_text".
Once the values are in Home Assistant, it's pretty easy to change the colours of lights in an automation.
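For anyone curious, doing the same from outside Home Assistant over its REST API would look roughly like this (entity ids and token are placeholders; the author uses a native automation instead):

    import requests

    HA = "http://homeassistant.local:8123"                  # your HA instance
    HEADERS = {"Authorization": "Bearer LONG_LIVED_TOKEN"}  # placeholder

    def helper_value(entity_id: str) -> float:
        # Read an input_number helper's current state
        r = requests.get(f"{HA}/api/states/{entity_id}", headers=HEADERS)
        return float(r.json()["state"])

    hue = helper_value("input_number.primary_hue")
    # Set a light to that hue at 80% saturation
    requests.post(f"{HA}/api/services/light/turn_on",
                  headers=HEADERS,
                  json={"entity_id": "light.living_room", "hs_color": [hue, 80]})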
Haha, probably not. The logic for my activities and the complexity of the complementary colours made it a little tough to maintain and extend. Plus there's a nice warm feeling knowing an LLM is busy designing my lighting throughout the day.
I don't get it. Do you not tell it the current primary and complementary hues?
Seems like it gives you the same back. And in the reason it incorrectly says it adjusted the hue.
It hallucinates a bit, yes. But in general it gives me suitable colours for when I need them. I've yet to try this with GPT-4 or some of the other models suggested in the comments here.
1. Every hour a Node-RED flow runs
2. It generates a prompt using a Home Assistant Jinja template
3. The prompt is sent to OpenAI
4. The response gets parsed from JSON and sent to Home Assistant's "input_number" entities (a sketch of this step follows below)
5. Lights in Home Assistant pick up the state change and set the new colours
If you're keen, I can share the Node-RED flow diagram (or JSON).
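Here's roughly what step 4 looks like sketched in Python instead of Node-RED (entity ids and token are placeholders):

    import requests

    HA = "http://homeassistant.local:8123"
    HEADERS = {"Authorization": "Bearer LONG_LIVED_TOKEN"}  # placeholder
    hues = {"primary_hue": 345, "complementary_hue": 55}    # parsed LLM reply

    for key in ("primary_hue", "complementary_hue"):
        # Write each value into its input_number helper
        requests.post(f"{HA}/api/services/input_number/set_value",
                      headers=HEADERS,
                      json={"entity_id": f"input_number.{key}",
                            "value": hues[key]})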
I guess my comment was also in reaction to the OP, which sounded so absurd to me I almost thought it was parody. At root, my response, upon further reflection, is that if all these gadgets cause so much consternation, maybe they are more trouble than they are worth? Less is more, perhaps!
My reaction was also rooted in my own experience of being so crushed down by the day-to-day responsibilities involved in keeping the kiddo on the right path, going to work, keeping house, that I can’t imagine having time to AI automate my mood lights.
Where do you rank the general idea of recreation on this scale? Should we be spending 16 hours a day on improving the world? No time off ever? Should we take amphetamines and maybe get by with 4 hours of sleep instead of 8 so that each day brings 4 more hours of bettering the world to the table?
So I suppose you don't do anything to provide any kind of comfort or enjoyment to you or your family, ever? If you do, how exactly does that differ from someone working on their home automation?
I'm not against the idea of having an LLM be a smart assistant for the home, but I do have a problem with sending any of that to the cloud.
It's one thing to use it as a convenient way to change anything remotely, but tying home automation not only to constant internet connectivity but also to "that one particular cloud thing that might disappear at any moment" (hello Google) just seems like setting yourself up for problems later. At least in this case he's still left with HASS if the cloudy part goes away.
But I wish there was more of a push toward a hybrid model - like having your router or a small ARM box be a server that runs a queue and some, for lack of a better word, "lambda-like" code that handles most of the programmed events (lights, heating etc.), say via WASM and some APIs around it, then just have HASS-like software (both cloud and local) deploy control rules/code onto it.
Big nice UI for control goes down? Doesn't matter, the controller is just running simple code, not the rest of the visualisation and controls. Want to replace it? Configuration as code makes that a breeze; hell, have a cloud backup of the whole setup. Don't want a controller? It's essentially just a queue and some code runners, which cloud providers can host for you. Want pretty graphs in the cloud or some aggregation? Just make your local node filter and send the relevant MQ events there.
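A toy sketch of one of those "lambda-like" local handlers, assuming an MQTT broker on the router (topics and payloads are made up):

    import json
    import paho.mqtt.client as mqtt  # pip install paho-mqtt (1.x API shown)

    def on_message(client, userdata, msg):
        event = json.loads(msg.payload)
        # e.g. motion in the hallway -> lights on, no cloud in the loop
        if msg.topic == "home/hallway/motion" and event.get("detected"):
            client.publish("home/hallway/light/set", json.dumps({"state": "ON"}))

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("router.local", 1883)  # broker on the router/ARM box
    client.subscribe("home/#")
    client.loop_forever()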
"Internet of Things will not happen until LAN of Things arrives", to paraphrase someone.
This already happens in industry, heavy industry especially.
At home, it's held back by the absence of an adequate server. Often a router box or a NAS would be fine to host something like OwnCloud. But to run ML models, different and much more expensive hardware is required, which would sit idle 99% of the time.
LAN of Things - this is a fascinating idea. Can you share where you are paraphrasing this from, or link to more info on the concept? Sounds similar to the self-hosted applications people run on their NAS/routers these days.
The concept is nothing new, it's just not something manufacturers picked up on. Hobbyists are doing it all the time, usually in the form of devices either communicating directly or being bridged into a queue. Technically, every device can then call every other device directly, so for simple things you don't even need programming.
The idea is just to keep that within your own LAN.
The IoT world mostly wants to create walled gardens so you buy other products within that garden.
Connecting devices to the cloud is the easiest way to make things simple for the consumer, so most manufacturers go that way; there's no need for the end user to buy a hub.
And even if you do need one (say the devices use Zigbee and you need a bridge), putting it in the cloud means the user's phone can always access it, regardless of whether they are connected to the home's wifi, without much fuss (as home internet will most likely be behind NAT/a firewall, so it can be hard to access directly).
> $20 for the SMD TPU isn't bad, but it's definitely at the top end of the BOM for custom PCB projects.
Sounds great in theory, but show me a place where you can buy Coral TPUs at anywhere near MSRP. Unless something has changed recently, finding a real live unicorn would be easier.
This is broadly the ethos behind Home Assistant. It runs on a Pi or similar box and it acts as an abstraction layer on top of a bunch of drivers for smart sensors and actuators. It's totally offline, though if you're really against being connected that also means you can't access it remotely (unless you SSH into a local machine). The dashboarding is competent, though there are often little things that are frustrating to modify (but it's open source, so the opportunity is there).
The whole system is a sort of IFTTT and quite powerful. Since every entity is exposed as a generic device with some capabilities, you can easily set up little automation routines or you can interact with the local API.
At this point pretty much all the "big" names are supported - I have a mix of Hue, Aqara, iRobot, IKEA and other devices and they mostly just connect. There's also support for unusual things like Octoprint and router status. And now there's also ESPHome so you can make your own sensor nodes.
The biggest challenge with offline voice control is the latency across speech-to-text, inference and text-to-speech. Running offline is less of a problem and open-source models will eventually get good enough, but the really impressive aspect of Alexa, Siri, etc. is how fast they can respond to you.
A Mac mini running Home Assistant with some kind of llama plugin running one of the 7B models should do well enough and would run entirely locally. It is just a matter of time before someone builds it.
I think one of the issues for a lot of people is the beefy hardware required for running even a 7B model locally. An RPi4 just won't cut it for interactive use, and that's what most people run their Home Assistant setup on.
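For reference, the fully local version is already easy to sketch with llama-cpp-python and a quantized 7B model (the model path is a placeholder; latency on small hardware is the open question):

    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")  # placeholder
    out = llm(
        "Set the hue for my home's lights... Respond as JSON.",  # prompt as above
        max_tokens=128,
        stop=["\n\n"],
    )
    print(out["choices"][0]["text"])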
Super exciting to see the work happening in this area! I can especially appreciate the use of ChatGPT to orchestrate the necessary API calls, rather than relying on some kind of middleware to do it.
I have been working in this area (LLMs for ubiquitous computing, more generally) for my PhD dissertation and have discovered some interesting quality issues when you dig deeper [0]. If you only have lights in your house, for instance, the GPT models will always use them in response to just about any command you give, then post-rationalize the answer. If I say "it's too chilly in here" in a house with only lights, it will turn them on as a way of "warming things up". Kind of like a smart home form of hallucination. I think these sorts of quality issues will be the big hurdle to product integration.
Yeah but I think the idea is that it is a knob that calls to be turned. "It's warm in here" -> "I'll make the light blue so you feel nice and cool". "How fast do sparrows fly?" -> "Making the light brown". Like it might want to do _something_ and tweaking the hue or brightness are all it can do.
Good reason to always include a way out in the prompt: a do-nothing or I-don't-understand answer.
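Concretely, that can be as simple as extending the response format with an explicit no-op, something like:

    Respond in this format:
    {"action": ACTION, "reason": REASON}
    where ACTION is one of "set_lights" or "none".
    If no available device can reasonably satisfy the request, use "none"
    and say why.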
This is the new Godwin's law: the longer a thread about AI grows, the higher the probability of a comparison to Skynet, Matrix, HAL etc popping up.
I would also like to add WALL-E to this memetic set of movies. In WALL-E, AI is an enabler of our own destructiveness: humans are willingly enslaved by AI, which is empowered so that humans can graze away on nihilistic screens.
LeCun's law: every discussion about AI eventually results in someone catastrophizing about an evil AI taking over, despite having no argument for why an evil, omnipotent AI is likely to ever exist.
We trained the AI on samples of writing from the internet, which includes a lot of fiction, which includes a lot of evil AIs. So, I’m surprised it doesn’t start producing evil sounding text as soon as it “finds out” that it is an AI.
It certainly makes logical sense. I think if you have the ability to control the light in the first place via an API, it's probably an LED smart bulb and thus doesn't produce much heat. At least, I'm not aware of any incandescent smart bulbs.
I mean, the laziest way to control a house is to add smart plugs, turning every outlet into an on/off controllable one. This would make every incandescent bulb a smart bulb.
> If I say "it's too chilly in here" in a house with only lights, it will turn them on as a way of "warming things up".
Thanks for the example, that's interesting.
FWIW, this is pretty much what has been described as the "Waluigi effect", a bit extended: in a text you'll find on the internet, if some information is mentioned at the beginning, it WILL be relevant somewhere later in that text. So an auto-completion algorithm will use all the information that has been given in the prompt. Your example puts the model in an even weirder situation, where it has only that information (the lights, that you're cold, and nothing else) and it must generate a response. It would be a fun psychological study to look at, but I'm pretty sure even humans would do the same in that situation (assuming they realize that lights do produce a little bit of heat).
> FWIW, this is pretty much what has been described as the "Waluigi effect", a bit extended
Sorry, I disagree, for a couple of reasons. First, turning the lights on is literally the only thing the bot can do to heat up the house at all, and turning on the lights does heat it up a little bit, so it's the right answer. Second, that's not the Waluigi effect, not even "pretty much" and not even "a bit extended". Both are about things LLMs say, but other than that, no.
The Waluigi effect applied to this scenario might be like: you tell the bot to make the house comfortable, and describe all the ways a comfortable house should be. By doing this you have also implicitly told the bot how to make the most uncomfortable house possible. Its behavior is only one comfortable/uncomfortable flip away from creating a living hell. Say that in the course of its duties the bot is for some reason unable to make the house as comfortable as it would like. It might decide that this is because it's actually trying to make the house uncomfortable instead of comfortable. So now you've got a bot turning your house into some haunted-house Beetlejuice nightmare.
For performant enough models, you can just instruct them not to necessarily use that information in immediate completions.
adding something like
"Write the first page of the first chapter of this novel. Do not introduce the elements of the synopsis too quickly. Weave in the world, characters, and plot naturally. Pace it out properly. That means that several elements of the story may not come into light for several chapters."
after you've written up key elements you want in the story actually makes the models write something that paces ok/normally.
It's something that I've been wondering about with ChatGPT plugins - they've kind of left it up to the user to enable/disable plugins. But there's definitely going to come a point where plugins conflict and the LLM is going to have to choose the most appropriate plugin to use.
I have been very impressed at how good it is at turning random commands into concrete API calls. You are right though, pretty much any command can be interpreted as an instruction to use a plugin.
Thanks! That is part of the challenge as this idea scales imo - once you've increased the number of plugins or "levers" available to the model, you start to increase the likelihood that it will pull some of them indiscriminately.
To your point about turning random commands into API calls: if you give it the raw JSON from a Philips Hue bridge and ask it to manipulate it in response to commands, it can even do oddly specific things like triggering Hue-specific lighting effects [0] without any description in the plugin YAML. I'm assuming some part of the corpus contains info about the Hue API.
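For context, the kind of Hue v1 API call involved looks like this (bridge address and username are placeholders; "colorloop" is one of the built-in effects):

    import requests

    BRIDGE = "http://192.168.1.2"  # Hue bridge IP, placeholder
    USER = "YOUR_HUE_USERNAME"     # placeholder API username

    # Trigger the built-in colorloop effect on light 1 (Hue v1 API)
    requests.put(f"{BRIDGE}/api/{USER}/lights/1/state",
                 json={"on": True, "effect": "colorloop"})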
Making IoT API calls is a solved problem with Home Assistant - plus it works locally.
Where I see this working best is giving ChatGPT some context about the situation in your home and having it work out complex automation logic that can't be implemented through simple rules.
This sounds very exciting, but the more I think about this the more I think the right UI for light control isn't language. I recently bought smart switches for my smart lights and it's fantastic to be back to clicking.
Having to exclaim "I'm going to bed" every time you're going to bed, or "turn on bathroom lights" every time you're going to the bathroom is cartoonish at best, and gets annoying. Switching "turn on bathroom lights" for "I'm going to use the bathroom" doesn't make it much better. The ideal interface IMO is presence sensors or, in their absence, well-placed light switches, given how quickly and subconsciously one can flick them.
I think there's room for augmentation and improvement using LLMs and language. A system that automatically adjusts the color of the lights based on the time of day, or one you can occasionally ask exact or complex queries (e.g. "make the living room red", or "turn the lights off in 50 minutes please"). But for the day-to-day, having to "think" about your lights gets annoying.
Is there something that works with Home Assistant? Siri is just terrible and I have been trying to replace it with anything. The ESP box posted last week looks pretty promising, but I wonder if there's something that could work directly off the RPi I host HA on.
Here is what a large budget buys: massive marketing campaigns. ChatGPT spam is not showing signs of slowing down anytime soon. Are people aware there are other AIs out there they can use? Some free?
What kind of AI stuff would you like to do? There are indeed few with as many integrations, but individually there are open-source variants and specialised products out there.
I have connected my ESP32s to chat (Telegram) and OpenAI. With the correct prompt and an interpreter they can now execute requests in human language: https://medium.com/p/3242af6f2988
Is this reading specific outputs (in text form) from ChatGPT and then forwarding them to an API? Or how does "ChatGPT" actually make the call based on the OpenAPI description?
From the beginning I was ridiculing AGI doomsdayers, but just one month later I am waiting for an article titled "Isn't it a great idea to connect an LLM to the NORAD system?"
Nice - I'm exploring the use of Home Assistant for something similar. It already has all the integrations and APIs for control; it just lacks the ChatGPT plugin.
They are very easy to make - it's just an API. The main difficulty at the moment is getting access to them. There's a long waiting list for developer access.
I'm getting really tired of talking to Alexa now that I'm used to a machine actually understanding me.
Lights are just the start. Things like getting the weather, playing a playlist (or coming up with a new one for me) would all be SO much better with an LLM instead of a dumb bot.
Edit: if you're going to downvote, leave a comment at least. Are there just a lot of Alexa fans in the house?
Coming up with a playlist is a fun one, but I've had much more luck with stuff like "generate a description of a playlist" - I have a little side project that talks to the spotify API and dumps song metadata into sqlite, and plugging GPT in as a SQL query generator was super useful vs writing all those queries by hand.
Of course that requires a lot of up-front work beyond GPT, though not a ton more than is necessary to talk to the API in the first place. But GPT itself has not impressed me with its ability to rank things, like "select the fifty other songs I've liked that are most like this song", even when given a lot of quantitative metadata.
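The shape of the SQL-generator trick, for anyone curious (schema and prompt are illustrative, not the parent's actual project):

    import sqlite3
    import openai  # pre-1.0 API style

    SCHEMA = "CREATE TABLE songs (title TEXT, artist TEXT, tempo REAL, energy REAL, liked INTEGER)"

    def query_from_english(question: str) -> str:
        # Ask the model to translate a request into a single SQLite query
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": f"Schema: {SCHEMA}\nWrite one SQLite query, no prose: {question}"}],
        )
        return resp.choices[0].message.content

    db = sqlite3.connect("spotify.db")  # local metadata dump, placeholder
    sql = query_from_english("50 liked songs with high energy, fastest first")
    for row in db.execute(sql):  # trust cautiously: the model may emit invalid SQL
        print(row)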
Agreed. In order to get Alexa to do some "higher order" stuff you have to be very explicit and program each step into a routine. This routine never changes or grows and barely adapts to the environment. For example, I have a routine called "breakfast" that turns on the TV and the light in the living room.
Alexa doesn't really know anything about breakfast; it just blindly turns on the light and TV. I think it would be handy to have Alexa know that I've been watching a particular show during "breakfast" and turn the TV to the next episode. It would also be super handy for Alexa to notice that it's my usual breakfast time, I've just gotten milk and cereal out of the cabinet, and I'm heading to the TV room. Instead, I have to utter the words "Alexa, breakfast" when I want it to get ready for me to start my day.
It's in its infancy but this does work - https://www.home-assistant.io/blog/2023/05/03/release-20235 I have been able to turn a device on and off by speaking to the app on my phone. Nothing leaves my house. There is a bit of a lag but it is early days.
You simply add the Piper and Whisper addons, then add the integration in the GUI. Then you press the assistant button in the app or your browser and then press the mic button and talk.
We're a long way off from replacing sprinkler logic with AI; you're kind of mashing together building code with home automation. Most homes don't have sprinkler systems unless they're in large buildings that mandate them, and in those cases they're never controlled by individual occupants but by a centralized detection system. Building code lags behind consumer technology quite far, with good reason.
I'm not aware of anyone using ChatGPT in life-or-death situations, though maybe I'm just unaware. These types of situations are usually heavily regulated (it varies by jurisdiction). A home with a sprinkler system would still be mandated to meet fire code when it comes to smoke detectors, fire exits, and other traditional safety measures. Seems like the blather is coming from both ends.