Rewire the power button so it's an easy-to-push switch and put it behind a target. Then your remote can be a Nerf gun.
Edit - the other thing that came to mind was to look into HDMI-CEC and see if there's a setup where turning on your TV will turn on the PC automatically (rough Wake-on-LAN sketch below).
Edit 2 - if you go with an app, you could use something like Tasker and then have different triggers, like your phone tapping an NFC sticker (so you don't have to unlock and find an app).
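For the CEC idea, one approach is to have something that's always on (a Raspberry Pi, say) react to the TV waking up and fire a Wake-on-LAN packet at the PC. I'm not wiring up the CEC-event side here since that depends on your hardware (libcec has Python bindings), but the WoL half is small enough to sketch in Python. The MAC address is a placeholder, and the PC needs WoL enabled in its BIOS/NIC settings:

    import socket

    def send_magic_packet(mac: str, broadcast: str = "255.255.255.255",
                          port: int = 9) -> None:
        """Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
        the target MAC repeated 16 times, broadcast over UDP."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("expected a 6-byte MAC address")
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    # Placeholder MAC - replace with your PC's network card address.
    send_magic_packet("aa:bb:cc:dd:ee:ff")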
It depends why you're tracking things, and what level of "everything" you care about.
Starting with pretty much everything can be a good way to get a sense of what's in which foods. How much does an onion typically weigh? What's that actually adding? What's the difference between lean and fattier meat? How much oil are you really adding?
After that it's easier to start dropping things - if I'm trying to lose weight I simply do not care precisely how much celery I've added to the sofrito. I do care about the amount of butter, oil, rice, bread and pasta though.
I'm not concerned about getting fat adding paprika, so I'm not weighing spices. Even if I'm trying to track macros that's just not going to be a considerable contributor to anything.
> - different cooking times in one recipe: onions going first, tomato sauce in the middle and parsley at the end (but it still cooks a bit with residual heat)
Prep/measure things first.
Lastly, three things that smooth this out for me:
1. Meal prep on a different day. I'm not in as much of a rush as I am at night, and it's proportionally less time spent measuring when it's for a larger number of meals/sauces/components.
2. Having measuring spoons and fast scales nearby.
3. Measuring before-and-after amounts rather than exactly what to add. If I need to add butter to a sauce until it's the right consistency, or flour to a dough, or whatever, then weighing as I go is a nightmare. Instead, just weigh it before and after and the difference is what you used. This tip works pretty well for oil too.
You can give yourself an ability akin to time travel by writing things down first.
If I write down the calories afterwards, I sometimes get the "oh, I shouldn't have done that" feeling. I'd like a little time-travel button that takes me back to before I did it and lets me adjust my behaviour and run through the situation again. If I write it down first, I get the "oh, that's not worth it" feeling up front and can decide to do something else.
This made a big difference for me, both lowering what I was eating and making me happier about the choices I made.
My baseline when advising people is that if anyone you pay needs to read the output, or you are directly replacing any kind of work, then even frontier LLM inference costs are irrelevant. Of course you need to work out if that's truly the case, but people worry about the cost in places where it's just irrelevant. If it's $2 per case once you get to an agent, each case that's avoided could pay for around a million words read/generated. That's expensive compared to most API calls but irrelevant when counting human costs.
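Back-of-the-envelope on that million-words figure (the per-token price is an assumption for illustration, not any vendor's actual rate):

    # Rough check of "one avoided $2 agent run pays for ~a million words".
    # Assumed price: $2 per million tokens - illustrative only; frontier
    # models charge more or less depending on model and input vs output.
    price_per_million_tokens = 2.00
    tokens_per_word = 4 / 3        # common rule of thumb (~0.75 words/token)

    budget = 2.00                  # dollars per avoided case
    tokens = budget / price_per_million_tokens * 1_000_000
    words = tokens / tokens_per_word
    print(f"{words:,.0f} words")   # -> 750,000: roughly a million words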
Prompt engineering is less and less of an issue the simpler the job is and the more powerful the model is. You also don't need someone with deep NLP knowledge to measure and understand the output.
Plenty of simple jobs used to require people with deeper AI knowledge; now, for many tasks in businesses, you can skip over a lot of that and use an LLM.
Simple things were not always easy. Many of them are, now.
But McDonald's absolutely can tell you an objective measure of what they charge you based on what you're getting. They charge you x per burger and y per fries and ...
The examples mentioned CPU and RAM, but that's not what they say everything should be priced by - just some objective measure.
Snowflake charge by time, storage and size of machine - though they never tell you what the machine actually is underneath. I don't know what their "large" is.
Maybe it's by concurrent users, maybe hours of support, maybe API calls.
I think the key thing was "we'd charge you X because you'd use Y" rather than "we'd charge you X because you look like you might pay it"
https://github.com/IDEA-Research/Grounded-Segment-Anything
You can use that to take images and generate annotated segmented images/masks that you can then train a YOLO model on. I've done this for prototypes before and it's a very quick way of getting started as you can hand off the really annoying annotation work to a machine.
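For a sense of the pipeline: text prompts -> boxes -> masks -> YOLO label files. The repo itself pairs GroundingDINO with SAM; the sketch below swaps in OWL-ViT for the text-prompted boxes purely so it's self-contained on Hugging Face transformers - the model names, prompt list and confidence cutoff are placeholder assumptions, not the repo's actual defaults:

    # Sketch of text-prompted auto-annotation for YOLO training data.
    import torch
    from PIL import Image
    from transformers import SamModel, SamProcessor, pipeline

    detector = pipeline("zero-shot-object-detection",
                        model="google/owlvit-base-patch32")
    sam = SamModel.from_pretrained("facebook/sam-vit-base")
    sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-base")

    labels = ["a cat", "a dog"]          # your classes, as text prompts
    image = Image.open("example.jpg").convert("RGB")
    W, H = image.size

    yolo_lines = []
    for det in detector(image, candidate_labels=labels):
        if det["score"] < 0.3:           # arbitrary confidence cutoff
            continue
        b = det["box"]
        x0, y0, x1, y1 = b["xmin"], b["ymin"], b["xmax"], b["ymax"]

        # Prompt SAM with the detected box to get a segmentation mask.
        inputs = sam_processor(image, input_boxes=[[[x0, y0, x1, y1]]],
                               return_tensors="pt")
        with torch.no_grad():
            outputs = sam(**inputs)
        masks = sam_processor.image_processor.post_process_masks(
            outputs.pred_masks, inputs["original_sizes"],
            inputs["reshaped_input_sizes"])
        # masks[0] now holds boolean mask(s) for this box - keep them if
        # you're training a segmentation model rather than plain boxes.

        # YOLO detection label: class x_center y_center w h, normalised.
        cls = labels.index(det["label"])
        yolo_lines.append(
            f"{cls} {(x0 + x1) / 2 / W:.6f} {(y0 + y1) / 2 / H:.6f} "
            f"{(x1 - x0) / W:.6f} {(y1 - y0) / H:.6f}")

    with open("example.txt", "w") as f:
        f.write("\n".join(yolo_lines))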