I'm in the same boat as the person you're responding to. I really don't understand how to get anything helpful out of ChatGPT, or anything more than the basics out of Claude.
> I've found that if you treat it more like a colleague, it works wonderfully.
This is what I've been trying to do. I don't use LLM code completion tools. I'll ask how to do something "basicish" with HTML & CSS, and it'll always output something that doesn't work as expected. Question it and I'll get into a loop of the same response code, regardless of how I explain that it isn't correct.
On the other end of the scale, I'll ask about an architectural or design decision. I'll often get a response that is in the realm of what I'd expect. When drilling down and asking specifics, however, the responses really start to fall apart. I inevitably end up in the loop of asking if an alternative is [more performant/best practice/the language idiomatic way] and getting the "Sorry, you're correct" response. The longer I stay in that loop, the more it contradicts itself, and the less cohesive the answers get.
I _wish_ I could get the results from LLMs that so many people seem to. It just doesn't happen for me.
My approach is a lot of writing out ideas and giving them to ChatGPT. ChatGPT sometimes nods along, sometimes offers bad or meaningless suggestions, sometimes offers good suggestions, sometimes points out (what should have been) obvious errors or mistakes. The process of writing stuff out is useful anyway and sometimes getting good feedback on it is even better.
When coding I will often find myself in a kind of reverse pattern from how people seem to be using ChatGPT. I work in a Jupyter notebook in a haphazard way, getting things functional and basically correct. After that I select all, copy, paste, and ask ChatGPT to refactor and refine it into something more maintainable. My janky blocks of code and one-offs become well-documented scripts and functions.
I find a lot of people do the opposite, where they ask ChatGPT to start, then get frustrated when ChatGPT only goes 70% of the way and it's difficult to complete the imperfectly understood assignment - harder than doing it all yourself. With my method, where I start and get things basically working, ChatGPT knows what I'm going for, I get to do the part of coding I enjoy, and I wind up with something more durable, reusable, and shareable.
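To give a flavor of what that before/after looks like, here's an entirely invented example (the CSV file and column names are made up for illustration, and the "after" is just the kind of refactor I typically get back):

```python
import csv

# Before: the haphazard-but-working notebook cell I'd start with.
rows = list(csv.DictReader(open("sales.csv")))
total = 0
for r in rows:
    if r["region"] == "west":
        total += float(r["amount"])
print(total)

# After: the kind of refactor ChatGPT hands back.
def region_total(path: str, region: str) -> float:
    """Sum the 'amount' column for rows whose 'region' matches."""
    with open(path, newline="") as f:
        return sum(
            float(row["amount"])
            for row in csv.DictReader(f)
            if row["region"] == region
        )

print(region_total("sales.csv", "west"))
```

The janky version got me my answer; the refactored version is the one worth keeping around and sharing.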
Finally, ChatGPT is wonderful in areas where you don't know very much at all. One example: I've got this idea in my head for a product I'll likely never build - but it's fun to plan out.
My idea is roughly a smart bidet that can detect metabolites in urine. I got this idea when a urinalysis showed I had high levels of ketones in my urine. When I was reading about what that meant, I discovered it's a marker for diabetic ketoacidosis (a severe problem for ~100k people a year) and it can also be an indicator for colorectal cancer, as well as indicating a "ketosis" state that some people intentionally try to enter for dieting or wellness reasons. (My own ketones were caused by unintentionally being in ketosis, I'm fine, thanks for wondering.)
Right now, you detect ketones in urine with a strip that you pee on, and that works well enough - but it could be better, because who wants to use a test strip all the time? Enter the smart bidet. The bidet gives us an excuse to connect power to our device and bring the sensor along. Bluetooth detects a nearby phone (and therefore the identity of the depositor), a motion sensor detects a stream of urine to trigger a reading, and our sensor detects ketones, which we track over time in the app, ideally alongside additional metabolites that have useful diagnostic purposes.
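If I ever sketched that flow in code, it might look something like this. Purely hypothetical: every class, method, and sensor name below is invented for illustration, since none of this hardware exists.

```python
import time
from dataclasses import dataclass

@dataclass
class Reading:
    user_id: str         # resolved from the nearby phone's Bluetooth ID
    ketone_mmol_l: float
    timestamp: float

def detection_loop(bluetooth, motion_sensor, ketone_sensor, app_backend):
    """Hypothetical main loop: trigger on a urine stream, identify the
    user via their phone, take a reading, and upload it for tracking."""
    while True:
        if motion_sensor.stream_detected():         # urine stream triggers a reading
            user = bluetooth.nearest_known_phone()  # who is using the bidet?
            level = ketone_sensor.measure()         # e.g. acetoacetate concentration
            app_backend.upload(Reading(user, level, time.time()))
        time.sleep(0.5)  # poll at a modest rate; the device has wall power anyway
```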
How to detect ketones? Is it even possible? I wonder to ChatGPT if spectroscopy is the right method of detection here. ChatGPT suggests a retractable electrochemical probe similar to an extant product that can detect a kind of ketone in blood. ChatGPT knows what kind of ketone is most detectable in urine. ChatGPT can link me to scientific instrument companies that make similar(ish) probes, which I could contact to ask if they sell this type of thing, and so on.
Basically, I go from peeing on a test strip and wondering if I could automate this to chatting with ChatGPT - having what was, in my opinion, an interesting conversation with the LLM. We worked through what ketones are, the different kinds, the prevalence of ketones in different bodily fluids, types of spectroscopy that might detect acetoacetate (available in urine), how much that would cost and what the challenges would be, followed by the idea of electrochemical probes, how retracting and extending the probe might prolong its lifespan, how maybe a heating element could be added to dry the probe to preserve it even better, and so on.
Was ChatGPT right about all that? I don't know. If I were really interested I would try to validate what it said, and I suspect I would find it was mostly right and incomplete or off in places. Basically like having a pretty smart and really knowledgeable friend who is not infallible.
Without ChatGPT I would have likely thought "I wonder if I can automate this", maybe googled for some tracking product, then forgot about it. With ChatGPT I quickly got a much better understanding of a system that I glancingly came into conscious contact with.
It's not hard to project out that level of improved insight and guess that it will lead to valuable life contributions. In fact, I would say it did in that one example alone.
The urinalysis (which was combined with a blood test) said something like "ketones +3", and if you google "urine ketones +3" you get explanations that don't apply to me (alcohol, vigorous exercise, intentional dieting) or "diabetic ketoacidosis", which Google warns you is a serious health condition.
In the follow-up with the doctor I asked about the ketones. The doctor said "Oh, you were probably just dehydrated, don't worry about it, you don't have diabetic ketoacidosis" and the conversation moved on and soon concluded. In the moment I was just relieved there was an innocent explanation. But, as I thought about it, shouldn't other results in the blood or urine test have indicated dehydration? I asked ChatGPT (and confirmed on Google), and sure enough there were three other signals that should have been present if I were dehydrated, and they weren't.
"What does this mean?" I wondered to ChatGPT. ChatGPT basically told me it was probably nothing, but if I was worried I could do an at home test - which I didn't even know existed (though I could have found through carefully reading the first google result). So I go to Target and get an at home test kit (bottle of test strips), 24 gatorades, and a couple liters of pedialyte to ensure I'm well hydrated.
I start drinking my usual 64 ounces of water a day, plus lots of Gatorade and Pedialyte, and over a couple of days I remain at high ketones in urine. Definitely not dehydrated. Consulting with ChatGPT, I start telling it everything I'm eating, and it points out that I'm just accidentally on a ketogenic diet. ChatGPT suggests some simple carbs for me, I start eating those, and the ketone content of my urine falls off in roughly the exact timeframe that ChatGPT predicted (i.e. it told me if you eat this meal you should see ketones decline in ~4 hours).
Now, in some sense this didn't really matter. If I had simply listened to my doctor's explanation I would've been fine. Wrong, but fine. It wasn't dehydration, it was just accidentally being in a ketogenic diet. But, I take all this as evidence of how ChatGPT now, as it exists, helped me to understand my test results in a way that real doctors weren't able to - partially because ChatGPT exists in a form where I can just ping it with whatever stray thoughts come to mind and it will answer instantly. I'm sure if I could just text my doctor those same thoughts we would've come to the same conclusion.
I believe the smart bidet was an idea some Japanese researchers developed some years ago. Maybe that one was geared to detecting blood in faeces. Whatever, the approach you describe has a huge number of possibilities for alerting us to health problems without our even having to think about them on a daily basis. A huge advantage. On the other hand, this is a difficult one to implement, bearing in mind the kinetics involved.
I have spent the last couple of days playing with Copilot X Chat, to help me learn Ruby on Rails. I'd have thought that Rails would be something it would be competent with.
My experience has been atrocious. It makes up gems and functions. Rails commands it gives are frequently incorrect. Trying to use it to debug issues results in it responding with the same incorrect answer repeatedly, often removing necessary lines.
I only found out about this benefit of iCloud+ a few days ago, thankfully a few days before my prior solution was due to renew for another 2 years at a vastly more expensive rate.
Certainly easy to set up. My DNS is with CloudFlare, and the setup flow was able to do it all with just a login confirmation from my side of things.
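For anyone curious what gets created, the records look roughly like the snippet below. Treat it as illustrative only: the verification TXT and DKIM values are placeholders, and the authoritative values come from the iCloud custom-domain setup page.

```
; Illustrative only - actual values come from iCloud's setup page.
example.com.                  MX     10 mx01.mail.icloud.com.
example.com.                  MX     10 mx02.mail.icloud.com.
example.com.                  TXT    "apple-domain=XXXXXXXXXXXX"      ; domain verification
example.com.                  TXT    "v=spf1 include:icloud.com ~all" ; SPF
sig1._domainkey.example.com.  CNAME  sig1.dkim.example.com.at.icloudmailin.icloud.com. ; DKIM
```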
Which popups are these? The only one I ever received was when I hit the limit on my iCloud account. Certainly not anywhere near as bad as all the adverts in the Start menu on my Windows work machine.
This is exactly what I have been going through for the last year or two. I even changed jobs, finding a role that was supposed to be better: at a company that would allow my skills to improve, and one I assumed would be better run.
Unfortunately the new company is so full of corporate BS that I'm finding it even harder to get through each day. I genuinely feel like the staff who were hired to 'improve productivity' by implementing Agile company-wide are actually doing everything in their power to slow things down. I've never seen so many unneeded meetings in my calendar, all in the name of 'planning'.
Because the market is tiny. Recruiters can't specialize, and those that do get eaten up by the likes of Datacom or Australian-based providers.
Most recruiters in NZ start from labourer/contracting/HR firms and then move into tech because it's better paid. Whereas in Australia you get people who trained specifically to be a tech recruiter, or migrated to recruitment from tech (usually BA and QA type roles).
I completely agree with this. I recently started a new role that requires me to run 2 separate VDIs. The latency is incredibly frustrating. How people have been working like this for years I do not know. Beyond the latency, the resource allocation is far too low. Things just take so much longer to run, and Microsoft Teams grinds to a halt if anything else is happening.
I've been told that when my managed device arrives I will no longer need to use either VDI, but if that's not the case I am very seriously considering moving on to another role, despite being only ~2 months in.
Unfortunately their store was closed throughout most of 2021 due to COVID. I ordered an Advantage 2 from the sole NZ distributor, who did not honour that deal. It cost 500 USD, only for me to discover it was very uncomfortable for an "ergonomic" keyboard. I don't think my shoulders are particularly far apart, but I still had massive pain due to the ulnar deviation caused by the keywells not sitting at an angle that suited my shoulders.
Another few hundred dollars in mods got me QMK and lower thumb keys, and while some other issues were resolved, I still cannot type on it for more than a minute without intense pain in my left wrist.
I must be the only person on HN who dislikes their Kinesis Advantage 2. I bought one in 2021 after the ZSA Moonlander didn't quite meet my expectations. While the keywell idea is amazing, I found that the two halves of the keyboard weren't angled towards each other enough, and I still had very painful ulnar deviation.
This wasn't helped by it sitting so high off the keyboard tray. I don't understand how someone can use the Advantage 2 without a standing desk. Even with a keyboard tray, I am unable to keep my feet flat on the ground without my thighs and knees smashing into the tray the whole time.
I ended up buying a custom board for it to run QMK, as well as replacement thumb cluster keys to try to lower their height, as they also caused pain. These both helped a bit, but thanks to their store being closed due to COVID, I had to buy from a distributor in NZ that charged the equivalent of 500 USD, before I spent even more on the mods. Unfortunately, after wasting around $1000 NZD, I've gone back to the Moonlander, as I can at least angle its boards more.
While this will solve the positioning of the keys, the height looks like it will still be an issue for me.
>While the keywell idea is amazing, i found that the two halves of the keyboard weren't angled towards each other enough and I still had very painful ulnar deviation.
The big change with this one is that tenting angle, cant, and separation all look completely configurable in this iteration, so if anything this one might help with the issues you've had.