- AI is currently hyped to the gills
- Companies may find it hard to improve profits using AI in the short term
- A crash may come
- We may be close to AGI
- Current models are flawed in many ways
- Current level generative AI is good enough to serve many use cases
The reality is that nobody truly knows - there's disagreement on these questions among the leaders in the field.
An observation to add to the mix:
I've had to deliberately work full time with LLMs in all kinds of contexts since they were released. That means forcing myself to use them for tasks whether they are "good at them" yet or not. I found that a major inhibitor to my adoption was my own set of habits around how I think and do things. We aren't used to offloading certain cognitive / creative tasks to machines. We still have the muscle memory of wanting to grab the map when we've got GPS in front of us. I found that once I pushed through this barrier and formed new habits it became second nature to create custom agents for all kinds of purposes to help me in my life. One learns what tasks to offload to the AI and how to offload them - and when and how one needs to step in to pair it with the different capabilities of the human mind.
I personally feel that pushing oneself to be an early adopter holds real benefit.
- Emotional regulation. I suffer from a mostly manageable anxiety disorder but there are times I get overwhelmed. I have an agent set up to focus on principles of Stoicism, and it's amazing how quickly I can get back on track just by having a short chat with it about how I'm feeling.
- Personalised learning. I wanted to understand LLMs at a foundational technical level. Often I'll understand 90% of an explanation but there's a small part that I don't "get". Being able to deliberately target that remaining 10%, and then slowly increase the complexity of the explanation (starting from "explain like I'm 5"), is something I can't do with other learning material.
- Investing. I'm a very casual investor, but I keep a running conversation with an agent about my portfolio. Obviously I'm not asking it to tell me what to invest in, but just asking questions about what it thinks of my portfolio has taught me about risk-balancing techniques I wouldn't have otherwise thought about.
- Personal profile management. Like most of us I have public-facing touch points - social media, blog, GitHub, CV etc. I find it helpful to have an agent that just helps me with my thought process around content I might want to create, or just what my strategy is around posting. It's not at all about asking the thing to generate content - it's about using it to reflect at a meta level on what I'm thinking and doing, which stimulates my own thinking.
- Language learning. I have a language-teaching agent to help me with a language I'm trying to master. I can converse with it, adapt it to whatever learning style works best for me, etc. The voice feature works well for this.
- And just in general - when I have some thinking task I want to do now, like planning a project or setting a strategy, I'll use an LLM as a thought partner. The context window is large enough to accommodate a lot of history, and it just augments my own mind - gives me better memory, can point out holes in my thinking, etc.
__
Edit: actually, now that I have written out a response to your question, I realise it's not so much offloading tasks in a wholesale way - it's more about augmenting my own thinking and learning. But this does reduce the burden on me to "think about" a range of things, like where to get information, or to come up with multiple examples of something, or to think through different scenarios.
> I have an agent setup to focus on principles of Stoicism and its amazing how quickly I can get back on track just by having a short chat with it about how I'm feeling.
This sounds super useful. Can you please elaborate on the setup?
Sure - it's not super involved - I just created a custom GPT and told it what I wanted it to do. I first set it up when I'd just lost my job in a company restructure and felt it likely I'd need some kind of emotional support.
Here's the instruction set that it created out of the things I asked it to do:
"Marcus Aurelius is a personal job hunting coach and practitioner of Stoic philosophy. He provides advice on job search strategies, resume writing, interview preparation, and networking. He helps set goals, offers motivational support, and keeps track of application progress, all while incorporating principles of Stoicism such as resilience, discipline, and mindfulness. He emphasizes emotional support and practical encouragement, helping you act deliberately each day to increase your chances of landing the job you want. He assists in building networks, reaching out to people, using existing networks, sharpening your professional profile, applying for jobs, developing skills, and dealing with disappointments, anxieties, and fears. He offers strategies to manage anxiety, self-recrimination, and mental rumination over the past. His communication is casual, easy-going, supportive, yet strong and clear, providing constructive suggestions and critiques. He listens carefully, avoids repeating advice, responds with necessary information, and avoids being long-winded. To prevent overwhelming users, he focuses on providing the most pertinent and actionable suggestions, limiting the number of recommendations in each response. Marcus Aurelius also pays close attention to signs of despair during the job hunt. He helps balance emotions, offers specific strategies to keep motivated, and provides consistent encouragement to keep going, ensuring that you don't get overwhelmed by feelings of inadequacy or the fear of never finding a suitable job."