Hacker News | Imanari's comments

My strategy: actively give yourself permission to just not do the thing, it's OK! Enjoy your "time off". This alone sometimes makes me end up doing the thing eventually, kind of out of nowhere.

Could you broadly describe the AI projects you have built?


I literally just had a conversation with my CEO this morning where he told me not to disclose the projects I’ve been working on, so I can only speak about it obliquely.

We identified some problems our customers have, and I’ve come up with interesting ways to use LLMs as part of an automated system to solve some of those problems. It’s not the kind of thing where we just dump some data into the ChatGPT API and get an answer. We’re doing fairly deep integrations that do some interesting/powerful things. It’s been a big deal for our prospective clients and investors.


PINNs (physics-informed neural networks) for simulations


The following workflow has given me a solid starting point multiple times:

1. > Please ask clarifying questions about {thing you want to implement} and write those into planning.md

2. > Please answer those questions with your best guesses / suggestions.

3. Review and correct Claude's answers from step 2.

3a. Repeat steps 1-3 with follow-up questions if needed.

4. > Go ahead and implement the thing.

In step 2 you can, of course, answer the questions yourself instead, but letting Claude answer sometimes gives you surprising answers that broaden your view of the problem a bit.

I am sure you could automate this further with hooks, slash commands, agents, etc., but so far I haven't bothered.
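For example, step 1 could be turned into a reusable slash command. A minimal sketch, assuming Claude Code's custom-command convention where a markdown file such as .claude/commands/plan.md becomes /plan and $ARGUMENTS is replaced with whatever follows the command (the file name and wording here are just illustrative):

    Please ask clarifying questions about $ARGUMENTS
    and write those questions into planning.md.
    Do not implement anything yet.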

Also I have heard great things about Serena-MCP but I haven't tried it myself yet.


Does saying 'Please' give better results than not saying please? To a bot


> This all started for me about three years ago, when I left a Google review for a doctor saying I felt discriminated against. Shortly after, I got slapped with a legal threat demanding €40,000 in damages. I ended up settling and paying €1,000 in legal fees just to avoid the nightmare of going to court.

They can force you to court over a Google review?!


If they can figure out who made the review, they can claim defamation.


Defamation is when you say something you know to be false.

Leaving a review that is honest is absolutely fine.


And when you get sued, it's a judge who has to decide that; for that you go to court, and for that you need a lawyer. Hence why most people just CBA...


Exactly, and at the time I was naive to put my name (only my first name) on Google, as it is my only account. They figured out who I was from their patient records and sued me.


I've seen this several times on Google Maps: the business owner (a gym, a doctor, even an ice cream shop) finds out who the client is and confronts them in the comments, or even in person (seen that once), instead of taking in the feedback.


I've been using it within Claude Code via ccr[0] and it feels very similar to Claude 4.

[0] https://github.com/musistudio/claude-code-router


You can use any model from openrouter with CC via https://github.com/musistudio/claude-code-router


This ranking is just for the parsing, not the RAG portion, correct?


Correct-ish. LlamaCloud and GroundX do everything up to retrieval. Here is an interactive graphic of the major players along the RAG flow: https://claude.ai/public/artifacts/b872435b-1d9c-461e-a29c-b...


PSA: you can use CC with any model via https://github.com/musistudio/claude-code-router

The recent Kimi-K2 supposedly works great.


> The recent Kimi-K2 supposedly works great.

My own experience is that it is below Sonnet and Opus 4.0 in capability, but better than Gemini 2.5 Pro at tool calling. It's really worth trying if you don't want to spend the $100 or $200 per month on Claude Max. I love how succinct the model is.

> you can use CC with any model via

Anthropic should just open-source Claude Code; they're in a position to become the VS Code of CLI coding agents.

Shout out to opencode:

https://github.com/sst/opencode

which supports all the models natively and attempts to do what CC does


I tried Gemini 2.5, and while it is certainly a very strong model, you really notice that it was not trained to be 'agentic' / to show strong initiative with tool calling. Oftentimes it would make a plan, I'd say 'go ahead', and it would just reply with something like 'I made a todo list, we are ready to implement' or similar, lol. You really had to push it to action, and the whole CC experience fell apart a bit.


I agree, Claude models are the most agentic-oriented of the ones I've tried.


Where do you host Kimi-K2?


You can use it via openrouter.ai
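Since OpenRouter exposes an OpenAI-compatible chat completions endpoint, a minimal sketch looks roughly like this (the exact Kimi-K2 model slug is an assumption; check openrouter.ai/models):

    curl https://openrouter.ai/api/v1/chat/completions \
      -H "Authorization: Bearer $OPENROUTER_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "moonshotai/kimi-k2",
        "messages": [{"role": "user", "content": "Write a binary search in Python."}]
      }'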


I'd just use sst/opencode if using other models (I use it for Claude through the Claude Pro subscription too).


A related note, if you're unfamiliar with how CC works (because you've never been able to consider it at its price, like me): the CC client itself is freely available on npm.
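A minimal sketch of installing it, assuming the standard npm package name (you still need an Anthropic account or API key to actually use it):

    npm install -g @anthropic-ai/claude-code
    claude   # starts the interactive client in the current repo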


GPT-4.1 works surprisingly well, although it is not as proactive as Sonnet.


thanks!


CC is more autonomous, which can be a double-edged sword. In big codebases you usually don't want to make large changes and edit multiple files, and even if you do, letting the LLM decide which files to edit increases the chance of errors. I like Aider better as well. It's a precision tool, and with /run (running a shell command and feeding its output back into the chat) it is pretty flexible for debugging.


How are you using more concurrent sessions?


For each file in files: claude prompt with file

You can generally do map-reduce; you can also have separate git worktrees and have it work on all your tickets at the same time (sketched below).
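A minimal sketch of both patterns, assuming the claude CLI's non-interactive -p/--print mode; the file glob, prompt, and branch name are just placeholders:

    # map: one headless Claude run per file
    for f in src/*.py; do
      cat "$f" | claude -p "Review this file and list potential bugs" >> review.md
    done

    # parallel sessions: one git worktree (and one Claude session) per ticket
    git worktree add ../myrepo-ticket-123 -b ticket-123
    cd ../myrepo-ticket-123 && claude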

