Cursor: Autocomplete is really good. When I compared them, it was without a doubt better than GitHub Copilot's autocomplete. Cmd-K (insert/edit snippet at cursor) is good when you use good old Sonnet 3.5.

Agent mode, honestly, is quite disappointing; it doesn't feel like they put a lot of thought into prompting and wrapping LLM calls. Sometimes it just fails to submit code changes, which is especially bad given they charge you for every request. Also, I think they overcharge for Gemini, and the Gemini integration is especially poor.
My reference for agent mode is Claude Code. It's far from perfect, but it uses sub-tasks and summarization via the smaller Haiku model, which feels far more like a coherent solution than Cursor's. Aider isn't bad either, if you're OK with a more manual process.
Windsurf: I've only used it briefly, but agent mode seems somewhat better thought out; for example, they present possible next steps as buttons. Some reviews say it's even more expensive than Cursor in agent mode.
I've been using IntelliJ IDEA and similar products for almost 10 years, and I'm not impressed.
Java/Kotlin is their main thing, and yet neither Maven nor Gradle builds are stable. If your build fails or there are unresolved dependencies, you restart the IDE in the hope that it works...
The AI coding tool trial failed for me -- the IDE told me it wasn't activated even after I activated it on the billing portal. And the docs implied it might take some time. WTF, does activation go through some batch processing?..
People who were able to get the AI coding tools working said they're way behind Cursor (although improving, apparently).
Counterpoint from me: I've been using JetBrains tools for over 10 years as well, mostly WebStorm and Rider, and it all works well. Sometimes there are bugs, yes, but I had plenty of those in VS Code and Visual Studio too.
Aside from their initial AI plugin rollout fiasco it has been smooth sailing for me.
There's something severely wrong with your setup if you can't get stable Maven or Gradle builds. As for your AI problem... maybe it was really early, right after release? Either way, contact their support.
And if "If your build fails or there are some unresolved dependencies" you check your dependencies and config.
I'm tired of people complaining and not trying to understand how their systems (or an IDE for that matter) work.
Because JetBrains products DO have issues, but rest assured, the things you're complaining about are on the main path of basic features, the ones they take care of the most.
Source: an at-first-reluctant but now happy IntelliJ user, after thinking for a long time that Eclipse/NetBeans would be better. I was wrong.
> There's something severely wrong with your setup if you can't get stable maven or gradle builds
To be fair to OP, I had a similar experience at my previous company. Sometimes, after bumping a dependency or something, even asking IntelliJ to "Reload All Gradle Projects" wouldn't fix things and I needed to restart the IDE. Not saying it was common, but it did happen. This was a Kotlin codebase.
Now, working at a company using Scala, I've had a few cases where IntelliJ failed to resolve a dependency correctly, so it claimed the class I had just created didn't exist (it was in a shared library pulled in by two different dependencies; we pinned the dependency directly and sbt built it correctly, it was only IntelliJ that got confused). Cleaning up the build sometimes worked, sometimes didn't, and even when it worked the problem would come back after a while. Eventually we updated all dependencies in all projects and it now always works, but it was painful for a while.
Look, I understand that IntelliJ is better than Eclipse. Eclipse is just bad. (E.g., if you open an existing workspace with a newer version of Eclipse, it crashes -- a problem they haven't fixed in 20 years!)
But I can tell you IntelliJ has way more issues than, say, the Visual Studio I used 20+ years ago.
I take "preview" to mean the model may be retired on an accelerated timescale and replaced with a "real" model, so it's dangerous to put into prod unless you're paying attention.
Scheduled tasks in ChatGPT are useful for keeping track of these kinds of things. You can have it check daily whether there's a change in status, price, etc. for a particular model (or set of models).
Neither do I, but it's the best solution I've found so far. It beats checking models/prices manually every day to see if anything has changed, and it works well enough in practice.
But yeah, some kind of deterministic way to get alerts would be better.
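A minimal sketch of what that deterministic check could look like, as a daily cron job diffing OpenRouter's public model list (the endpoint URL and file paths are my assumptions; check their API docs):

    #!/usr/bin/env bash
    # Snapshot the model list (IDs and pricing) and alert when it
    # differs from the previous snapshot.
    set -euo pipefail
    new=/tmp/openrouter_models_new.json
    old=/tmp/openrouter_models.json
    curl -fsS https://openrouter.ai/api/v1/models -o "$new"
    if [ -f "$old" ] && ! diff -q "$old" "$new" >/dev/null; then
        echo "OpenRouter model list or pricing changed"  # hook up mail/Slack/etc. here
    fi
    mv "$new" "$old"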
That's actually not true. Cobain used a rather "weird" technique where he would "accidentally" hit the 4th in addition to the 5th. That gives the characteristic Nirvana sound, and these are sometimes called "Cobain chords". Assuming Cobain doesn't hit any other strings, these are sus4 chords: where a power chord is just root and fifth, adding the fourth makes it root-fourth-fifth.
I find it really weird that people claim these are power chords even though they sound really different. In my view they're important to the vibe of "Smells Like Teen Spirit": they bring a dreamy quality and make the song a lot brighter than it would be if it were played with only the two lowest notes.
I'd be a lot more interested in AI which can take existing music as the base. I don't want a new song, I want to hear music I love in a new way. Like img2img but track2track.
Literally the opposite problem in Portugal: you go to a restaurant, the food is good, you want an espresso at the end, and they bring Delta's worst blend.
I like the idea, but it did not quite work out of the box.
There was some issue with sign-in: it seems a PIN requested via the web does not work in the console (so the web suggesting the --pin option is misleading).
I tried the BYO plan since I already have an OpenRouter API key. But it seems the default model pack splits its API use between OpenRouter and OpenAI, and I ended up stuck with "o3-mini does not exist".
And my whole motivation was basically to try Gemini 2.5 Pro, but it seems that requires some trial-and-error configuration. (The gemini-exp pack doesn't quite work right now.)
The difference between the FOSS and BYO plans is not clear: the installation process seems different, but is the benefit of the paid plan that it stores my stuff on a server? I'd really rather it didn't, TBH, so that has negative value for me.
On Gemini 2.5 Pro: the new paid 2.5 Pro preview will be added soon, which will address this. The free OpenRouter 2.5 Pro experimental model is hit-or-miss because it uses OpenRouter's quota with Google, so if it's getting used heavily by other OpenRouter users, it can end up exhausted for everyone.
On the cloud BYO plan, I'd say the main benefits are:
- Truly zero dependency (no need for Docker, docker-compose, or git).
- Easy to access your plans on multiple devices.
- File edits are significantly faster and cheaper, and a bit more reliable, thanks to a custom fast apply model.
- There are some foundations in place for organizations/teams, in case you might want to collaborate on a plan or share plans with others, but that's more of a 'coming soon' for now.
If you use the 'Integrated Models' option (rather than BYO), there are also some useful billing and spend management features.
But if you don't find any of those things valuable, then the FOSS version could be the best choice for you.
When I used the `--pin` argument, I got an error message along the lines of "not found in the table".
I got it working by switching to the OSS model pack and specifying Gemini 2.5 Pro on top. It also works with the Anthropic pack.
But I'm quite disappointed with the UX: there are a lot of configuration options, but robustness is severely lacking.
Oddly, in the default mode out of the box, it does not want to discuss the plan with me; it just jumps to implementation.
And when it's done writing code, it aggressively wants me to decide whether to apply the changes -- there's no option to discuss them, rewind back to planning, etc. Just "APPLY OR REJECT!!!". Even Ctrl-C does not work! Not what I expected from software focused on planning...
> Oddly, in the default mode out of the box, it does not want to discuss the plan with me; it just jumps to implementation.
It should be starting you out in "chat mode". Do you mean that you're prompted to begin implementation at the end of the chat response? You can just choose the 'no' option if that's the case and keep chatting.
Once you're in 'tell mode', you can always switch back to chat mode with the '\chat' command if you don't want anything to be implemented.
> And when it's done writing code, it aggressively wants me to decide whether to apply the changes -- there's no option to discuss them, rewind back to planning, etc. Just "APPLY OR REJECT!!!". Even Ctrl-C does not work! Not what I expected from software focused on planning...
This is just a menu that surfaces the commands you're most likely to need after a set of changes is finished. If you press 'enter', you'll return to the repl prompt, where you can discuss the changes (switch back to chat mode with \chat if you only want to discuss rather than iterate) or use commands like \rewind as needed.
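Roughly, the flow looks like this (prompt and menu text approximated):

    plandex> ...model finishes a set of changes; the apply menu appears...
    # press enter to dismiss the menu and return to the repl
    plandex> \chat      # switch to chat mode to discuss the changes
    plandex> \rewind    # or step back in the plan instead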
1. It started formulating the plan
2. I got an error from the provider (it seems the model pack sometimes randomly resets to the default?!?)
3. After I switched to a different provider, I wanted it to continue planning, so I used the \continue command.
4. But when it got the \continue command, it started writing code without asking anything!
5. And in the end it was still in chat mode. I never switched to tell mode; I just wanted it to keep planning.
I see, it sounds like \continue is the issue: this command is designed to continue with implementation rather than with a chat, so it switches you into 'tell mode'. I'll try to make that clearer, or make it handle chat mode better. I can definitely see how it would be confusing.
The model pack shouldn't be resetting, but a potential gotcha is that model settings are version controlled, so if you rewind to a point in the plan before the model settings were changed, you can undo those changes. Any chance that's what happened? It's a bit of a tradeoff since having those settings version controlled can also be useful in various ways.
The installation process for the FOSS version includes both the CLI (which is also used for the cloud version) and a docker-compose file for the server components. Last time I tried it (v1) it was quite clunky, but yesterday with v2 it was quite a bit easier, with an explicit localhost option when using `plandex login`.
I would get rid of the email validation code for localhost, though. That remains the biggest annoyance when running it locally as a single user. I would also add a "$@" to the docker-compose call in the bash start script so users can start it in detached mode.
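Something like this (script name and layout assumed, not the actual Plandex script):

    #!/usr/bin/env bash
    # Forwarding "$@" passes any extra flags through to docker-compose,
    # so e.g. `./start_local.sh -d` brings the stack up detached.
    docker-compose up "$@"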
Yes, it showed up for me; luckily I had the logs open and remembered that was the solution in v1 (it wasn't documented back then, IIRC). I ran git pull in the same directory I ran v1 in, so maybe there's some sort of leftover config or something?
Yeah, I noticed that (needing a dedicated OpenAI key) as well for the BYO key plan. It's a little odd considering that OpenRouter has access to the OpenAI models.
OpenRouter charges a bit extra on credits, and adds some latency with the extra hop, so I decided to keep the OpenAI calls direct by default.
I hear you though that it's a bit of extra hassle to need two accounts, and you're right that it could just use OpenRouter only. The OpenRouter OpenAI endpoints are included as built-in models in Plandex (and can be used via \set-model or a custom model pack - https://docs.plandex.ai/models/model-settings).
I'm also working on allowing direct model provider access in general so that OpenRouter can be optional.
Maybe a quick onboarding flow to choose preferred models/providers would be helpful when starting out (OpenRouter only, OpenRouter + OpenAI, direct providers only, etc.).
Whatever Claude Code is doing in the client/prompting makes much better use of 3.7 than any other client I use that also runs on 3.7. This is especially true when you bump up against context limits: it can successfully resume after a context reset about 90% of the time. MCP Commander [0] was built almost 100% with Claude Code and pretty light intervention. I immediately felt the difference in friction when using Codex.
I also spent a couple of hours picking apart Codex with the goal of adding Sonnet 3.7 support (almost there). The actual agent loop they're using is very simple. Not to say that's a bad thing, but they're offloading all planning and workflow execution to the agent itself. That's probably the right end state to shoot for long-term, but given the current state of these models, I've had much better success offloading task tracking to some other thing, even if that thing is just a markdown checklist, as in the sketch below. (I wrote about my experience [1] building AI agents last year.)
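For example, the external task tracker can be as simple as a markdown file the agent reads and updates between steps (contents illustrative):

    ## Tasks
    - [x] Add Anthropic client wrapper
    - [ ] Map the tool-call schema to Sonnet 3.7
    - [ ] Handle context-limit resets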
Cursor Agent Tools is a Python-based AI agent that replicates Cursor's coding assistant capabilities, enabling function calling, code generation, and intelligent coding assistance with Claude, OpenAI, and locally hosted Ollama models.