Hey HN! I'm one of the co-founders of Weave, and I wanted to jump in here to share a bit more.
Building this has been a wild ride. The challenge of measuring engineering output in a way that’s fair and useful is something we’ve thought deeply about—especially because so many of the existing metrics feel fundamentally broken.
The 0.94 correlation comes from validation with several teams (happy to dive into the details if anyone's curious). We're also really mindful that even the best metrics only tell part of the story, which is why our next step is building a broader set of signals and actionable insights on top of this one.
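To give a concrete (and deliberately simplified) picture of what that validation looks like: score a set of engineers with the metric, independently collect output assessments from the people who work with them, and check how well the two line up. A toy version with made-up numbers (the real study involves far more data points than this, but it's the shape of the comparison):

    from statistics import correlation  # Pearson's r; stdlib in Python 3.10+

    # Hypothetical data: metric scores vs. independent output assessments
    # (e.g., averaged peer/manager ratings) for the same five engineers.
    metric_scores = [3.1, 4.8, 2.2, 5.9, 3.7]
    team_ratings = [3.0, 5.0, 2.5, 6.0, 3.5]

    r = correlation(metric_scores, team_ratings)
    print(f"Pearson r = {r:.2f}")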
Would love to hear your thoughts, feedback, or even skepticism—it’s all helpful as we keep refining the product.
Skeptic here. How can you validate the difference in effort between a startup, where growth happens in explosive bursts with many rewrites in between, and a refined enterprise codebase with incremental changes? Is it productive if I have tried many changes in branches and none of them made it to prod?
Startups will naturally have higher output than enterprises for this reason - the benchmarks we show people will account for that.
> Is it productive if I have tried many changes in branches and none of them made it to prod?
Our metric measures displacement, not distance - under the assumption that the end state is the part that matters most. It will notice if the resulting change has a higher cognitive load and evaluate it accordingly - but if there is no resulting change, then ultimately there's no output to measure.
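To make the distinction concrete with a toy example (this is just a sketch in terms of raw line churn, not our actual scoring): distance sums the diff across every intermediate commit, while displacement only diffs the end state against the start.

    import subprocess

    def churn(repo, a, b):
        # Lines added + deleted between two commits, via `git diff --shortstat`.
        out = subprocess.run(
            ["git", "-C", repo, "diff", "--shortstat", f"{a}..{b}"],
            capture_output=True, text=True, check=True,
        ).stdout
        return sum(int(part.split()[0]) for part in out.split(",")
                   if "insertion" in part or "deletion" in part)

    def displacement_and_distance(repo, commits):
        # `commits`: SHAs, oldest first. Distance accumulates churn over
        # every step; displacement compares only the last state to the first.
        distance = sum(churn(repo, a, b) for a, b in zip(commits, commits[1:]))
        displacement = churn(repo, commits[0], commits[-1])
        return displacement, distance

Ten commits that rewrite the same function back and forth rack up a lot of distance but very little displacement, which is exactly the branch-churn case you're describing.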
I'd like to add that you need way more information on the landing page before I'm going to do much more than let you have my email address (if that). Right now it's a black box that takes in data(?) and spits out... something?
I just want to inform you that the pricing section is effed up. It talks about FramerBite pricing - which I guess is the thing you used to throw this landing page together. That seems very low effort and I would estimate the output metric of that to be 1.03 with a correlation of 0.96.