Disagree. With possibly a few exceptions, secure software development is led by the breakers, not the builders. Some of the best of the breakers start out as competent builders, but either way: you can't defend something if you don't understand how it's going to be attacked, and the history of software security over the last 20 years suggests that we suck at predicting what attackers will come up with. Great example: "buffer overflows are easy to fix; just make the stack non-executable".
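To make that example concrete, here's a minimal sketch (my own illustrative C, not anyone's real code) of the canonical bug, and why a non-executable stack alone doesn't kill it: NX stops injected shellcode from running, but the attacker can still overwrite the saved return address to point at code already in the process, e.g. system() in libc ("return-to-libc").

    /* Illustrative only: the classic unbounded-copy stack overflow. */
    #include <string.h>

    void greet(const char *name)
    {
        char buf[16];
        strcpy(buf, name);   /* no bounds check: a long 'name' overruns
                                buf and clobbers the saved return address */
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            greet(argv[1]);  /* attacker-controlled input */
        return 0;
    }

With an executable stack the attacker returns into shellcode placed in buf; with NX they return into existing code instead. Either way, the overflow, not the stack permission, is the bug -- which is exactly the kind of attacker adaptation the builders didn't predict.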
One of the best secure developers in the world is Daniel J. Bernstein, and two things you'd want to know about DJB: (i) he missed LP64 integer overflows, which got flagged by (pure breaker) Georgi Guninski, and (ii) he's a world-class cryptanalyst and breaker in his own right.
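For anyone who hasn't seen that bug class: on LP64 platforms, int is 32 bits while pointers and size_t are 64 bits, so code that tracks buffer offsets in an int is fine on 32-bit systems (where you can't allocate more than 2GB anyway) but corrupts memory once a 64-bit process buffers more than INT_MAX bytes. A rough sketch of the pattern (my own illustration, with hypothetical names like buf_cat, not DJB's actual code):

    #include <limits.h>
    #include <stdio.h>
    #include <string.h>

    /* Length tracked in 'int', as a lot of pre-LP64 code did: */
    struct buf { char *s; int len; };

    void buf_cat(struct buf *b, const char *src, size_t n)
    {
        memcpy(b->s + b->len, src, n);   /* with a negative len, this
                                            writes before the buffer */
        b->len += (int)n;                /* wraps once total input
                                            exceeds INT_MAX bytes */
    }

    int main(void)
    {
        /* Just the arithmetic; no 2GB allocation needed to see it: */
        int len = INT_MAX;               /* ~2GB already buffered */
        len += 1;                        /* signed overflow (UB); on
                                            typical LP64 boxes this
                                            wraps to INT_MIN */
        printf("%d\n", len);             /* prints -2147483648 */
        return 0;
    }

The point being: that code is perfectly correct under the ILP32 assumptions it was written with, which is exactly the kind of thing a builder walks past and a breaker goes hunting for.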
There's a reasonable "builders vs. breakers" argument to be had, I'm sure, but in my experience the people making this "world needs more builders and fewer breakers" point are overwhelmingly people who are annoyed at the prospect of sinking the time into becoming a competitive breaker.
Disagree. You don't need to be a breaker to learn how things get attacked; being a fixer also works. I've broken a few things over the years, but I've learned far more by fixing things in FreeBSD which everybody else has broken.
(Maybe you don't think I'm a good secure developer, though.)
As an example, I deal with the registries (the people that deal with the registrars, such as Verisign, PIR, etc.), and I know what security they have in place to prevent a person from impersonating a registrar and gaining access. I also know exactly how to circumvent that security (which you don't), and obviously the registries don't either (because if they did, they would have steps in place to prevent what I know I could do to circumvent their security).
While this is only one example, the mindset of someone who gets excited by figuring out how to break into something is not necessarily the same as that of someone who designs a system (although they could be the same person).
Here's another example that is maybe a little closer to the parent's point. I once got into a medical conference by setting up a website about the medical specialty the conference covered. They requested that I fax them a request on letterhead (easy to whip up in a graphics program), and I was quickly issued press credentials. I had also registered a domain called "(Specialty) Treatment News" with a simple banner. Doing that revealed flaws in their system for vetting who should be issued press credentials, because the person who set up the system (a) didn't think like a breaker does and (b) didn't realize how simple the skills needed to fake a legitimate-looking letterhead are.
The idea is that you're not going to be able to think of as many attack vectors unless you also spend time breaking things.
Fixers are reacting, sure, but they're probably reacting to far more vulnerabilities than any one breaker could find. And reacting doesn't mean that you can't be leading -- it's entirely possible for someone to say "gee, OpenSSL seems to have lots of security vulnerabilities, maybe we should avoid using OpenSSL" and thereby pre-emptively immunize themselves against a wide range of yet-to-be-discovered breaks.
As for me being a breaker... I'd say that my security-related time is split roughly 90% building, 9% fixing, and 1% breaking.
A good practice is to run your applications past an application security specialist. Before an application goes into production, the appsec person can try to break it every way they know how, and the developer can then fix those issues. With this approach, the developer learns from every bit of code they submit and hopefully won't make the same mistakes again. That's where the role of "security expert" comes in as it relates to the community at HN: HN is developer-oriented, and having a dedicated appsec person fills a role that is so often forgotten.
In my role as security engineer for my company (as opposed to being a developer), it's nice to find possible exploits and pass them to our appsec guy for review. Being the guy who both finds (or develops) the flaws and also exploits, documents, and patches them leads to a blind-spot feedback loop far too often. Being a security-conscious developer doesn't eliminate the need for an application security engineer.