Nathan Marz was the original creator of what became Apache Storm [1], which powered Twitter for some time. Skepticism is healthy, perhaps even warranted here, but I'm not betting against him just yet.
He is also the creator of Cascalog (a Hadoop query DSL written in Clojure) and the Lambda architecture pattern.
Not "lambda" as the term is now popularised by AWS Lambda, but an architecture for stream processing in which batch views produced by expensive, slow batch jobs are combined with speed views from stream processors to form the final live result.
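The merge step can be sketched in a few lines. This is only an illustration of the query-time combination the Lambda architecture describes; the dictionaries and function names here are hypothetical, not from any of Marz's actual libraries:

```python
# Batch view: precomputed by a slow, periodic batch job over the full dataset.
batch_view = {"page_a": 1000, "page_b": 250}

# Speed view: incremental counts from a stream processor, covering only
# events that arrived after the last batch run finished.
speed_view = {"page_a": 7, "page_c": 3}

def query(key):
    """Serving layer: merge both views into the live result."""
    return batch_view.get(key, 0) + speed_view.get(key, 0)

print(query("page_a"))  # 1000 from batch + 7 from the stream = 1007
```

When the next batch job completes, its output absorbs the events the speed view was covering, and the speed view is reset; the batch layer's recomputation from raw data is what gives the pattern its fault tolerance.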
100x increase in productivity is a silly, hyperbolic claim no matter who makes it. I'd even be skeptical of a 2x claim, because in 25 years I have yet to see any of these productivity plays actually pan out. What I have seen are small incremental improvements here and there, but you can't point to anything in the recent past that has improved productivity 10x or 100x (unless your old process was just total crap).
At best I would expect a small niche collection of very specific tasks to be improved, but definitely not applicable to general productivity.
Well, that is not how personal opinions should be formed. Maybe look up first-principles thinking if you have not already.
I do not know whether their 100x claim will come true, but saying something is impossible merely because it has not been done in the past is clearly wrong.
“Powered Twitter” is an overstatement. Storm was indeed used at Twitter for select streaming use cases, but it was a bit of a mess and ended up being rewritten from the ground up for 10x improvements in latency and throughput [1]. Marz was at the company for < 2 years. Lately, Twitter has been moving data processing use cases to GCP [2].
Storm is also not very well regarded in the stream processing community due to its restrictive model, poor guarantees, and abysmal performance [3].
I have nothing against Marz, but I do think skepticism is warranted until we see what they’ve built.
[1] https://en.wikipedia.org/wiki/Apache_Storm