
the nitpicking in this thread is incredible lmao


I must say I find your comment off-putting.


Not sure how "zero knowledge" figures into this; is the app merely E2EE? If so, it's no different from Evernote.

And there is no open source server component, so you can't host this yourself. And say what you will about Evernote, I trust them to stick around way more than another random notes startup thing.

P.S. Not to get too political, but it seems like the founder is expressing some problematic views on his Twitter account (https://twitter.com/thecodrr). Problematic as in: he's this close to being an outspoken supporter of terrorism and radical Islam.


Since this is the second time it's been raised in this post, I looked through all of the author's tweets since October 7th.

This is subjective, but I didn't find anything beyond reasonably mainstream expressions of support for the people of Palestine and anger at Israel's conduct in Gaza. What did you see that was on the verge of unconscionable?


Evernote is not E2EE.


sounds like you're mad because you don't understand simple game theory (and perhaps you're also jealous of his mental capacities and/or status?). Mr Neumann wasn't advocating for "genocide" (which is a completely false characterization, if not a hysterical one) as much as he was trying to save the world from a much worse fate.


...and he got it very very wrong. You seem to be missing that part.

It is a cautionary tale that, when dealing with humans and complex systems, these rational abstractions that assume perfectly spherical cows should be heavily discounted. The world is too complex to fit it into a mathematical model you can solve analytically.


Von Neumann's reasoning in this case is a good example of the limits of simple game theory and of oversimplified modeling. Ironically, it shouldn't be too hard to come up with a credible evolutionary game theory model that illustrates why pre-emptive strikes would have been bad in the long run. I'm not advocating EGT or game theory in general (there are many other problems with applying these "models" to real-world phenomena), and I'm not saying von Neumann was clearly wrong, but I am willing to say with confidence that it's far-fetched and naive to believe he was clearly right. Luckily, most decision makers agreed with that assessment at the time.
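
To make that concrete, here's a toy sketch in Scala (entirely my own, nothing rigorous): in a one-shot prisoner's dilemma the "strike first" logic wins, but once the game is iterated, a retaliatory-but-cooperative strategy does far better overall than always-defect - the standard Axelrod result that evolutionary game theory formalizes.

    // Toy sketch (mine, not rigorous): iterated prisoner's dilemma.
    // "Always defect" (strike first) wins any single encounter, but in
    // repeated play mutual cooperation scores far higher overall.
    object IpdSketch extends App {
      sealed trait Move
      case object Cooperate extends Move
      case object Defect extends Move

      // payoff to the first player (classic PD values)
      def payoff(a: Move, b: Move): Int = (a, b) match {
        case (Cooperate, Cooperate) => 3
        case (Cooperate, Defect)    => 0
        case (Defect, Cooperate)    => 5
        case (Defect, Defect)       => 1
      }

      // a strategy sees the opponent's history, most recent move first
      type Strategy = List[Move] => Move
      val alwaysDefect: Strategy = _ => Defect
      val titForTat: Strategy = hist => hist.headOption.getOrElse(Cooperate)

      def play(s1: Strategy, s2: Strategy, rounds: Int): (Int, Int) = {
        var (h1, h2) = (List.empty[Move], List.empty[Move])
        var (p1, p2) = (0, 0)
        for (_ <- 1 to rounds) {
          val (m1, m2) = (s1(h2), s2(h1))
          p1 += payoff(m1, m2); p2 += payoff(m2, m1)
          h1 ::= m1; h2 ::= m2
        }
        (p1, p2)
      }

      println(play(alwaysDefect, titForTat, 100)) // (104,99): defector "wins" the match
      println(play(titForTat, titForTat, 100))    // (300,300): cooperation wins the ecosystem
    }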


"as much as he was trying to save the world from much worse fate."

So do you think the world would be in better shape today if it had listened to von Neumann and had its nuclear war?

The characterization of starting a nuclear war as genocidal is very accurate, no matter how "noble" the intentions were. Stalinist Soviet Russia was hell on earth, but nuking it would just have spread that hell and made it worse.

Von Neumann was clearly a genius. But when you leave out the dark details, it just becomes cult-like glorification. I don't think there should be a disclaimer everywhere he is mentioned, but in a piece of praise like this article, it should be included.


That's not how counterfactuals work. We played Russian roulette and got very, very lucky. Even the hawkish experts agree on this. We've been on the brink of nuclear annihilation a dozen times since WW2 and each time it was random chance that saved us. It's almost enough to make one believe in anthropic arguments.

The grandparent is misquoting von Neumann and taking his remarks way out of context. Von Neumann was part of a presidential commission from 1945 that came up with policy recommendations for the post-nuclear age. They accurately surmised that Russia would get the bomb, and that this would lead to what we now call a Cold War, with all the downstream ramifications like constant geopolitical instability from proxy wars. They were 100% right on the money. Niels Bohr called it the "complementarity" of the bomb. It was really remarkable how much they figured out from an understanding of nuclear physics, their experience in the war, and some rudimentary game theory.

They also knew about thermonuclear bombs. Von Neumann was instrumental in working out the theory that showed they were possible during the Manhattan Project. A prediction from the commission was that the inevitable outcome of Russia getting the bomb would be a nuclear arms race to develop more powerful bombs, faster delivery vehicles, and hair-trigger firing mechanisms. They predicted nuclear brinkmanship and the removal of safeguards. They knew it would only be a matter of time before real events or accidents led to a nuclear exchange between these two powers. And the longer you waited, the more devastating this would be.

They came up with recommendations: a nuclear non-proliferation treaty with allowances for mutual inspections. A revival of a league of nations to settle disputes. A mutual-defense treaty whose members would be permitted access to Anglo-American nuclear weapons for defensive purposes only. All of these would eventually come to pass, but at the time they were completely rejected by Truman and Churchill (except the UN, although it came to serve a different purpose). The idea of Russia developing a bomb on a short timescale was seen as ludicrous. The idea of a "Cold War" was science fiction.

But by 1950 the Soviet Union had detonated their first atomic bomb just months before. They still had a ways to go to ramp up production and make it a practical, deployable weapon. They had the bomb though, and would no longer be willing (if they ever would have been) to partake in an arms-reduction, mutual-defense, non-proliferation treaty. The Cold War was suddenly real, and ramping up. Bohr and von Neumann were proven right.

The US had not yet developed the thermonuclear bomb, so the bombs they had were like the weaker ones dropped on Hiroshima and Nagasaki. These were powerful weapons, no doubt, but not fundamentally any worse than the kind of devastation that was already normalized with the firebombing of Dresden, Tokyo, and other cities. Just more compactly packaged. So there was a very brief window where the invasion of the Soviet Union could have happened and would have been successful. How many would have died depends on how long it would take to force a surrender. Certainly hundreds of thousands, likely millions.

So now we get to 1950, and what von Neumann is saying is essentially this: "Look, you half-wit political hacks failed to take action when we warned you five f@&$ing years ago that this would happen, while there was still time to have peaceably avoided this mess. Now millions will die in pointless proxy wars fought between client states in the decades to come, culminating in an eventual nuclear exchange that will kill billions, destroy world civilization, and maybe even the entire human race.

"Or, hear me out, we can put a stop to this right now with a preemptive invasion of the Soviet Union. Millions will die, but millions are going to die anyway, and we will possibly save billions. Stalin kills millions of his own people anyway."

He didn't use those words, but that's a summation of what his argument was at the time. Once again, this advice was rebuffed. The politicians didn't want to start a war the public wouldn't understand. Better, they thought, to kick the can down the road. He then countered with the quote in the grandparent post, which is essentially saying "No, kick the can and you get a bigger mess. Settle it now and fewer people have to die." Better to rip that bandaid off.

It's a calculus of death. But that's no different than the logic that went into bombing Japan to prevent an even more devastating invasion of Honshu. It is sickening and abhorrent, but not necessarily wrong, and it is not genocide.


"But that's no different than the logic that went into bombing Japan to prevent an even more devastating invasion of Honshu. It is sickening and abhorrent, but not necessarily wrong, and it is not genocide. "

I see; we have different moral baselines.

Well, in my world it is absolutely wrong to nuke a civilian city.

"so the bombs they had were like the weaker ones dropped on Hiroshima and Nagasaki. These were powerful weapons, no doubt, but not fundamentally any worse than the kind of devastation that was already normalized with the firebombing of Dresden"

And the firebombing of Dresden was very wrong as well, but unlike those conventional bombings, nuclear bombs lead to lasting contamination. The bang of a fusion bomb might be way bigger, but it is the radiation that makes nukes far worse.

"to prevent an even more devastating invasion of Honshu"

And the calculus was never between nuking a civilian city vs. a full-scale invasion. There would have been plenty of other options, like letting the bomb explode in the sky within view of the emperor's palace or, if that failed, dropping it on actual military bases.

Humanitarian ethics was just no longer a factor for those planning it. They wanted the data on the effect of a nuke on a city. And von Neumann seems to have been among those coldly calculating strategists, since he did propose nuclear war in those quotes, which I do not see as taken out of context but as a clear endorsement of a nuclear first strike. And the effect of that is genocidal.


You're overestimating the importance of business people. Viewing products from a business point of view inevitably leads to inferior products, as the focus is purely on making money. It's not about making money, nor about the synergy of business and development. It's all about creating useful shit for other people to use. You need domain expertise, good taste, and the ability to create value by building things. Focus on money and you'll be cutting corners, trying to sell useless shit to people who buy it for others to never use.


I think we're talking about different things. Take three scenarios.

1. Mediocre idea for an app with a lot of funding; a competent team of coders was hired, with inexperienced business-school grads in the lead. This will fail. Those guys will take the money and buy Jeeps, degrade the product with bad ego-driven ideas, etc.

2. Great idea for an app with no funding, founded by coders. These people will work their ass off all weekend for free. Business people can't see the value; will never see the value; if they touched it they'd ruin it, but they won't touch it. The app launches, gets to the front page of HN, a week later it's dead because there's no business model. Here you could make a case that some business people should have been involved.

3. The ideal case. This is not an idea for an app. This is not really even an app. Someone has a glass factory. Or a small chain of hotels. Their employees are constantly doing things on a whiteboard in the back of the office. They want to expand to more locations and scale up, but the system they have just gets more confusing and complicated the bigger they get, it's really hard to train new employees to use this whiteboard and pass these paper slips around the factory. They come and ask you how you can make it better and you build them an app that they pay for out of pocket, which puts the system up on screens all over the facility. One-off deal. Then they grow and they ask for more apps to manage their customers, their inventory, their billing. Multi-one-off-deals, but now you have an ecosystem you maintain for them.

Those are the business people I'm talking about, not the ones who approach you with an elevator pitch where "it's like Uber but for dogs" or something.


“Witness me”


While it's easy to understand, I'm constantly surprised to see this type of anti-intellectualism. The fact that you don't have the required prerequisites to understand a codebase doesn't mean it's bad. Educate yourself on Category Theory and functional programming techniques and learn to leverage these tools to your advantage.

Alternatively you can work in Go where braindead simplicity is the mandated norm.


Thank you for this reply. It demonstrates exactly the mindset of those few FP die-hards that I have seen wreaking havoc at a couple of companies where I worked.

I'm not saying FP is bad; it is actually really powerful in the hands of those who understand when and how to use it. The problem with the attitude you demonstrate in your reply is also not limited to FP: in the era of OOP, there were those who spread the evangelism of design patterns everywhere, regardless of whether it made any sense; before that, in the 90s, there was a group of programmers who liked to generate code until no colleague understood anymore what was happening. And when you tried to tell them that abstractions don't come for free, but at a cost because they make code harder to understand, the answer for the last 25 years has always been the same, although rarely said straight: that it was only hard for -you- to understand, and not for the enlightened master himself.

I have seen more projects fail because of too much unneeded abstraction than from all other causes combined. I have even seen companies go almost bankrupt because of projects engineered by lone wolves, where nobody understood the abstractions anymore except the designer himself, and at some point not even he really did either.

And the problem is that this mindset continues to be cultivated by CS books and conferences. Few people want to read a book that tells them that the secret of being productive as a team depends mostly on the culture in the team and the simplicity of the code, not on the latest hyped framework, language, or paradigm.


> I'm not saying FP is bad; it is actually really powerful in the hands of those who understand when and how to use it. The problem with the attitude you demonstrate in your reply is also not limited to FP: in the era of OOP, there were those who spread the evangelism of design patterns everywhere, regardless of whether it made any sense; before that, in the 90s, there was a group of programmers who liked to generate code until no colleague understood anymore what was happening.

Agreed, and what did we get? Dogmatic decrying of how OOP is completely useless and objectively bad, not too dissimilar from some of the comments on FP and Scala on here.


> dogmatic decrying of how OOP is completely useless and objectively bad

And just who were the ones pushing this? The people peddling FP.

The people in this thread are not pushing anything. They're just sharing their experiences with Scala and how unproductive it is to deal with FP zealots.


I found this in another thread and it sounds like you might enjoy it also:

https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction


> before that, in the 90s, there was a group of programmers who liked to generate code until no colleague understood anymore what was happening.

Ah! Just write a quine and be done with it!

> I have seen more projects fail because of too much unneeded abstraction than from all other causes combined. I have even seen companies go almost bankrupt because of projects engineered by lone wolves, where nobody understood the abstractions anymore except the designer himself, and at some point not even he really did either.

"cake pattern" on one side and scalatest on the other.

As for

> I'm constantly surprised to see this type of anti-intellectualism.

You hardly need any category theory to understand type theory or, more importantly, Scala's type system. That is the knowledge that actually lets you use it effectively. Yeah, the way he puts it makes those who like to understand the theory behind everything look bad.


For most tasks, programming is more an engineering discipline than an intellectual pursuit. In that context, complexity must always be justified.

I know, and have been told, that I have written hard-to-read code. In my case it's usually vectorized numpy code, or a C or Cython extension for really hot paths. But I always have a good reason, usually performance when it matters.

It's usually said that premature optimization is the root of all evil, but it's nothing compared to premature abstraction. Optimization at least is local in nature, while abstraction tends to spread all over a code base, and when the assumptions behind the abstraction no longer hold, people are still forced to keep dancing to music that no longer plays.

"Educate yourself" assumes that your interests are everyone's interests but this is a vast field and not everyone has interest nor time to learn about category theory. Most probably your coworkers are quite intelligent. If they are not interested in your idea it may be that it's not appropriate or maybe not correctly framed, or not mature.


I'm stealing that phrase: "Premature abstraction is the root of all evil". It's hard for me sometimes to justify in a code review why an abstraction is not required (yet) when someone has put some effort into it.


I found this in another thread and it sounds like you might enjoy it also:

https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction


The thing is, I DO understand category theory and FP after going off the deep end in Scala land. It did not make me a better programmer or make writing code any easier or faster. In fact, it makes me spend 50% of my brain power juggling those concepts and how they are encoded into a language that does not, and never will, support them as well as, say, Haskell.

I learnt the basics of FP a long time ago and understood the data-centric view of computation, the benefits of immutability, etc. - those things actually do help me write better code. The advanced FP, not so much. The nightmarish encoding of advanced FP shoe-horned into a language not built to support it, which in turn compiles to a JVM definitely not built to support it, actively hurts. There are so many dirty macro shenanigans and hacks in the guts of the FP-purist libraries to force the language to implement things it just isn't particularly well suited to implement, which makes reading or debugging library code a nightmare. Like, if you want Haskell, just use Haskell FFS and let us code reasonable Scala in peace.

There are so many parts of math that are much more useful in various fields - basic optimization in solving sudoku puzzles[1], linear programming in Z3, gradient descent in ML, calculus & vectors in graphics programming etc.

[1] https://norvig.com/sudoku.html


Every time the "go learn category theory if you want to become a better programmer" trope arises on HN (less often than it used to, but still occasionally), I'm left scratching my head. I actually learned category theory as a math grad student, before changing fields and going in a more applied direction. I've spent decades writing software since then, and I can easily think of about 10 areas of math and CS that are way more useful day to day. Once you've mastered calculus, linear algebra, probability theory, statistics, numerical analysis, optimization, algorithms, complexity theory, combinatorics, and asymptotic analysis - sure, go pick up category theory.


But Category Theory is different from all the other examples you gave, as it describes program composition, i.e. building larger programs from smaller ones. Computer scientists have been searching for the "mathematics of program construction" for a long time, desperately trying to make the process more formal. Category Theory is what many seem to have arrived at. There are even books about it, e.g. "The Algebra of Programming".

Most computer science graduates are well schooled in algorithms, complexity, linear algebra etc. However, the vast majority of their time commercially will not be spent implementing some core algorithm, but instead gluing systems together and consuming/writing APIs. Any mathematics that helps them better accomplish this would be good to add to their toolbox.
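
To illustrate the composition point with a toy Scala sketch (my own, not from the book): plain functions compose with andThen, and effectful functions A => Option[B] compose too once you say how - and that "how" is exactly what the categorical laws pin down (Kleisli composition for the Option monad).

    // Toy sketch (names are mine): composing effectful "programs"
    // A => Option[B] the way andThen composes plain functions.
    def composeOpt[A, B, C](f: A => Option[B], g: B => Option[C]): A => Option[C] =
      a => f(a).flatMap(g)

    // gluing two lookups into one program:
    val userIdByName: Map[String, Int] = Map("ada" -> 1)
    val emailById: Map[Int, String] = Map(1 -> "ada@example.org")
    val emailByName: String => Option[String] =
      composeOpt(userIdByName.get, emailById.get)
    // emailByName("ada") == Some("ada@example.org")
    // emailByName("bob") == None -- the failure case threads through automatically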


The author of "The Science of Functional Programming" book (https://github.com/winitzki/sofp) makes an argument that CT is at best tangential to practicing FP.


I found category theory to be helpful in giving a vocabulary for some very simple and essential things that otherwise get lost. My go-to examples are monoids and semigroups, which are almost mind-bogglingly simple but pop up everywhere.

Category theory gives you the precise language to talk about such stuff and helps you communicate about more complex concepts, and that's always a plus.
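
To show just how simple, here's a hand-rolled Scala sketch (my own names, no Cats needed). A monoid is just "a combine operation plus an identity element", and once you name it, a single fold covers sums, string concatenation, list concatenation, and so on:

    // Minimal sketch: the Monoid typeclass, hand-rolled.
    trait Monoid[A] {
      def empty: A                  // identity element
      def combine(x: A, y: A): A    // associative binary op
    }

    object Monoid {
      implicit val intSum: Monoid[Int] = new Monoid[Int] {
        def empty = 0
        def combine(x: Int, y: Int) = x + y
      }
      implicit val stringConcat: Monoid[String] = new Monoid[String] {
        def empty = ""
        def combine(x: String, y: String) = x + y
      }
    }

    // one fold to rule them all
    def combineAll[A](xs: List[A])(implicit m: Monoid[A]): A =
      xs.foldLeft(m.empty)(m.combine)

    // combineAll(List(1, 2, 3))       == 6
    // combineAll(List("a", "b", "c")) == "abc"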


I hear you. However, the amount of category theory you need to understand 99.99% of FP is really, really minimal: 10-15 pages of well-written text.


Y'know what, I'll bite. I can do 10-15 pages. You have a link?



Thanks. I appreciate the link. I'll take a look after work.


I sympathize with your thinking. There are two "readabilities".

1. How easy it is for an average programmer to understand the code.

2. How understandable the code actually is when the reader is fluent in the prior knowledge it assumes.

If one finds the latter much better than code written with conventional techniques, but people are scared off by the former, it can get pretty frustrating, and "people are stupid" may be the conclusion drawn.

---

But my experience is that those techniques do not yield better code.

And "Educate yourself" made me barf.


So, I’ll probably get shit on for this, but I really cannot wait to leave this industry.

I hate how “best tool for the job” really just means “whatever gives us a large hiring pool of people who’ve only ever learned one programming style, and possibly only one language” or “for legacy reasons this is the only choice”.

Yes I’m familiar with reality, I just hate it.


Yes, it's disappointing but hardly surprising to see the phrasing of management parroted by so many developers. It's their job to worry about hiring, deadlines, etc. It's yours to make sure things don't shit the bed at one in the morning, especially if you're going to be paged about it. There are some languages that make it very easy for grads to deliver features, and others that are far more likely to be correct at compile time. The "best tool" is born from these competing demands; those too meek to engage with this inherently adversarial process don't do themselves or anybody else any favors.


Apparently "best tool for the job" means finding the bottom of the barrel that can still somehow engage with the codebase. This Idiocracy has gone to the point where returning function pointers gets questioned during code review because "this might be too complex compared to an imperative call".


The code base had multiple problems, none of which I would blame on category theory or Cats.

- Engineers had written higher abstractions seemingly just because they could. When I audited how internal libraries were used across our services, calling applications weren't making use of the advanced abstractions. I'm talking about things like using Cats to abstract across AWS S3 error handling. Cool, except that it wasn't needed because we never actually encountered the exotic compositions of failures anticipated by the libraries.

- The abstractions written for our own business logic were worse than abstracting over S3. They were premature. Our business logic had to change frequently because the end user experience was still evolving rapidly. Changes that violated previous assumptions and their corresponding abstractions took longer than they should have and/or led to very awkward code.

- At least at the time, tooling had more problems with the "advanced" code. The IntelliJ IDEA Scala plugin could not yet show how implicits were used. It couldn't find senders sending to an Actor the way it can easily find plain callers of a function. You would need to manually force a "clean" in certain modules before code changes would compile as expected. IDEs would also fail to flag code that couldn't compile, and incorrectly flag code that would compile, at a higher rate compared to plainer Scala.

I'm still glad that I have access to Cats, Akka, and other advanced parts of the Scala ecosystem. They're still used in a few places where their value is greater than their cost. Even in the plain code, I'm still very glad I have pattern matching, immutability-by-default, rich collections, map, flatMap, filter, fold, scan, find, etc. I have no plans to transition our company off Scala internally. If I were starting a greenfield project with myself as the sole developer, I'd probably be using Scala for that too. But I prefer to write a bunch of simple repetitive code first, then develop abstractions after it's clear what the commonalities are.


There are other options besides Go and category theory. I'd say that for now, ML-like languages are enough. They're coming into the mainstream with pattern matching, immutable records, etc. It'll take some time for people to learn this, learn to use it properly, and teach it to others. And maybe some years later a monad will be a pattern as common and well-known as an iterator. But today isn't that day yet, and until then you have to collaborate with your peers. You also have to assume that you may be wrong about some of this stuff and keep an open mind about FP alternatives. For example, I think algebraic effects aren't part of category theory, yet they're an exciting new feature that might have an impact on "industrial" languages one day.


The future is unevenly distributed.

There are plenty of companies where everyone has a very mature understanding of monads and where new joiners are coached to reach that level of understanding.

And there are plenty of places that aren't using source control, CI and unit tests.


If all you do is add numbers together, is it anti-intellectual to question why one needs to grok Principia Mathematica first rather than just doing some arithmetic?

In other words, do you solve problems that actually require category theory or are you just navel gazing?


If you've ever done a map-reduce on a dataset, you've used category theory, whether knowingly or otherwise.

You don't need more than high-school maths to understand the category theory behind the common typeclasses.
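
Concretely, here's a tiny Scala sketch of my own (hypothetical names) of that claim: map-reduce is "map each element into a monoid, then reduce with its combine", and the associativity law is precisely what lets you split the data and reduce the chunks in parallel.

    // Sketch (mine): the essence of map-reduce.
    trait Monoid[A] { def empty: A; def combine(x: A, y: A): A }

    def foldMap[A, B](xs: List[A])(f: A => B)(m: Monoid[B]): B =
      xs.map(f).foldLeft(m.empty)(m.combine)

    // word count as a map-reduce: map every word to 1, reduce with +
    val intSum = new Monoid[Int] { def empty = 0; def combine(x: Int, y: Int) = x + y }
    val n = foldMap(List("to", "be", "or", "not", "to", "be"))(_ => 1)(intSum) // 6
    // combine's associativity is what makes the reduce step parallelizable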


This is development nowadays: hire people fresh out of an eight-week JavaScript boot camp, and everyone has to write code they can understand.

Instead of bringing the juniors up, we drag the more experienced guys down to their level.

The result is really shit basic code everywhere. Or you end up using a language like Go, where there's basically only for, if, and arrays, and writing code in it is miserable and tedious.


This ain't anti-intellectualism. Scala fanatics (more like FP fanatics who use Scala) write overly complicated code just for purity's sake. It's OK, but it creates a maintenance nightmare and a huge hiring problem.


Formal style guidelines and mandatory tests for everything are largely wastes of time and often detrimental to software development.

To elaborate: formal style guides prevent simple things like formatting code in a manner that is easier to read in some cases (“why is this newline here? Why did you column-align this block of expressions?”), and more insidiously force you to write worse code because they constrict you. Style guidelines should be just that - guidelines - and engineers should have the decency not to request stylistic changes during code reviews unless they spot code that is obviously sloppy.

As for tests: insisting on having tests for everything impedes development in the present as well as in the future. More crucially, the compulsive desire to have everything testable makes you write worse code, as you need to abstract away parts that would otherwise be simple function calls, or use patterns that obfuscate the code. Some will claim that this results in better-designed code, but realistically it just results in more complex code, which is almost always worse. You don’t need abstractions everywhere. Developers should focus on making shit work well, not on over-engineering code because they want to feel smart, all the while rationalizing about testable code and whatnot. I digress though, so back to my original point regarding tests: for tests to justify their existence, they have to test something significant and/or test a module that provides a service (read: hidden behind a well-defined API, which you test to ensure it doesn’t break its “contract”).


Yes, this is why subsidiaries in the private sector are useful - when something goes terribly wrong with one, like it did with NSO, it doesn't implicate the whole of Israel/IDF - at least not explicitly.


I’ve personally conducted business with Patrick, and integrity isn’t a term I’d associate with him. The most polite term I can think of would be “shrewd”.


I don't think integrity fits into any of the metrics that you have to report to YC.


At least they ask founders not to be mean [0] and specifically to be nice [1], but they do ask them to be relentless [2] and formidable [3], which may come off as shrewd?

[0] http://paulgraham.com/mean.html

[1] http://paulgraham.com/safe.html

[2] http://paulgraham.com/relres.html

[3] http://paulgraham.com/earnest.html


YC has pretty strong ethical guidelines for founders and people have been removed from YC for violating them, so I'm not sure where you're coming from with that, unless it's just a cheap shot.


It's not YC-specific but systemic, sadly.


Then it's a pity, and an opportunity to improve, or to move aside for an organisation that does (and a stronger reputation for ethics among its graduates).


Ouch.


[flagged]


Please do not post nationalistic or ethnic slurs to HN. It's not what this site is for, and it destroys what it is for. That should be obvious if you're familiar with the site guidelines: https://news.ycombinator.com/newsguidelines.html - would you please review them?


As someone who has had to face casual racism like this a lot due to being Indian, please take these casually racist tropes somewhere else. It's not okay.

