
Here is my question to those who understand this "paper":

How does the discovery described in this paper help engineer something the world has never seen before?

As an engineer, I'm always looking for some new thing to make. What does this paper make possible, or at least more feasible, than it was before?


Nothing in this paper is actually new; it's a review. In general, understanding the various uncertainty principles is pretty foundational to engineering quantum things, for example transistors. They're also an essential part of how we understand electromagnetic waves, from radio through WiFi and X-rays.

In terms of direct engineering implications I think there are essentially none, but this is in the background of a lot of important stuff.


The way I interpreted it, they're claiming that their mathematical approach to relating the wave uncertainty in FFTs to the uncertainty formula in quantum mechanics is a novel one. I don't think there are any actual new discoveries, however, because there's an infinite number of ways to show that all of mathematics is internally consistent. Still, I have great respect for all their math, if it's correct, and it may be useful to someone, just as when Einstein "found" the Lorentz transformations and Minkowski space, which had been worked out before him and were ready for him to recognize the pattern that fit into his own tinkerings, which we now call relativity.
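For reference (these are standard textbook statements, not something specific to this paper): a signal and its Fourier transform obey

    \sigma_x \, \sigma_k \;\ge\; \tfrac{1}{2},

and substituting the de Broglie relation p = \hbar k turns this directly into Heisenberg's

    \sigma_x \, \sigma_p \;\ge\; \tfrac{\hbar}{2}.

The FFT/quantum correspondence is essentially that one substitution.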


"Theoretical computer science" is garbage.

Of all the scientific fields, computer science is the one most closely related to fields where theory is most worthless. If you have a computer science theory, then write some damn code to prove it. Then share your code. Others can then run it and replicate those results. At the "theory" phase, the idea is worthless.


You are very wrong.

Many of the key results of theoretical CS are impossibility results. There is no code to be written when you show that something is not possible.

- Halting problem: impossibility of a TM determining whether other TMs halt.
- Crypto: impossibility for a Turing machine to break a cryptosystem in polytime.
- Sorting lower bounds: impossibility of sorting objects, given only a less_than operator on them, in less than O(n log n) time.

and so on. There is no code to be written for these, because they are mathematical theorems.
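To give a flavor of why no code is needed: the sorting lower bound, for example, is a short counting argument. A comparison sort is a binary decision tree with at least n! leaves (one per possible permutation of the input), so its depth h satisfies

    2^h \;\ge\; n! \quad\Longrightarrow\quad h \;\ge\; \log_2 n! \;=\; \Theta(n \log n),

where the last step follows from Stirling's approximation (or simply from \log_2 n! \ge (n/2)\log_2(n/2)).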

> computer science is about computers as much as astronomy is about telescopes ~Dijkstra.


Wow, just... wow.

Theoretical computer science is literally more solid and real than any code you could write. It's literally the mathematical foundation of how your favorite language works. It is all precise mathematical proofs, not just 'thoughts'.


What are mathematical proofs but mere 'thoughts'?

(I'm only being half rhetorical. I've been thinking about this deeply lately.)


The relationship between thoughts and proofs is analogous to the relationship between noise and music.

It’s a subset kind of relationship.

There is also the social aspect: people have to agree on it. Noise becomes music, and thoughts become proofs, only if several people agree.



You are either trolling or are absolutely clueless.

However in the event that you really want to learn this via programming see this book - https://news.ycombinator.com/item?id=39721301


In my opinion, the solution to all of this is to get rid of the whitepaper system altogether.

From now on, all scientists who want to publish science publish their work in the form of a video. They film themselves doing the science and then put that video on YouTube, or maybe some science-specific version of YouTube. Other people can then watch the video and determine for themselves whether the science is valid or not.

It's much harder to fake science if there is a video involved. It's too easy to fake science if all you have to do to "prove" it is write some words.


It takes on the order of 6 months to do the work described in a single paper. Even if we imagine someone watching the videos at double speed, it would take 3 months per paper. And that's for an expert who can understand the material.

Folks don't have the time to read the literature thoroughly as it is. That's one of the reasons the junk gets published. So this is not practical.

Plus, imagine reporting to your funding agency, Dean, or boss, that all you accomplished this year was to watch a few videos. No new work of your own. I guess you would make a video of you watching a video. And then someone would watch a video of you watching a video ...


My guess is that more than half of the world's scientific resources are spent on projects in bad faith, so I'm actually ok with legitimate scientists spending 100% more to justify their papers, if the extra work can be covered by the surplus resources saved by eliminating frauds. However, there are many reasons the proposed method is impractical.


In most theoretical/abstract pursuits, from math to comparative literature, "doing the science" means "writing the text of the paper" - and nothing else; shall I film myself doing that?

On second thought, maybe I could make a video of how I manage all of those dirty LaTeX tricks to squeeze my paper into fewer pages; at least there's some tangible payoff there...


> "doing the science" means "writing the text of the paper"

The point is to show your work. You film yourself setting up the experiment to prove that you actually did the experiment. You also film yourself collecting the results, proving that you actually got the results you claim.

The only kind of science that consists of "just writing the paper" is math. Math will be exempt from this requirement. Fake math is not an epidemic.


Economics <cough>


There is something about this article that kind of annoys me. It's my understanding that back in the year 1006, the entire world was not using the same calendar. Each of these observations would not have been recorded as simply "May 1, 1006", they would have been in each observer's respective calendar... Yet this article just states them as being recorded on the same day using the Julian/Gregorian calendar format that we use today. I would have liked the article to go into more detail about how they were able to sync up the dates across the various calendar systems.


This is Wikipedia. If you possess the knowledge about local calendars of the time, go on and improve the article.

For astronomical research, it's easiest when events are dated using one calendar to correlate the observations across the whole planet. No matter which calendar specifically, it just must be uniform. For cultural research, it's more important to use a local calendar and e.g. see how the supernova was related to other culturally significant events.

Also, an article written in English is bound to use the Gregorian or Julian calendar which is familiar to the readers. An article written in Arabic, or Hebrew, or Tamil, or Malay may use the respective different calendars instead, as familiar to the readers.


> An article written in Arabic, or Hebrew, or Tamil, or Malay may use the respective different calendars instead, as familiar to the readers.

Extrapolating from the only example I know, I wouldn't bet on it.

Before moving to Israel I knew that Rosh HaShanah (the Hebrew New Year) is a public holiday and the Gregorian New Year isn't, so I expected the Hebrew calendar to be somewhat visible in everyday life or at least in official documents. Turns out, with the exception of holidays — to some surprise, including the decidedly secular Independence Day — it really isn't; everyone uses the Gregorian calendar.


This is somewhat expected when we talk about modern Israel. Maybe it would be less so when describing the events of 1006 AD, if described by contemporaneous Jewish sources.

Again, this is a difference between the astronomy and history points of view. In the natural sciences, one would expect the now-universal units that originated in Western science: the Julian calendar, SI units, times in UTC, etc. In historical and otherwise localized studies, I would expect a local / period-salient calendar, local units as reflected in the period's documents, etc. Converting these into exact modern dates and units is sometimes hard, and subject to debate among historians.


The article on SN 1054 (the supernova that formed the Crab Nebula) goes into a lot more detail about how the historical accounts were correlated.

https://en.wikipedia.org/wiki/SN_1054


In 1006, the entire Christian world would have been using the Julian calendar (the Gregorian calendar wasn't introduced until 1582), although there is some variance in when different countries recognized the new year in the Julian calendar. That said, May is one of those months where everyone agreed on the year.

Outside of the Christian world, the correlations between different calendars and the Julian calendar are quite well known, because by the time Europeans contacted people following those calendars, they could ask "what day is today" and get the Gregorian-Julian-local calendar correspondence. If you've got a stable year count (not based on reigning kings), it's easy to work out older dates. If you have regnal-year numbering, and you have enough written evidence that you can decisively determine how long each king reigned, then you can also carry that over to a complete calendrical determination.

In cases where the calendar is no longer used, you can match up calendars by looking for records corresponding to known (largely astronomical) phenomena (eclipses are particularly helpful) and get correlates that way. This is how we match the Mesoamerican Long Count to the Julian/Gregorian calendar.
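Once a correlation is fixed, the bookkeeping itself is mechanical. A minimal sketch in Python (the standard Julian-calendar-to-Julian-Day-Number formula, not taken from the article) shows how observations from different calendars end up on a single uniform day count:

    # Convert a Julian-calendar date to a Julian Day Number, the uniform
    # day count astronomers use to line up observations across calendars.
    def julian_calendar_to_jdn(year, month, day):
        a = (14 - month) // 12
        y = year + 4800 - a
        m = month + 12 * a - 3
        return day + (153 * m + 2) // 5 + 365 * y + y // 4 - 32083

    print(julian_calendar_to_jdn(1006, 5, 1))   # 2088620, the date quoted for SN 1006
    print(julian_calendar_to_jdn(1582, 10, 4))  # 2299160, the last Julian day before the Gregorian switch

A record in any other calendar that can be tied to the same day number is then automatically tied to "May 1, 1006".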


Evidently the various calendar systems have been ‘harmonised’ in such a manner that (I presume) according to our calendar it would’ve been on the 1st May 1006, which I take literally to mean something along the lines of “May 1st 2023 was the 1017th annual anniversary of the event”.


Hmm, but this is a very common problem, particularly in history and archaeology, and one that requires highly specialized expertise. Therefore a modern unified calendar is almost always used in new publications, and the problem of dealing with historical dates as written is left to specialists. I don't think every article about pre-modern events, or events outside Gregorian-based calendar usage, should mention that.


> It's my understanding that back in the year 1006, the entire world was not using the same calendar. Each of these observations would not have been recorded as simply "May 1, 1006"

That is correct, but how would that help anyone in general (for any historical reporting)? We give dates on today's agreed scale, just as we usually give other measurements like masses or lengths in our units, unless they were explicitly recorded in other, earlier units.

Even today in the news you will usually hear "an earthquake happened tonight at 2am in a faraway land" (with the local time only sometimes added, and then explicitly marked as such).

Especially for this event I wonder: how many different calendars would you want to have mentioned to be satisfied? :)


> That is correct, but what would that help anyone in general (for any historic reporting)?

The question is rather what the error bars on these observations are. When someone says "my car broke down last week", everyone understands that that is not a precise timestamp. On the other hand, if you hear "according to the telemetry, the crankshaft seized at Unix timestamp 1696606533", you know that they are talking about a very precise moment in time.

The problem comes when you take the first kind of description and convert it to a Unix timestamp: you imply precision where there was none originally.

So when the Wikipedia entry says "According to Songshi, the official history of the Song Dynasty (sections 56 and 461), the star seen on May 1, 1006, appeared to the south of constellation Di, between Lupus and Centaurus", do they mean that the Songshi recorded the date according to its local convention, which can be converted with high confidence to our current date system as "May 1, 1006"? Or did they just erroneously imply more accuracy than they actually have?


I think the calendar problem is an entirely different subject that would warrant its own article. I suppose what you could do is look up the calendar for each historical report of the event and then convert it to a single calendar. https://en.wikipedia.org/wiki/List_of_calendars


If the article were twice as long, I'd say it definitely should have more details about each local report, including dates in their corresponding calendars.

But at the current length? I think that detail would not be necessary and would even be kind of distracting for the sake of brevity. It's not really of great importance to the event itself anyway.


I've seen this tablet before, and am not convinced that it is actually the Pythagorean theorem. It's just a tablet with some tick marks inscribed onto it, along with a circular-looking thing. It's very much a stretch to say the person who etched those markings intended to express the Pythagorean theorem.

There was a point in time when I was very interested in ancient civilizations from Mesopotamia, but in more recent years I am way less interested in it. The scholarship in that field is just terrible. In my opinion, a lot of the stuff is on par with alien "investigators" and stuff like that, yet for some reason the general public sees the field as totally legit.


I would argue with you, but you've just produced a bunch of squiggles.

> It's very much a stretch to say the person who etched those markings intended to express the Pythagorean theorem.

No it isn't.

There are legit reasons to question a lot of the research on ancient civs, but that isn't one of them.


How is this different from seeing a fuzzy video of some lights in the sky and then coming to the conclusion that it is definitely a UFO? If you're so convinced that this carving definitely proves that the carver was intending to express the Pythagorean formula, then what is the evidence?

Some people's definition of "evidence" is different from my own. If somebody really wants to believe something, then just about anything qualifies as evidence. This is why UFO people consider literally every single fuzzy video as undeniable proof that aliens exist.


… because those “squiggles” are just “words” in a “language” you can’t “read”?

And if you could read it, you would find it contains a lot of relevant things, concluding with:

> … 1.414213, which is nothing other than the decimal value of the square root of 2, accurate to the nearest one hundred thousandth.

You might then think to yourself:

> The conclusion is inescapable. The Babylonians knew the relation between the length of the diagonal of a square and its side

Which is all clearly explained in the article you’re commenting on. Do you have anything else meaningful to add, beyond “it’s nuffin’ but squiggles mate” and “aliens”?


Here is a drawing of what they think this tablet says:

https://commons.wikimedia.org/wiki/File:YBC_7289_sketch.svg

It's just a bunch of numbers scribbled onto a tablet. For all we know it could just be some guy writing down the number of sheep he is willing to sell to his neighbor or something. To say this tablet proves the Mesopotamians knew about the Pythagorean theorem is quite a stretch.

To the people who want to believe, there is nothing that can be said. Believe what you want.

Also, this tablet has no provenance. The Wikipedia page on this tablet says "It is unknown where in Mesopotamia YBC 7289 comes from". Basically it just magically appeared one day. For all we know it could be faked. In any other field, this artifact would be ruled inauthentic. But in this field, for some reason it just doesn't matter.


Imagine for a moment that the people who created the tablet used a different number system than us, and also imagine that we knew that number system and could convert it.

Then those “bunch of numbers” becomes something else entirely. Specifically, they become a bunch of numbers that highly relate to the Pythagorean theorem.
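That conversion is easy to check. Assuming the standard reading of the tablet (digits 1;24,51,10 along the diagonal, 42;25,35 below it, and the side marked as 30), a quick sketch:

    # Interpret the base-60 digit sequences usually read off YBC 7289.
    def sexagesimal(digits):
        return sum(d / 60 ** i for i, d in enumerate(digits))

    diag_ratio = sexagesimal([1, 24, 51, 10])  # 1.4142129...  ~ sqrt(2)
    diagonal   = sexagesimal([42, 25, 35])     # 42.4263888... ~ 30 * sqrt(2)
    print(diag_ratio, 2 ** 0.5)                # 1.4142129629629630  1.4142135623730951
    print(diagonal, 30 * diag_ratio)           # 42.42638888888889   42.42638888888889

A sheep tally would be quite a coincidence: the three numbers satisfy side times ratio equals diagonal, with the ratio agreeing with sqrt(2) to better than one part in a million.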


You sound very confused.


I agree. Making a cuneiform tablet is a multistep process of recording an idea.


I don't mind the poor scholarship; it offers opportunities to have better ideas. What I love about the ancients is that they are just like us, with fewer objects.


The solution to this problem is to require the submitter to include a unit test that demonstrates the problem along with the CVE. If the unit test succeeds in DDoSing or whatever, then the CVE is published. If your unit test fails to produce the security problem, then it is ignored.
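Concretely, something like a pytest file attached to the report might do it. A rough sketch (everything here is hypothetical; parse_header and vulnerable_package stand in for whatever the reporter claims is affected):

    # Hypothetical proof-of-concept test accompanying a CVE submission.
    import time
    from vulnerable_package import parse_header  # stand-in for the code under report

    def test_cve_poc_denial_of_service():
        payload = b";" * 200_000          # crafted input described in the report
        start = time.monotonic()
        parse_header(payload)
        elapsed = time.monotonic() - start
        # The report's claim is that unpatched versions take minutes here;
        # the CVE would only be accepted if this assertion fails pre-fix.
        assert elapsed < 1.0

Triagers run it against the unpatched release; if it doesn't demonstrate the claimed failure, the report goes nowhere.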


In other words, PoC || GTFO for all submissions?


Ultimately "Show me the code" is the only standard that has ever worked for Open Source.

Give me code to reproduce an issue for people who are contributing as developers.


This works only for programs that are publicly available.


> We'll probably see something similar happen with "Tankie" and "CCP" and "Moscow shill"

Those terms are all specific to political discussions. The Dunning Kruger effect is known across every single cross section of the internet, and it is absolutely everywhere.


I would point out that while I think Dunning-Kruger is a thing, and pretty memetic, the actual research about it has been re-appraised in recent years.

We believe in it, yes. Was it proved in the root paper? Turns out.. no. Or at best weakly.


> In that sense the update wouldn't be "here's another thing I see all the time" but "here is the absolute maximalist claim I can make about a person's character."

Well, the text of the law states that the probability of Nazis coming up "approaches 100%". This is to say the law was made after an observation of how common the argument was. In my experience, the Dunning-Kruger argument has the same probability on the modern internet.


> https://xkcd.com/2030/

I really hate this attitude. This is a very common belief held by non-technical people. They believe that it is impossible to have perfect security, and that every single piece of software ever built is either already hacked, or waiting to be hacked by future hackers.

I truly believe one day a cryptographic voting system will be rolled out, but it's a long way away, not for technical reasons but for cultural ones.


Did you just call Randall Munroe a non-technical person?

> They believe that it is impossible to have perfect security, and that every single piece of software ever built is either already hacked, or waiting to be hacked by future hackers.

Yes. I do believe that.


every single piece of software ever built is either already hacked, or waiting to be hacked by future hackers

...or it's not worth hacking -- this is what your statement leaves out.

Nobody is going to be searching for bugs in cat(1) when there are bigger prizes out there.


> one day

There's the keyword.

And that day is not today.

Paper is fine. Let's use paper.

These voting machines that can be tracelessly hacked with a pen drive are terrifying, and no, there isn't enough awareness of that fact among people.


What do you propose to do with paper? Mostly it just gets fed into a machine. Is it impossible to vote twice with paper? Is it impossible to throw away paper? Is it impossible to miscount paper?

Paper doesn't impart any security property by itself. Can paper be used in a secure system? Maybe. Can a machine be used in a secure system? Maybe. Is all paper secure? No. Is any machine secure? No. Same for blockchain, mobile, internet, satellite, and ouija boards.


Do you have credentials or sources for these odd claims and false equivalences?

Do you have them on paper?


If you ask 10 different people why they are depressed, you'll get 10 different answers. This illustrates why I think the scientific method is the wrong way to go about understanding the human mind.

Also, if you ask a depressed person why they are depressed, they may not know exactly why. I spent all of my high school years depressed. It wasn't until a few years after I graduated college that I realized the reason I was depressed: it was because I had abusive parents. If you had asked me in high school why I was depressed, I would have said something like "I'm a bad person but I have no reason why". At the time I thought my parents treating me like I was a bad person was secondary to the actual problem.

If you really want insight into what makes the human mind depressed, then ask people who have overcome their depression. The first step to overcoming depression is to discover what is making you depressed. If someone is still depressed, they probably don't know what is making them depressed, and so their "data" is just noise.

The problem with the psychology field is that it has an obsession with "data". Everything has to be on a pie chart or a line graph or something like that. Every "study" has to be on a grand scale and then averaged together to make a single conclusion.

The problem is that the human mind is not replicable. You can perform a "study" on a sample of people and get a result, then replicate that exact same study on the exact same sample of people at a later time, and still get a different result. In order for the scientific method to be applicable, you have to be able to get the exact same result each time you replicate the study. This is not possible in psychology.


This is just objectively not true. Population-level data has been used to establish all sorts of things about psychological disorders. You mention your experience of parental abuse causing depression: it absolutely does! And that relationship was established unequivocally decades ago by showing the correlation between adverse childhood events and the development of depression.

One of your specific complaints is that depressed people do not report the causes of their depression very accurately. This is also true! The unreliability of depressed people is emphasized in the diagnostic criteria, and a good psychologist will probe for other underlying issues. But researchers absolutely can identify causes of depression by supplementing self-reports with objective data.

> The problem with the psychology field is that it has an obsession with "data".

Yeah, that's how science works. By collecting evidence you can make descriptive statements about the world. If psychologists didn't present data, they would instead be rightly criticized for not presenting data.

It sort of seems like you jump from the poor accuracy of depressed people's self-reports to dismissing the entire field of psychology. Humans are complicated and messy, but we do know a ton about psychological disorders.


For what it's worth, I was also reacting to this assertion:

> The first step to overcoming depression is to discover what is making you depressed. If someone is still depressed, they probably don't know what is making them depressed, and so their "data" is just noise.

This was not my experience, and hasn't been the experience of others I know who have been depressed. The condition is often caused by an acute stressor, but in my case it was caused by a lack of social interaction and exercise during the pandemic. Trying to identify the "cause" of my feelings wasn't helpful because they weren't rational - instead I had to get out of bed, exercise, eat, and socialize until the episode cleared.


It's saddening to hear about your years of depression. Parental abuse traumatizes deeply, and while I may not feel the same pain as you, I can imagine it destroying a life completely. I hope you can receive my and other people's compassion and won't relapse into depression.

I would still like to give some response to your content in hopes that you can benefit from it.

Psychology, or the part of it that is accessible to the non-academic public, and clinical psychotherapy are related but different fields. Psychology alone can be very disappointing to people in pain; I can relate, and shared your sentiment, as I was in pain back then too. Once I dug deep into the academic literature of clinical psychotherapy and the books for the general public, I realized I had just been looking in the wrong corner of the library.

Compare psychology, which talks about what a nail is, to clinical psychotherapy, which talks about how to construct a hammer and drive the nail correctly into the wall. They're related but not the same, and only one is really useful if our goal is to hang a beautiful painting on the wall.

Hope you find some inspiration from this. Whenever it comes back, please remember to get help quickly before the wounds get septic and require much more difficult intervention. There are good people out there.


You cannot rely on self-reporting as an accurate way to diagnose someone. You need to do single-subject studies, and doing sample studies for this is obviously not the right way to go about it. Data collection is very important to understanding human behaviour.


While some psychologists might not be aware of this (as the Reason criticism suggests... though it mostly takes the overworked angle), it seems almost insulting to suggest that researchers at his level are not?


You'd be surprised how niche this sort of discussion is in neuropsych research. Is it because researchers aren't thinking enough about the big picture or because the NIH is incompetent and the researchers just go along with what they need to get funding? Idk, but either way it's not great. I suspect this problem exists in a lot of sciences but psychiatry in particular seems to have overcorrected for some of its less rigorous history.

That said, I don't think it's correct to say the scientific method can't apply. We absolutely have to be more careful about averaging over many people with different actual diseases but the same current shitty label, and we also have to be better about looking at longitudinal signals. But there are ways to do this in a more scientific manner (e.g. involving well-defined testable predictions).

I agree that as part of this transitory period (especially with the sample sizes that most psych studies can feasibly get) there should be more synergy between qualitative human-focused approaches and what the by the book science people are doing. It's unfortunate how far behind research psychiatric practice can lag in some respects, and in other respects the psych researchers seem to not take practitioner observations too seriously these days.


<< The problem is that the human mind is not replicable.

I am willing to agree that it is still more art than science in its current state, but I personally think we are slowly moving towards a replicable mind after all -- a terrifying prospect, because it would truly prove 'free will' is an illusion. It currently seems impossible due to the sheer number of factors with overlapping effects. I don't think it is impossible, though.


I suspect it is improbable, because an honest, undeniable belief that you lack free will would drive someone stark raving mad. It would represent a fundamental phase change in the way an individual interacts with the world, which I would wager would be antithetical to effective transmission of the belief.

Far more dangerous would be the 90% mark, where understanding of psychological determinism advances to a point where people still believe they have free will, but are wildly effective at influencing others. That looks like advertising today, but worse.


I don't believe in free will. I haven't really for about 20 years or so. However, I act as if I do, as I've always done. It's a habit I choose not to break. Changing to act as I believe would be too much for me to handle. I would need to rethink pretty much everything I do. It's a completely different set of axioms in every domain of human knowledge, ethics, behaviour and interaction.

I don't think I'm raving mad, but perhaps because I don't act on my beliefs, I don't fit your conditions?

However, I do believe the justice system would be better run with this in mind.


Not that I agree with your premise, but why would you personally adopt a cognitive dissonance? A healthy mind is generally regarded as one that is satisfied with itself and free of self-contradictions.


I think behaving as if there is no free will is such a change from how I was raised and from how 99.99% of society believes and functions that it would require some Buddha-level strength of will, which I don't possess.

It's so different than all the constructs we've created, you can kind of throw "generally regarded" out the window, frankly :) (don't mean that to sound harsh if it does). All of psychology would need to be rewritten. By me? I guess?

The 3rd option is to try and convince myself of what I regard as a lie (that we have free will), which is also difficult, but probably easier?


> I suspect it is improbable, because an honest, undeniable belief that you lack free will would drive someone stark raving mad.

This presumes that humans are some kind of ultra-rational uber-mensch that is incapable of ignoring inconvenient facts.

Truth of the matter is that an illusion of free-will is just as good as the real thing from the perspective of an individual human's mind.

Also, I think you're confusing free will and predictability; everyone kind of seems to do this for whatever reason, but they aren't mutually exclusive at all.

All of your actions can be the result of quantum mechanical effects, eg. "true randomness", but still be completely unpredictable beyond a certain time/noise horizon even with a 'perfect' simulation. But unless you want to suggest that the free-will arises from the quantum-foam (which I mean, is as unfalsifiable a claim as any other religion, so go for it), you kinda run out of room to fit the free-will.

As for my experience of not believing in free will? It's been pretty much fine.

I suspect that belief in something free-will-like is pretty evolutionarily adaptive, so I think we'd expect it to arise in most simulated minds subject to similar evolutionary pressures.

The only thing I haven't really figured out is why we aren't just p-zombies, but what is life without some mysteries, right? (And given this, I wholeheartedly agree with u/mikeschurman about the need for a justice system that isn't unnecessarily cruel)


I don't believe in free will at all. I never really did from a physics perspective, but after many years of meditating, I don't believe it at the highest levels of abstraction now either.

I don't understand how that is supposed to change anything at all about how an individual interacts with the world though?


Yeah I don't think we're getting there in our lifetimes, especially not with how neurobio/psych research has been progressing lately, but I agree it is theoretically possible. And along the way there are many imperfect models that could still be highly useful - I don't think replicable is a binary thing at this level of abstraction.

