
> Further down the social hierarchies, there are plenty of "Santa Claus fans" - really Wanting To Believe that their FBI / CIA / police / etc. are paragons of morality and competence - who react poorly to messengers bearing bad news.

I'll take this opportunity to plug the Iron Law of Bureaucracy which states that in any organization there are two kinds of people: those loyal to the goals of the org, and those loyal to the org itself. Organizations tend to get captured by those loyal to the org itself, even though the people loyal to its goals are the ones you actually want running the show. Eventually you end up with a top-heavy organization that's divorced from its stated goals and exists only to further its own influence and existence. You'll find it in schools, cancer charities, government, etc.


Sort-of yes...but looking at (say) the CIA's long indifference to a series of employees who effectively turned traitor against the CIA (by selling its hottest cold-war secrets to the U.S.S.R.), I don't think "loyal to the org itself" is a good description of the rot.


I had never heard this, and I'm immediately adding it to the big list of things that explain more about present reality than I'm necessarily comfortable with.


That's Lao Tsu, I just happen to be reading him right now.


> Few developers know how to code software that is nice to install and maintain anymore.

NetBSD is refreshing for precisely this reason. I'm not sure why it doesn't get more love tbh, it's not glitzy but goddamn that's how you code.


> and allows for new metrics without the most serious issues present in the Alcubierre solution.

...like violating causality?


Space-time can expand at faster-than-light speeds - this is known for sure, since we live in a 93-billion light-years wide universe that's only 13 billion years old.

As far as I understand, the current conjecture is that an Alcubierre drive could move at faster-than-light speeds (if the negative mass/energy needed to build it existed), but that if it tried to move into its own past, it would somehow destroy itself because of a conjectured quantum gravity phenomenon - this is called the "chronology protection conjecture", and Alcubierre himself has talked about it:

> The conjecture has not been proven (it wouldn't be a conjecture if it had), but there are good arguments in its favor based on quantum field theory. The conjecture does not prohibit faster-than-light travel. It just states that if a method to travel faster than light exists, and one tries to use it to build a time machine, something will go wrong: the energy accumulated will explode, or it will create a black hole.

[0] https://web.archive.org/web/20160318223348/http://ccrg.rit.e... (last three slides touch on this area)


> Space-time can expand at faster-than-light speeds - this is known for sure

Not exactly. In the standard cosmology, in an equatorial slicing (cf. slicing a cone with each plane perpendicular to the axis, giving circles) of the expanding universe, space expands very slightly at each point in space as the cosmological time ticks by. If we choose a point (p,t) and some coarsening procedure to reduce the count of points immediately around (p,t) to a finite number, then very soon after, at (p,t+\epsilon), and using the same coarsening procedure, there will be more points immediately around p. So, for example, for something at (p=const,t_0) we might count seven points in which we could find that something at t_1: {(p,t_1), (p_{x+1},t_1), (p_{x-1},t_1), (p_{y+1},t_1), ...}. But there might be, say, twelve points immediately around (p,t_n), twenty-four around (p,t_{n+n}), etc. The same thinking applies at every point in space with the same value of t. As the universe ages, the number of points in space, already enormous, keeps growing: at each of those points in space we add more points.[1]

To that last paragraph we add a system of coordinates where there are cosmological observers[2] "stuck" at a particular spatial coordinate like (x,y,z) at all times, observing more space appearing between them, requiring coordinates in between to label that space. Physically these observers, if already distant, see each other's image becoming smaller, dimmer, and redder, as if they were accelerating away from each other. Physically neither observer detects such an acceleration -- they are in perfect free-fall.

The metric expansion, and accelerated expansion, can be (and is usually treated as) purely local. Nothing interacts superluminally, and there is no need from observation to have large nonuniformities in local expansion in the known universe. That is, at large enough distance scales, the local expansion at any point is well-modelled as constant at all points and at all times: the cosmological constant. (At smaller distance scales, in regions dominated by gravitationally collapsing matter (including dark matter), the local expansion is zero. "Manhattan is not expanding", nor is the rest of the solar system or anything in our galactic cluster as far as we can tell).

In the unknown early universe, various approaches to cosmic inflation essentially generate a lot more new points around old points than expansion does, and the difference in inflation around any point can be large (loosely, the number-of-points-generated-at-point-p gap between "expanding" and "not expanding" is much narrower than the gap between "inflating" and "inflating less", let alone "not inflating").

> 93-billion light-years wide universe that's only 13 billion years old

There are a lot more points between galaxy clusters in an expanding universe than there are in a non-expanding universe. (Inflation already stopped making new points in space long before the first protons formed, let alone galaxies).
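
To put rough numbers on that (a back-of-the-envelope sketch; the only assumed input is the usual H0 ≈ 70 km/s/Mpc, with v = H0 * D the recession speed of a comoving galaxy at proper distance D):

    ;; Back-of-the-envelope, in Emacs Lisp since that's what's handy in this thread.
    ;; Assumed value: H0 ~ 70 km/s/Mpc.  Beyond the Hubble distance c/H0 the
    ;; recession speed v = H0 * D already exceeds c.
    (let ((c 2.998e8)                 ; speed of light, m/s
          (H0 (/ 7.0e4 3.086e22))     ; 70 km/s per Mpc, in 1/s
          (ly 9.461e15))              ; metres per light-year
      (/ c H0 ly))                    ; => ~1.4e10, i.e. roughly 14 billion light-years

Comoving points farther apart than that recede faster than light without anything moving locally faster than light, which is how a universe only ~13 billion years old can end up ~93 billion light-years across.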

For the most part, galaxy clusters tend to become more compact over time; the matter in them is thick and trending thicker. Expansion doesn't arrest that trend at all. Galaxy clusters shine across the electromagnetic spectrum, and also expel neutrinos and some amount of hot dust and gas. That thin expelled matter does not block expansion very close to it, and so thin matter gets smeared across new points as they appear in its immediate neighbourhood. In the cosmological frame this means stretching their wavelengths or equivalently reducing their kinetic energy or equivalently reducing their temperature adiabatically. However, the expansion is not strong enough to break molecular bonds, so molecules will cool; they won't snap apart because of standard expansion. (Likewise expansion doesn't ionize atoms or fission nuclei; it barely distorts clouds or streams of thick-enough molecular gas that float out of galaxy clusters. The key to turning expansion off is thickness of matter, which may be generated by electromagnetic interactions or even just gravitation).
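
In symbols (the standard relations, writing a(t) for the scale factor), that stretching and adiabatic cooling amount to

    \lambda_{obs} / \lambda_{emit} = a(t_{obs}) / a(t_{emit}) = 1 + z
    p \propto 1 / a(t)    (momentum of freely streaming particles, photons included)

so wavelengths grow and kinetic energies fall as a(t) grows, while bound systems stay bound.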

Now, let's talk about "actual" FTL. There is "local" FTL, in which a massive object and a pulse of light starting at the same point end up with the massive object winning a short race ("short" being e.g. micrometers to kilometres). Our notions of causality arise from the structure of spacetime ("a Lorentzian manifold" gives us a particular <https://en.wikipedia.org/wiki/Causal_structure>), and (if not interfered with, i.e., in vacuum) massless objects (like a pulse of light) move at a particular speed that no massive object can exactly reach. A local FTL event is incompatible with a Lorentzian manifold, but if we see local FTL, out goes the objection that "everything we do strongly supports our idea that the universe is Lorentzian". (<https://en.wikipedia.org/wiki/Modern_searches_for_Lorentz_vi...>)

While reliably observing a local FTL event might seem inconvenient for the theories, relativists could certainly cope with what they would see as a breaking of the global hyperbolicity condition of our universe (which we get from having local Lorentz Invariance everywhere).

Alcubierre-like ideas do not break local Lorentz invariance; there is no local FTL, so it is less disturbing in some ways than tachyons or whatever. Indeed, Alcubierre neatly packaged up his idea into a "bump function" on a perfectly normal Lorentzian spacetime.

The Alcubierre idea essentially just breaks the constant cosmic expansion at points immediately around the "ship", making a lot lot lot more points behind the ship and destroying (or shrinking) a lot of points in front of it. It's a local effect confined to the region around the thin shell of the "warp bubble". The hardest thing is to make the space around the ship relax back to something close to what it was before the bubble zipped through it, and that's the source of many of the objections rooted in properties of as-yet-unobserved matter.

However, if we assume that this is all workable and a ship or computer memory chip or whatever goes from A to B faster than light can, our global causal structure cannot be the strongest one we can get by having local Lorentz invariance everywhere.

In principle we are probably ok climbing down to nearly any rung of the <https://en.wikipedia.org/wiki/Causality_conditions>; however, for most of those we can't "just" take a set of initial values and evolve them forward, which is everyone's preferred approach to problems in general relativity. The initial value formulation of general relativity came decades after exact solutions like Schwarzschild's and Lemaître-Tolman-Bondi's, and perturbations upon all those, so this is a luxury that was not always available to relativists; and at least if future relativists lose global hyperbolicity, they will have computers with lots of fast memory.

Examining a warp bubble spacetime that is on a lower rung of the causality ladder essentially requires completely specifying the values of all the fields at all the points in a large region as an "exception" sandwiched between two time-separated sets of "initial" values. This requirement was part of Alcubierre's motivation to think about a warp bubble -- he already had an interest in the initial values approach, where it becomes hard to use, and where it breaks down (indeed he wrote a textbook that deals with that, <https://academic.oup.com/book/9640>).

As Alcubierre suggests in the quote you found, global hyperbolicity is what we appear to have in our universe, and there is nothing obviously "enforcing" it. So why isn't there obvious FTL in many places (or even everywhere)? Who knows. There is no "right" answer to that, and it might end up that it's just a feature of our universe like its three spatial and one timelike dimension. (cf. <https://en.wikipedia.org/wiki/Globally_hyperbolic_manifold>).

--

[1] This invites a Zeno's paradox view of the metric expansion. At later times there are more steps that one must take between "0" and "1" (and at the same time even more steps between "1" and "2" on the same ruler). One has to visit every mark on the ruler between origin and destination, but the number of marks doubles and re-doubles and re-re-doubles over the course of one's travel.

[2] Experts will recognize that HN is a place for informal descriptions, and that here I flick (without explicit warning) between thinking in terms of comoving coordinates (more space between each coordinate, or more time for a pulse of light to move between two coordinates) and thinking in terms of Fermi coordinates (e.g. <https://link.springer.com/article/10.1007/s00023-011-0080-9>) when taking a more, um, expansive view of Raychaudhuri's equations. I apologize if that makes this harder to follow for people familiar with the mathematical details of physical cosmology.


In case someone wants to read a really good explanation of this:

http://www.physicsmatt.com/blog/2016/8/25/why-ftl-implies-ti...

Generally speaking, FTL (travel/communication), causality, and relativity cannot all three be true at the same time. If you can travel or communicate FTL (they are really the same thing), then you can come up with a scenario in which an observer sees an event happen, and only then sees the cause of that event.

Extending that logic, if the observer can also move at superluminal speeds, he could prevent the cause of the event after seeing the event happen, leading to a paradox.


> Extending that logic, if the observer can also move at superluminal speeds, he could prevent the cause of the event after seeing the event happen, leading to a paradox

This is a guess, i.e. one possible outcome physicists are considering.

People have proposed alternative outcomes of FTL like the (in my opinion much more sensible) Novikov consistency principle, which roughly proposes that spacetime and the entities it contains (e.g. an observer's worldline) should be looked upon as a whole, in the sense that they need to be self-consistent. Spacetime is not time-dependent and does not evolve, so it does not make much sense to say "something something leads to a [spacetime] paradox".


Addendum: Self-consistency basically amounts to periodic boundary conditions – not really surprising when you have (almost) closed time-like curves.


Periodicity in time is unusual to think about in physics, partly because you start to get wacky results. If you could establish a CTC in nature, it would allow for computers that can efficiently (i.e. in polynomial time) solve not just NP problems but actually even all of PSPACE [0]. You can interpret this in two ways. There’s the hopeful way: “we should spend a boatload of money trying to find CTCs in our universe since they’ll let us create super-ultra computers”. And then there’s the pessimistic way, that nature probably isn’t going to give us a free lunch like that. Sadly, I’m in the pessimist camp on this one!

[0] https://www.scottaaronson.com/papers/ctc.pdf


This explanation, like previous explanations for "FTL implies time travel" I've read, presupposes that the signal is actually moving at superluminal speeds, i.e. actually covering a distance greater than 299,792,458 meters in a single second from the signal's perspective. This would not be relevant for Alcubierre drives or wormholes or anything else that warps spacetime, since the whole point of such things is to stretch/contract space such that the thing is traveling a much shorter distance per second.

In short, if I were to instantaneously poof out of existence here on Earth, poof into existence on Mars a second later, grab a rock, poof out of existence again, and poof back into existence back on Earth again another second later and hand you that rock, it doesn't seem reasonable to assert that I traveled backward in time when I clearly traveled forward by two seconds. The whole lecture on that external observer is irrelevant, since there's nothing to observe unless the observer happens to be in my bubble/wormhole/whatever - and even then, one'd only be observing subluminal actions/signals within that region of spacetime.


The problem is that if FTL travel exists then our existing theories about spacetime are wrong in some way (despite making lots of great predictions) or time paradoxes are possible in which case we have no idea what the consequences could be.

Your example assumes there is some underlying rate at which time advances for the universe (or at least Earth and Mars) and that spacetime as we know it (including relativity and time dilation) are just some kind of modifier on top.

But theory and experiment so far point to that not being the case. There is no "pop out of existence here and pop in over there" without time travel (as best as we can tell). The whole light cone / worldline explanations are more formal explanations of that.

Now you can magic this problem away by proposing any number of schemes... like saying the entire universe's worldline exists within a metaworldline, and time travel actually resets the state of the universe to what it was in the past and then re-runs it... but all of that always proceeds forward in the metaworldline. In other words, all past histories existed in a causal order; changing the past just adds "new commits" to the universe, but history is never really rewritten. Bam! Our magic theory solves all paradox problems without requiring billions of parallel universes and allows time travel! But it's not a theory we can test or make predictions with, so it isn't a useful scientific theory. It might as well be literal magic.

Similarly you could propose that GR is wrong... but your new theory is gonna need to match GR's predictions that have proven true while making some new ones we can test, while also avoiding or explaining causality violations and paradoxes.


> Your example assumes there is some underlying rate at which time advances for the universe (or at least Earth and Mars) and that spacetime as we know it (including relativity and time dilation) are just some kind of modifier on top.

Not necessarily; only that time is advancing in a forward direction. Whether 1 second on Earth is 1 second or 10 seconds or 0.1 seconds or what have you on Mars doesn't change the underlying premise: something disappeared from one place and appeared some positive amount of time later in another place. The only way I see that implying backward time travel is if time on Earth or Mars is already advancing backward, and if that's the case then the effects of Alcubierre drives on causality are probably the least of our worries.

And on that note...

> There is no "pop out of existence here and pop in over there" without time travel (as best as we can tell). The whole light cone / worldline explanations are more formal explanations of that.

The whole concept of a "light cone" seems to assume that spacetime is uniform (or at least doesn't have bubbles or holes in it). If spacetime is lumpy / Swiss cheesy (as Alcubierre drives or wormholes would cause, respectively), then that would result in similar lumpiness or holeyness in the light cone. In other words: why assume that it's "cone" shaped in situations that would in all likelihood dramatically deform that cone? In other other words: the light cone / worldline explanations don't really address cases where spacetime is outright deformed to shorten the distance something has to travel in order to go from point A to point B.

Further, the "light cone" argument (as presented in the article) seems to hinge on when observers find out about events... but just because an observer observed something to happen in a given order doesn't mean it actually happened in that order. If the light from Mars blowing up reaches us one second before the light from Pluto blowing up reaches us, does that mean that Mars blew up one second before Pluto did? It doesn't seem like observations are absolute truths, and I'm failing to understand why we're treating them as such.


I think you're missing one factor, though: the fast-but-subluminal observer. You're only considering Earth and Mars, and someone poofing between the two.

The issue is that if you have a subluminal (but moving at a significant fraction of the speed of light) observer outside the reference frames of Earth and Mars, there are conditions where they could see you appear on Mars, and then communicate back to Earth -- before you left -- to tell you not to poof to Mars in the first place.

This doesn't have anything to do with the idea of physically moving through space at some rate (that is, "touching" every point between Earth and Mars during your journey there); poofing from one place to another would have the same effect. And I don't think it matters how much time you spend on Mars, whether it's 1 second or 1 day, before poofing back to Earth.

Also, this explanation does not suggest that the poofer has time-traveled to the past (as you argue against); it's the fast-but-subluminal observer who has done so.

At least that's how I understand it; I'm no physicist.


> I think you're missing one factor, though: the fast-but-subluminal observer. You're only considering Earth and Mars, and someone poofing between the two.

There wouldn't be anything meaningful to observe:

- An observer on Earth would see me poof out of existence and poof back into existence with a rock in my hand; a few minutes later, with a really good telescope, that observer might see me poof into existence on Mars, take a rock, and poof back out of existence.

- An observer on Mars would see me poof into existence, take a rock, and poof back out of existence; a few minutes later, with a really good telescope, that observer might see me poof out of existence on Earth and poof back into existence while holding a rock.

- An observer somewhere in between with a really good telescope might be able to see the poofing in and out of existence on Earth and/or Mars, but would only receive that light after I had already returned to Earth, and would lack the necessary information to reliably assert which happened first.

The relevance of that fast-but-subluminal observer is dependent on me actually traversing every last micron of space from Earth to Mars and back in those two seconds, but that ain't what's happening. Rather, I'm taking a shortcut, and in order for the observer to observe anything other than the endpoints said observer would need to be taking that same exact shortcut alongside me - otherwise, at worst, the observer just sees two copies of me (one on Earth, and one on Mars), and by the time the observer thinks to do anything about that I would already have handed you a Mars rock.

----

The more mathy explanation of this involves the Lorentz factor, which in Lisp (because I'm on my computer and Emacs is handy) is (assuming c = 1):

    (defun lorentz-factor (v) (/ 1 (sqrt (- 1 (expt v 2)))))
where v is the relative velocity between two reference frames. So, (lorentz-factor 0.1) would correspond to something moving at 0.1c, (lorentz-factor 1) would correspond to something moving at the speed of light, and (lorentz-factor 2) would correspond to something moving at twice the speed of light.

You'll notice that (lorentz-factor 1) produces a division by zero, and that anything past that produces imaginary numbers. That's the basis for the "FTL implies time travel" argument; it assumes that something is actually traveling at a faster-than-light velocity (i.e. actually moving through every last micron of the space from Earth to Mars and back within those two seconds) and thus producing a Lorentz factor which - when plugged into a full Lorentz transformation - would imply backward time travel.
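
To make that concrete, here is a minimal sketch reusing lorentz-factor from above (c = 1; the lorentz-t-prime helper and the numbers are made up for illustration):

    ;; Transform the arrival event of a signal that covers x = 10 in t = 5
    ;; (i.e. moving at 2c) into a frame moving at v = 0.6 relative to the sender.
    (defun lorentz-t-prime (v x tm)
      "Time coordinate t' = gamma * (tm - v*x) in a frame moving at velocity V."
      (* (lorentz-factor v) (- tm (* v x))))

    ;; Departure is at (x = 0, t = 0) in both frames; the arrival transforms to:
    (lorentz-t-prime 0.6 10.0 5.0) ;; => -1.25 (modulo float rounding): arrival precedes departure

For anything subluminal (x < t with c = 1), no |v| < 1 can make t' negative, which is why that argument only bites if something really does cover that coordinate distance in that coordinate time.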

However, that ain't really applicable to the "poofing" above (nor is it applicable to Alcubierre drives or wormholes, of which said "poofing" is an abstraction), because the specific premise here is that I am not actually moving at a velocity significantly above 0; instead, I'm stretching the space behind me / contracting the space in front of me (in the case of an Alcubierre drive) or punching a shortcut between two points in space (in the case of a wormhole) such that I don't have to move at a speed significantly greater than zero. Since my velocity remains basically 0, my Lorentz factor ends up being basically 1, and thereby eliminates the mathematical basis for my "poofing" having any implication of backward time travel.


Subliminal, which is mentioned multiple times.


^ this, and I'm guessing it was autocorrected (because that's what almost happened to me) but the term is subluminal


What did anyone expect? Everyone's been getting smoke blown up their ass for years now by 'revolutionary' app makers that did very little except skate around regulations and make speculators money. The entire industry looks sleazy no matter which way you cut it. My only surprise is that the editors felt the need to say anything.


yeah, I can't even really be upset about it.

It's like the White House; they SHOULD be skeptical.


I agree with you, but it shouldn't be overlooked that industry takes shortcuts which are disastrous for the environment to pad its margins very slightly so the executives can get a bonus.

The crux of the problem is the costs we allow to be externalized and the arduous legal process involved in getting a small fraction of the real damages paid. You shouldn't need a lawsuit to make a company pay for every penny of damage they did.


I'm a little perplexed by the ISBN system. The whole centralized affair, where you have to purchase ISBNs, seems like a racket. ISBNs cost more in some countries (America) than they do in others (Canada), for no reason other than that they can get away with it.

Much better would be a UUID generated from unique values, like a hash of the timestamp and publisher of a book. If you limit the length and number of the fields you hash to generate the UUID, you could even prove there will be zero collisions and eliminate any need for collision checks, and thus any need for an organization that charges money.
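
A minimal sketch of that sort of scheme (Emacs Lisp; the field choices and the book-id name are made up). As the replies note, truncating a hash makes collisions merely improbable rather than provably impossible:

    ;; Hypothetical: derive an identifier from publisher + timestamp.
    ;; `secure-hash' returns a hex digest; truncating it trades length for a
    ;; small but nonzero collision probability.
    (defun book-id (publisher timestamp)
      (substring (secure-hash 'sha256 (concat publisher "|" timestamp)) 0 16))

    (book-id "Example House" "1970-01-01T00:00:00Z") ;; => 16 hex characters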


ISBN was introduced in 1970. While hash functions did exist at this point (https://en.wikipedia.org/wiki/Hash_function#History) the computational resources generally available for this sort of thing were... rather lacking. The Apple II wasn't introduced until 1977.

I will leave figuring out which hashing functions were known back in 1970, and experimenting with calculating them by hand, up to you. :)


While archaic, ISBN doesn't seem a bad system to me.

Short values are more reliable in retail situations. They can be typed in by hand or read with cheap scanners.

You are of course free to publish without an ISBN if you don't care about the legacy ecosystem.

There's nothing stopping anyone from creating or promoting an alternative but I don't think the incentives are there. There's not enough money in it, and I don't think the cost savings are enough to make a switch compelling.



That's definitely an interesting question, why they don't use a longer identifier without central/hierarchical allocation. I don't have an answer, but some possibly relevant points:

* Rather than compute a hash you could just generate a random number: same risk of collision if done correctly (but different opportunities for making a mistake).

* When ISBNs were introduced in the 1960s people would have been typing and even handwriting them so keeping them short would have been important.

* ISBNs have now been incorporated into EANs (13 digits), which are used for all things sold by retailers, except in the USA and Canada, which, according to Wikipedia, use a system called UPC. (Ironically, the U stands for "universal" while the E stands for "European". Of course the 12-digit system got incorporated into the 13-digit system. Probably there will be a 14-digit system one day.)

* In a UK supermarket if the barcode won't scan someone has to type in the digits. I assume that in most cases they type all 13 digits but I haven't watched carefully. (Of course I am now inspired to watch more carefully next time it happens.) They could have a really clever interface connected to a real-time database of barcodes which recently failed to scan because I expect whole batches of a product have badly printed or crinkled packaging.

* A suitably designed 25-digit system would only take twice as long, or less than twice as long, to type in as the current 13-digit system, but the system would have to be suitably designed for that purpose. Having the computer tell the human at the end "there's a mistake somewhere" would be no good at all. At the very least you could have a check digit for each half and tell the human which half contains the mistake, but of course you could do much better than that (see the check-digit sketch just after this list).

* I have noticed that Sainsbury's (a major UK supermarket) has a system of 8-digit barcodes for its own products, but Tesco (another major supermarket) uses the standard 13-digit barcodes for its own products.

* ALDI products have giant barcodes printed in several places on the packaging without the corresponding digits printed underneath the barcode: the scanner will never fail!
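
A small sketch of the standard ISBN-13/EAN-13 check digit referenced in the list above (Emacs Lisp): digits are weighted 1,3,1,3,... and the final digit tops the sum up to a multiple of 10, so any single mistyped digit is caught.

    ;; EAN-13 / ISBN-13 check digit from the first 12 digits.
    (defun ean13-check-digit (first-12-digits)
      (let ((sum 0) (i 0))
        (dolist (d first-12-digits)
          (setq sum (+ sum (* d (if (= (% i 2) 0) 1 3)))
                i (1+ i)))
        (% (- 10 (% sum 10)) 10)))

    ;; The textbook example ISBN 978-0-306-40615-? gets check digit 7:
    (ean13-check-digit '(9 7 8 0 3 0 6 4 0 6 1 5)) ;; => 7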


> Much better would be a UUID generated from unique values, like a hash of the timestamp and publisher of a book. If you limit the length and number of the fields you hash to generate the UUID, you could even prove there will be zero collisions and eliminate any need for collision checks, and thus any need for an organization that charges money.

That's false. Your algorithm of hashing a timestamp and book publisher name cannot be proven to be collision-free.


but the probability of 16 completely random bytes colliding is extremely low...


Yes, but I was refuting a false point, that those bytes can be proven to never collide... Obviously, they can collide. In the real world, programmers should be prepared for random collisions, yes, but also for created collisions...

False assumptions are the bane of correct design and will cause an entire system to fail in unpredictable ways or be exploited without detection.


For almost a decade now, Wikipedia has reminded me of those televangelists who would beg for donations to keep the lights on, claiming Uncle Sam was kicking the door down to seize their assets. Then they'd drive home in a brand new Cadillac.


Let's get rid of qualified immunity and add stronger protections against eminent domain while we're at it. Also, we need to address corrupt judges, we have very few legal mechanisms for that.


I would like to drastically claw it back but I think there are limited circumstances where that immunity should apply.

At the very least, criminal immunity and being required to carry large insurance policies for civil damages. Paid for out of pocket and underwritten based on their job performance/complaints.


Police are people and people make mistakes; this needs to be recognized on some level. But QI as it is now is positively un-American. We're supposed to have checks and balances and recourse for misdeeds. Nobody should be above the law. You literally need circumstances like those surrounding George Floyd to hold police accountable in this country.

2A was supposed to be insurance against nonsense like this.


It reads funny, but fewer people drive at night, yet they still make up about half of accidents.


Yeah, but it doesn't say the cause is biker visibility. I'd guess bikers themselves are far more likely to miss a pothole (or some other hazard) at night and fatally crash.


So why not write something like "accidents happen at a higher rate at night"?


or "nearly half despite ..."

