
The cables under the Baltic (with the exception of those serving some islands) are not 'critical': they're used for good reasons (cost, latency, directness), but not because there's no other way to connect to the Internet.

So, annoyance at most.


I experienced the trust-factor issue (being banned, in effect, without an official ban) on my Linux CS:GO account in 2021: it dropped to yellow and then red. This made it difficult to find teammates, as I was constantly matched with cheaters.

I discovered I wasn't alone, as many other Linux users with Radeon GPUs and 16GB+ VRAM were experiencing similar problems. We created a GitHub issue to track the problem and try to find a solution: https://github.com/ValveSoftware/csgo-osx-linux/issues/2630

After some investigation, we found that Valve was punishing Linux users with certain hardware configurations (Radeon cards with >=16GB of VRAM, which were quite new at the time).

Eventually, after a user reached out to gaben directly, the issue was fixed: https://github.com/ValveSoftware/csgo-osx-linux/issues/2630#...

I suspect this was because Valve was preparing to launch the Steam Deck, and gaben wanted to ensure that Linux users had a better experience with the device (just a guess).


Could it be that Gabe Newell is a nice guy?

It's possible, but it's also important to be aware of the business side of things.

Valve makes a significant amount of money from in-game transactions, and some of their practices around this are shady. Kids using their parents' credit cards, the gambling industry built around in-game items, and the potentially addictive nature of colorful virtual items marketed towards kids are all valid concerns.

So, while gaben might be nice, it's unlikely that this gets in the way of Valve's drive to maximize profits in every way they can legally get away with.


That email address goes to a team of people, but if you send something substantial and well-meaning, they'll look into it.

He does respond to minor inquiries frequently, but do remember that his company supports a gigantic predatory underage gambling market.

> supports a gigantic predatory underage gambling market

Last year Valve updated their code of conduct and effectively banned gambling. They've also been known to send cease-and-desist orders to various CS:GO gambling sites.

So I wouldn't say that they support it, though for a long time they weren't actively combating it either.


The above commenter was probably inspired by the recent investigative(!) video series by Coffeezilla (and decided not to mention it?). It was in part 2 or 3 that Coffeezilla alleged Valve's legal actions were mostly about good PR and about sending a signal to those ~websites~ businesses, to keep them in check.

However, it is indeed the case that Valve has introduced greater and greater restrictions on inventory handling. These measures obviously go far beyond just counteracting possible scammers and phishers. Still, I am inclined to believe they could have implemented all of them many years ago, if only they had wanted to. I highly recommend the videos. You can maybe skip the first one; it's mostly about casino owners' drama.

[1]: https://www.youtube.com/watch?v=q58dLWjRTBE

[2]: https://www.youtube.com/watch?v=v6jhjjVy5Ls

[3]: https://www.youtube.com/watch?v=13eiDhuvM6Y


I’ve tried searching and found the below; is that the sort of thing you mean?

https://www.seattletimes.com/business/bellevue-game-maker-va...


Yes, and not much has changed since then. pyth0's sibling comment links the relevant Coffeezilla video.

You could say “support a virtual market with insufficient controls” and be more truthful and engender a more productive discussion. They’ve come down pretty heavily on the gambling side, no?

> They’ve come down pretty heavily on the gambling side, no?

Not really. Back when this was a big story (around 2016-2017), they sent out cease-and-desists to a number of the big CS:GO gambling websites, but many did not comply and there was no follow-up. To this day many of those original sites are still around and have since grown. Essentially, Valve (and the skin market as a whole) benefit so greatly from this grey market that there is no incentive for them to stop it. This is covered in part 2 of Coffeezilla's latest series investigating CS:GO gambling [1].

[1] https://youtu.be/13eiDhuvM6Y?t=493


>I suspect this was because Valve was preparing to launch the Steam Deck, and gaben wanted to ensure that Linux users had better experience with the device (just a guess).

Wait, how does punishing Linux users ensure that Linux users have a better experience?

Interesting though.


They probably meant that fixing it quickly was for the sake of Steam Deck users; it might not have received attention otherwise.

> dropping to yellow and then red

How do you know what your trust factor is? Or were you just speculating because the quality of games was lower? As far as I understand, TF is hidden specifically so it can't be gamed.


In CS, the difference between high and low Trust is very noticeable; it's a big change when your games with silent / mostly-nice teammates and enemies start to become slur-fests. The value itself is not visible to the end-user, but its effects are certainly felt.

Now that you've written it out, it explains why my solo games are better than when I queue with a friend who never plays outside our games together. And I end up promising "I just had nice teammates the last game!" :)

Someone mentioned how they were testing it in the linked Github issue: https://github.com/ValveSoftware/csgo-osx-linux/issues/2630#...

EDIT: formatting x 2


It could be related to 14 Eyes with modifications (Finland and Ireland, plus close Asian allies).

https://res.cloudinary.com/dbulfrlrz/images/w_1024,h_661,c_s... (from https://protonvpn.com/blog/5-eyes-global-surveillance).

Israel, Poland, Portugal and Switzerland are also missing from it.


As someone who is on the verge of being killed, with no side to take in this entire reality, it's cool that in addition to trade and economics we now get compute as a geopolitical indicator. Maybe it can really all be automated.

If we wanted to model the universe as a set of equations or a cellular automaton, how complex would that program be?

Could a competent software engineer, even without knowing the fundamental origins of things like particle masses or the fine-structure constant, capture all known fundamental interactions in code?

I guess I'm trying to figure out the complexity of the task of universe creation, assuming the necessary computational power exists. For example, could it be a high-school computer-science project for the folks in the parent universe (simulation hypothesis)? I know that's a tough question :)
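
To give a sense of scale for the high-school-project part: the program for a toy digital universe really can be tiny. Here's a minimal sketch in C of an elementary cellular automaton (Rule 110, known to be Turing-complete); the width, step count, and seeding are arbitrary choices of mine, and the hard part of the question is of course that the real rules are quantum:

    #include <stdio.h>
    #include <string.h>

    #define N     64   /* toy-universe width (arbitrary) */
    #define STEPS 32
    #define RULE  110  /* Turing-complete elementary CA rule */

    int main(void) {
        unsigned char cells[N] = {0}, next[N];
        cells[N / 2] = 1;  /* a single live cell as the initial condition */
        for (int t = 0; t < STEPS; t++) {
            for (int i = 0; i < N; i++)
                putchar(cells[i] ? '#' : '.');
            putchar('\n');
            for (int i = 0; i < N; i++) {
                /* 3-cell neighborhood, wrapped at the edges, read as a 3-bit index */
                int idx = (cells[(i + N - 1) % N] << 2)
                        | (cells[i] << 1)
                        |  cells[(i + 1) % N];
                next[i] = (RULE >> idx) & 1;  /* look up the new state in the rule's bits */
            }
            memcpy(cells, next, N);
        }
        return 0;
    }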


I'm surprised that more sibling comments aren't covering the lack of a unified theory here. Currently, our best understanding of gravity (general relativity) and our best understanding of everything else (electromagnetism, quantum mechanics, strong/weak force via the standard model) aren't consistent. They have assumptions and conclusions that contradict each other. It is very difficult to investigate these contradictions closely because the interesting parts of GR show up only in very massive objects (stars, black holes) and the interesting parts of everything else show up in the tiniest things (subatomic particles, photons).

So we don't have a set of equations that we could expect to model the whole universe in any meaningful way.


At the level of writing a program to simulate the universe as we see it, ideas like classical gravity (see Penrose) would probably work.


They definitely wouldn’t work because we have strong evidence that relativity is a more accurate theory and significant evidence that either gravity does not obey an inverse square law or our estimation of the distribution or nature of dark matter is incorrect.

https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gr...


Our present best guess is that cellular automata would be an explosively difficult way to simulate the universe, because BQP (the class of problems that reduce to simulating a quantum system for polynomial time) is probably not contained in P (the class of problems Turing machines can solve in polynomial time).


The formulas are really not very complex. The Standard Model is a single Lagrangian with a couple of dozen constants.

https://visit.cern/node/612

You can expand that Lagrangian out to look more complex, but that's just a matter of notation rather than a real illustration of its complexity. There's no need to treat all of the quarks as different terms when you can compress them into a single matrix.
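
For reference, the compact schematic form (the one on the CERN mug linked above) fits in a few lines; in LaTeX, with psi the fermion fields, phi the Higgs field, F the gauge field strengths, D the covariant derivatives and y the Yukawa couplings:

    \mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
                + i \bar{\psi} \gamma^\mu D_\mu \psi + \mathrm{h.c.}
                + \bar{\psi}_i y_{ij} \psi_j \phi + \mathrm{h.c.}
                + |D_\mu \phi|^2 - V(\phi)

All the apparent complexity of the expanded versions comes from writing out the indices and summations hidden in those few symbols.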

General relativity adds one more equation, in a matrix notation.

And that's almost everything. That's the whole model of the universe. It just so happens that there are a few domains where the two parts cause conflicts, but they occur only under insanely extreme circumstances (points within black holes, the universe at less than 10^-43 seconds, etc.)

These all rely on real numbers, so there's no computational complexity to talk about. Anything you represent in a computer is an approximation.

It's conceivable that there is some version out there that doesn't rely on real numbers, and could be computed with integers in a Turing machine. It need not have high computational complexity; there's no need for it to be anything other than linear. But it would be linear in an insane number of terms, and computationally intractable.


>The Standard Model is a single Lagrangian with a couple of dozen constants.

I hear it's a bit more complex than that!

https://www.sciencealert.com/this-is-what-the-standard-model...


It's a single Lagrangian with a couple of dozen constants in their pics there as well. It's just expanded out to different degrees.


Through smart definitions I can contract any longer term as much as I want.


Yes, and it's exactly those "smart definitions" that are the Standard Model. The whole goal is to produce even smarter definitions, including showing that as much as possible of it couldn't be any other way, preferably.


Yes and that's precisely why you're writing high-level code instead of ASM. It's your job.

Nah, it really is simpler than that; that picture has exploded the summations to make it look complicated. Although it is strangely hard to find the compressed version written down anywhere...

The thing about Lagrangians is that they compose systems by adding terms together: L_AB = L_A + L_B if A and B don't interact. Each field acts like an independent system, plus some interaction terms if the fields interact. So most of the time, e.g. on Wikipedia[0], people write down the terms in little groups. But still, note on the Wikipedia page that there are not that many terms in the Lagrangian section, due to the internal summations.

[0]: https://en.wikipedia.org/wiki/Mathematical_formulation_of_th...


I can't help but wonder if, under extreme conditions, the universe has some sort of naturally occurring floating-point error conditions, where precision is naturally eroded and weird things can occur.


That would occur if a naked singularity could exist. If black holes have a singularity then you could remove the event horizon. In general relativity, the mathematical condition for the existence of a black hole with an event horizon is simple. It is given by the following inequality: M^2 > (J/M)^2 + Q^2, where M is the mass of the black hole, J is its angular momentum and Q is its charge.

Getting rid of the event horizon is simply a question of increasing the angular momentum and/or charge of this object until the inequality is reversed. When that happens the event horizon disappears and the exotic object beneath emerges.
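
As a toy illustration, the check really is just that inequality; a sketch in C (geometrized units, G = c = 1; the function name is mine):

    #include <stdbool.h>

    /* Kerr-Newman black hole: an event horizon exists while
       M^2 >= (J/M)^2 + Q^2 (mass M, angular momentum J, charge Q,
       geometrized units G = c = 1). */
    bool has_event_horizon(double M, double J, double Q) {
        return M * M >= (J / M) * (J / M) + Q * Q;
    }

Spin it up or charge it up until has_event_horizon() flips to false and, on paper, the naked singularity emerges.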


I doubt it. Even the simplest physical system requires a truly insane number of basic operations. Practically everything is integrals-over-infinity. If these were implemented in a floating-point system, you'd need umpteen gazillion bits to avoid flagrant errors from happening all the time.

It's not impossible that the universe is somehow implemented in an "umpteen gazillion bits, but not more" system, but it strikes me as a lot more likely that it really is just a real-number calculation.


Right, I don't mean literally floating-point errors, but something similar.


That could very well be what the quantum uncertainty principle is: nondeterministic floating-point errors. It also could just be me drawing comparisons between different problem domains.


The QUP is indeed what allows us to quantize continuous equations with h, and once they have been turned into integers like this, we can meaningfully calculate our lack of information (aka 'entropy').


>These all rely on real numbers, so there's no computational complexity to talk about.

There's a pretty decent argument that real numbers are not enough:

https://www.nature.com/articles/s41586-021-04160-4/

https://physics.aps.org/articles/v15/7


You (sorta) can! https://en.wikipedia.org/wiki/Lattice_QCD

The trick is (as the sibling comments explain) that it involves an exponential number of calculations, so it's extremely slow unless you are interested only in very small systems.

Going more technical: the problem with systems involving the strong force is that they are too difficult to calculate directly, so the only method to get results is to add a fake lattice and solve the system there. It works better than expected, and it includes all the forces we know (well, except gravity), but it also includes the fake grid. So it's only an approximation.

> Could a competent software engineer, even without knowing the fundamental origins of things like particle masses or the fine-structure constant, capture all known fundamental interactions in code?

Nobody knows where those numbers come from, so they'd just be 20 or 30 numbers in the header of the file. There is some research trying to reduce that count, but nobody knows if it's possible.


The scales get you:

You can’t simulate a molecule at accurate quark/gluon resolution.

The equations aren’t all that complex, but in practice you have to approximate to model the different levels, eg https://www.youtube.com/playlist?list=PLMoTR49uj6ld32zLVWmcG...


Stephen Wolfram has been taking a stab at it. Researching fundamental physics via computational exploration is how I'd put it. https://www.wolframphysics.org/


He is basically a crackpot. Any attempt at fundamental physics that doesn't take quantum mechanics into account is.... uhm.... how to put this.... 'questionable'.


I'm not even able to hold a candle to Wolfram intellectually; the guy is a universe away from me in that regard. But: given a cursory look at his wiki page and Cosma Shalizi's review of his 2002 book on cellular automata [1], I feel fairly comfortable saying that he seems to have fallen into the logician's trap of assuming that everything is computable [2]:

>There’s a whole way of thinking about the world using the idea of computation. And it’s very powerful, and fundamental. Maybe even more fundamental than physics can ever be.

>Yes, there is undecidability in mathematics, as we’ve known since Gödel’s theorem. But the mathematics that mathematicians usually work on is basically set up not to run into it. But just being “plucked from the computational universe”, my cellular automata don’t get to avoid it.

I definitely wouldn't call him a crackpot, but he does seem to be spinning in a philosophical rut.

I like his way of thinking (and I would, because I write code for a living), but I can't shake the feeling that his physics hypotheses are flawed and are destined to bear no fruit.

But I guess we'll see, won't we?

[1] http://bactra.org/reviews/wolfram/

[2] https://writings.stephenwolfram.com/2020/04/how-we-got-here-...


Wolfram really loves to talk about computational irreducibility.[1]

But I think his articles about Machine Learning are excellent. [2]

[1] https://www.google.com/search?client=firefox-b-1-d&q=%22comp...

[2] https://writings.stephenwolfram.com/category/artificial-inte...


That really seems to be mischaracterizing his work. The idea is that the quantum effects we see will eventually emerge.

Most people in the field don't think his research will be fruitful, but that doesn't make him a crackpot.


most people in the field believe his research isn't even capable of being wrong


Someone seems to say something demeaning like that about him whenever he comes up, and I don't really know why. Which is fine, maybe it's a subjective thing. For what it's worth, the few times I read something of his, I loved it.


It's a complex issue. He is obviously extremely intelligent and at least a decent businessman. If you've never used Wolfram Mathematica before, I implore you to pick up a Raspberry Pi and play with the educational version. It's nothing short of magical in many ways. I still prefer Python in a lot of ways (not least because Python is free/open), but Mathematica notebooks are nuts. You can do anything from calculus to charts, geographic visualizations, neural networks, NLP, audio processing, optimization, text processing, time series analysis, matrices, and a bazillion other things with a single command or by chaining them together. It has its warts, but it is very polished.

He also did some important early work on cellular automata, iirc.

Then he wrote "A New Kind of Science", which reads like an ego trip and was not received well by the community (it is a massive tome that could have been summarized in a much smaller book). He also tried to claim discoveries from one of his employees through some NDA shenanigans (or something along those lines, iirc). The latter doesn't make him a crank, just a massive egotist, which is a trait nearly all cranks have. Sabine Hossenfelder did a video on him and how he only publishes in his own made-up journals and generally doesn't follow the process used by all other scientists. I think a lot of people believe that where there is smoke, there is fire. To his credit, she also mentioned that some physicists gave him critical feedback and he did then go and spend a bunch of time addressing the flaws they found.


Well, one can love playing chess, and that is all fine and good, but if someone says that chess is the fundamental theory of the universe, how much sense does that make? There might even be truth in that statement; who could possibly know? All we can be quite certain about is that to actually demonstrate the hypothetical truth of the statement 'chess is the fundamental theory of the universe', some number of Nobel Prize-level physics discoveries, presumably more than 5, would need to take place.


You are making an unscientific criticism.

Wolfram's claim is that cellular automata can provide as good a mathematical model of the universe as current theories, or a better one, by commonly appreciated metrics such as "parsimony of theory" (Occam's Razor). He's not making claims about metaphysical truth.


Well, the question 'is this Wolfram guy doing science' is as such not a scientific question. And the answer is a resounding 'no'.


I think Wolfram might be one of the 1000 smartest people alive; he has accomplished many great things and is very good at math. But it really seems he wants to be thought of in the same way as Newton and Einstein, so he tries to find some new ultra-fundamental theory to achieve this. His book A New Kind of Science failed, so now he is trying with the Wolfram Physics Project.


I sympathize with your opinion of him as a crackpot, but he is also a genius, and the idea is that the graphs in his theory are more fundamental than quantum mechanics, which would emerge from them.


The universe is already modeled that way. Differential equations are a kind of continuous time and space version of cellular automata, where the next state at a point is determined by the infinitesimally neighboring states.
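
Discretizing makes the analogy concrete: a PDE becomes a local, CA-style update rule. A sketch in C for the 1D heat equation du/dt = k * d2u/dx2 (explicit finite differences; grid size and step sizes are left to the caller):

    /* One explicit finite-difference step of du/dt = k * d2u/dx2.
       Each cell's next state depends only on itself and its immediate
       neighbors, exactly like a (real-valued) cellular automaton. */
    void heat_step(const double *u, double *next, int n,
                   double k, double dt, double dx) {
        for (int i = 1; i < n - 1; i++)
            next[i] = u[i] + k * dt / (dx * dx) * (u[i-1] - 2 * u[i] + u[i+1]);
        next[0] = u[0];      /* hold the boundary values fixed */
        next[n-1] = u[n-1];
    }

Shrink dt and dx toward zero and the 'automaton' turns back into the differential equation.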


My first thought was 'ah, yes.' My second thought was 'but what about nonlocality?'


I do wonder, if you wanted to implement a sort of 3D game engine that simulates the entire universe, whether the weird stuff quantum physics and general relativity do (like the Planck limit, the lightspeed limit, discretization, the 2D holographic bound on the amount of stuff in 3D volumes, the not having an actual value until measured, the not being able to know momentum and speed at the same time, the edge of the observable universe, ...) would turn out to be essential optimizations of this engine that make it possible.

Many of the quantum and general relativity behaviors seem to be limits of some kind (compared to a Newtonian universe where you can go arbitrarily small/big/fast/far). Except quantum computing: that one unlocks even more computation instead, so it's the opposite of a limit, making things harder rather than easier to simulate...


I don’t think the “not having an actual value until measured”, properly understood, would seem like an optimization.

I don’t know why so many people feel like it would be an optimization?

Storing a position is a lot cheaper than storing an amplitude for each possible position.

One-hot vectors are much more compressible than general vectors, as you can just store the index.

Also, it is momentum and position that are conjugate, not momentum and speed.


Ugh, I just listed things off the top of my head, not rigorously correct physics!

I'd be interested to know where all those other people who feel it would be an optimization are, because I don't often see opinions like this at all: only either rigorous physicists posting equations and papers, or people who don't know anything about it at all, not even enough to philosophize about it.


http://oyhus.no/QuantumMechanicsForProgrammers.html gives a flavor of one possible shape of things. It's pretty intractable to actually compute anything this way.


One of the real promises of Quantum Computers is being able to simulate quantum systems better.


How complex? I'm no physicist nor an expert at this, but AFAIK we aren't really capable of simulating even a single electron at the quantum scale right now? Correct me if I'm wrong.


We can simulate much more than that, even at the quantum scale. What we cannot do is calculate things analytically, so we only have approximations, but for simulation that’s more than enough.


Stephen Wolfram is trying to model physics as a hypergraph

https://www.wolframphysics.org/universes/


I've always thought that gravity exists because without it, matter doesn't get close enough for interesting things to happen.


Less ambitiously, how small and clear could you make a program for QED calculations? Where you're going for code that can be clear to someone educated with only undergrad physics, with effort, to help explain what the theory even is -- not for usefulness to career physicists.

Maybe still too ambitious, because I haven't heard of such a program.


Wolfram actually got his start writing these.


Well, Newton thought he could do it with just 3 lines, and we've all been playing code golf ever since.


To be fair, his universe was much simpler than ours. He didn't need a nuclear reactor or particle accelerator to transmute lead into gold in his theory.


Rephrasing what some of the other answers have said, with a decent knowledge of math you could write the program, but you wouldn’t be able to run it in a reasonable time for anything but the most trivial scenarios.


Horribly complex and/or impossible.

(1) Quantum mechanics means that there is not just one state/evolution of the universe. Every possible state/evolution has to be taken into account. Your model is not three-dimensional; it is (NF * NP)-dimensional, where NF is the number of fields and NP is the number of points in space-time. So, say you want 10 space-time points in each length direction. The universe is four-dimensional, so you actually have 10000 space-time points, and now your state space is (10000 * NF)-dimensional. Good luck with that (see the sketch after this list). In fact people try to do such things, i.e., lattice quantum field theory, but it is tough.

(2) I am not really sure what the state of the art is but there are problems even with something simple like putting a spin 1/2 particle on a lattice. https://en.wikipedia.org/wiki/Fermion_doubling

(3) Renormalization. If you fancy getting more accuracy by making your lattice spacing smaller, various constants tend to infinity. The physically interesting stuff is the finite part of that. Calculations get progressively less accurate.
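
A back-of-envelope sketch in C of how fast point (1) blows up, under my own deliberately crude assumption of a single two-state degree of freedom per lattice site and 16 bytes per complex amplitude:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* A full quantum state over n two-state sites needs 2^n complex
           amplitudes; at ~270 sites that is already more bytes than there
           are atoms in the observable universe (~10^80). */
        int sites[] = {10, 50, 100, 270};
        for (int i = 0; i < 4; i++) {
            double bytes = 16.0 * pow(2.0, sites[i]);
            printf("%3d sites -> ~10^%.0f bytes\n", sites[i], log10(bytes));
        }
        return 0;
    }

And a realistic lattice has far more than two states per site, per field.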


To go down this rabbit hole, the deeper question is about the vector in Hilbert space that represents the state of the universe. Is it infinite dimensional?


Yes, but that is not saying very much. Just a single harmonic oscillator already has a state space that is an infinite-dimensional Hilbert space: L^2. Now take a tensor product of NF * NP of these already infinite-dimensional Hilbert spaces, as defined above, to get quite a bit more infinite.


> Could a competent software engineer, even without knowing the fundamental origins of things like particle masses or the fine-structure constant, capture all known fundamental interactions in code?

I don't think so.

In classical physics, "all" you have to do is tot up the forces on every particle, and you get a differential equation that is pretty easy to work with numerically. Scale is a challenge all of its own, and of course you'd ideally need to learn about all the numerical issues you can run into. But the math behind Runge-Kutta methods isn't that advanced (really, you just need some calculus to even explain what you're doing in the first place), so that's pretty approachable for a smart high schooler.
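
The tot-up-the-forces loop is short enough to sketch; here in C with plain symplectic Euler instead of Runge-Kutta to keep it brief, in G = 1 units (names and structure are mine):

    #include <math.h>

    typedef struct { double x, y, vx, vy, m; } Body;

    /* One integration step of Newtonian gravity: sum the pairwise
       accelerations, then advance velocities and positions. */
    void step(Body *b, int n, double dt) {
        for (int i = 0; i < n; i++) {
            double ax = 0, ay = 0;
            for (int j = 0; j < n; j++) {
                if (j == i) continue;
                double dx = b[j].x - b[i].x, dy = b[j].y - b[i].y;
                double r = sqrt(dx * dx + dy * dy);
                ax += b[j].m * dx / (r * r * r);  /* m_j / r^2, directed */
                ay += b[j].m * dy / (r * r * r);
            }
            b[i].vx += ax * dt;
            b[i].vy += ay * dt;
        }
        for (int i = 0; i < n; i++) {  /* positions last: symplectic Euler */
            b[i].x += b[i].vx * dt;
            b[i].y += b[i].vy * dt;
        }
    }

Runge-Kutta and friends are refinements of exactly this loop; the point below is that the quantum forces aren't amenable to this per-particle treatment at all.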

But when you get to quantum mechanics, it's different. The forces aren't described in a way that's amenable to totting up the forces on every particle, which is why you get stuff like https://xkcd.com/1489/ (where the explainer is unable to really explain anything about the strong or weak force). As an arguably competent software engineer, my own attempts to do something like this have always ended with me bouncing off the math entirely. And my understanding of the math, limited as it is, is that some things like gravity just don't work at all with the methods available to us, despite 50 years of work.

By way of comparison, my understanding is that our best computational models of fundamental forces struggle to model something as complicated as an atom.


For gaming, RetroPie is a cool solution (it doesn't require a Raspberry Pi).

I have a setup using HP Pro Mini 400 G9 that boots directly into EmulationStation. It's perfect for playing with my kid, covering everything from NES to more recent consoles, and also Steam and Minecraft (via https://github.com/minecraft-linux/appimage-builder/releases) if needed. The offline aspect of non-steam games is a big plus for easier parental management too.


I've got to say I prefer the raw Retroarch interface over the EmulationStation launcher. EmulationStation is pretty but the seams between it and Retroarch were just too big for me.


I'm starting to wonder if the Linux networking stack has become a bit too layered.

I recently spent some time debugging a WireGuard tunnel on a VPS. Simple 'ip r sh' checks and tcpdump'ing weren't revealing the full picture, and it turned out an obscure 'ip rule' added by the VPS provider was redirecting the traffic to the loopback, for reasons.

It seems like policy-based routing (via 'ip rule') adds an extra, opaque layer before the regular routing table. The packet router (below routing) complicates things further.


If you don't know it yet, check out pwru [0], an eBPF-based tool that lets you trace packets through the kernel using tcpdump-style syntax.

[0]: https://github.com/cilium/pwru


Otoh, the flexibility of the Linux networking stack is what makes it so amazing; if you can imagine it, then there is almost certainly some way of doing it with Linux. It might not be the fastest or cleanest way, but Linux certainly can do a lot.

Also, policy-based routing has been in Linux since 2.2 or something like that, so it's not like it's some recent increase in complexity.


I miss a command that shows the entirety of the network stack state: all routes, rules, firewall config from all namespaces, shaper policies, ipsets, bridge config (including VLAN filters), etc. I'm not sure I concur with the 'too layered' statement; however, it is complicated, and the tools to manage and explore it are not well integrated.


This is definitely a problem. I'd love something like the Cisco IOS "show config" for Linux networking. (I'm probably biased towards Cisco because that was the first real router I used back in the '90s.)


An ages-old (~2014, IIRC) defer implementation for GCC and Clang:

https://github.com/google/honggfuzz/blob/c549b4c31815e170d3b...
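
The underlying trick in both compilers is the cleanup variable attribute; a minimal sketch of the idea (macro and function names are mine, not the linked header's):

    #include <stdio.h>
    #include <stdlib.h>

    /* gcc/clang call the cleanup function with a pointer to the
       variable when it goes out of scope: a poor man's defer. */
    static void free_charp(char **p) { free(*p); }
    #define defer_free __attribute__((cleanup(free_charp)))

    int main(void) {
        defer_free char *buf = malloc(64);  /* freed on any scope exit */
        snprintf(buf, 64, "no explicit free needed");
        puts(buf);
        return 0;
    }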


They can, pretty much like in the EU. There might be some technical differences, but overall both are "right to work and stay if employed".


Leaving aside personal preferences regarding the previous Polish and current Hungarian governments, the electoral processes are generally viewed as fair in the sense that there was no major direct fraud in the vote counting. The fact remains, however, that state resources were used by the ruling parties to promote themselves.

The removal of the former Polish government was largely driven by public disapproval of state-fund mismanagement. In Hungary, a key element of the current government's platform appears to be the promotion of national identity, including ties with the diaspora communities formed after WWI (the Treaty of Trianon), potentially with implications for future "geopolitical alignments" (the likelihood of which is debatable).

These results, while influenced by the d'Hondt system, reflect the sentiment of the voting population, which is a democratic process, in principle. The ruling methods are not 100% democratic, though (rule of majority with respect for minority rights).

However, the opinions of my "more Western friends" on those topics "diverge from on-the-ground realities".


In Poland the ruling party was using Pegasus spyware to spy on opposition politicians (including the head of the opposition's campaign).

"Public" TV had literal North Korea level of propaganda too.

Even the (useless and bad) SMS service used to warn about bad weather... sent out texts reminding old people to vote.


US conferences don't bother with that aspect. Why should we put the onus on everyone else?


It’s about the field (computers, software), not about the country. As a native speaker of neither English nor German, I appreciate English text/audio as much as possible in any situation.


I am not sure which "we" you are referring to, but CCC puts quite some effort into dubbing and subtitles (though those can take more time to publish).


Racism has no place in CCC. You are not part of "we".

