
I find it particularly useful when I would need to look up lots of library functions I don't remember. For example, in Python I recently did something like this (just looked it up):

    for ever my file in directory 'd' ending '.capture':
        Read file
        Split every line into A=B:C
        Make a dictionary send A to [B,C]
    Return a list of pairs [filename, dict from filename]
I don't write Python enough to remember how to read all the files in a directory, or how to split strings. I didn't even bother proofreading the English (as you can see).


Same when you need to write some short bash script a few times per year. It's really nice not to have to remember how it all works every time!


It's absolutely fine for you to be fine with it. What is nonsense is how copyright laws have been so strict, yet suddenly AI companies can just ignore everyone's wishes.


Hey - no argument here.

I don't think the concept of copyright itself is fundamentally immoral... but it's pretty clearly a moral hazard, and the current implementation is both terrible at supporting independent artists and a beating stick for already wealthy corporations and publishers to use to keep shitting on independent creators.

So sure - I agree that watching the complete disregard for copyright is galling in its hypocrisy, but the problem is modern copyright, IMO.

...and maybe also capitalism in general and wealth inequality at large - but that's a broader, more complicated discussion.


The problem with offering fallbacks is testing -- there isn't any reasonable hardware you could use, because, as you say, it's all very old and slow.


rar is super popular in China because, for a long time (and still with many modern implementations), it has been much better than zip at preserving Chinese filenames on Windows.


Does zip actually alter some filenames, then?


I don't know the current average, but it used to often be much higher than that. Maybe the average has improved?


I'm not sorry for being rough: you sound like someone who has no idea what a modern GPU driver is like. I haven't written any in about 15 years, and I know it's only gotten worse since then.

Go look in the Linux kernel source code -- GPU drivers are, by lines of code, the single biggest component. Also, lots of drivers support multiple cards. Do you think it would be sensible to have a separate, completely independent driver for every single GPU card?

GPU drivers aren't about "connecting some wires" between two APIs, because those two APIs turn out to be quite different.

Of course, feel free to prove me wrong. Show me a GPU driver you've written, go link some wires together.


While I won't endorse what the GP said, I wouldn't say that it's only gotten worse. I work for a modern GPU company (you can probably figure out which one from my comment history) on one of the modern APIs, and they much more closely represent what the GPU does. It's not like OpenGL used to be, as the GPUs hold much less state for you than they used to. However, with the new features being added now, it is starting to drift apart again and once again become more complex.


That's interesting to know! I keep meaning to try digging into the AMD stuff (mainly as it seems like the more open-source one), but I need to find the time for a deep dive!


Yeah, we also have a gaming Discord and a developer Discord where I hang around. So feel free to join and ask questions there.


> it's only gotten worse since then.

It's worse all the way up. Modern GPUs support a huge amount of asynchronous operations. Applications put commands on queues, and completions come back later. The driver and Vulkan mostly pass those completions upward, until they reach the renderer, which has to figure out what it's allowed to do next. How well that's done has a huge impact on performance.

(See my previous grumbling about the Rust renderer performance situation. All the great things Vulkan can do for performance are thrown away, because the easy way to do this doesn't scale.)


Why would Rust rendering be worse than any other rendering? Rust claims to be well suited for handling parallelism.


It is fantastic for CPU parallelism. The problem is that the CPU/GPU boundary is difficult to deal with, and exposing an API that is fast, safe, and flexible all at once is almost impossible.

I don't believe it's possible to make an efficient API at a similar level of abstraction to Vulkan or D3D12 that is safe (as in, not marked unsafe in rust). To do so requires recreating all the complexity of D3D11 and OpenGL style APIs to handle resource access synchronization.

The value proposition of D3D12 and Vulkan is that the guardrails are gone and it's up to the user to do the synchronization work themselves. The advantage is that you can make the synchronization decisions at a higher level of abstraction where more assumptions can be made and enforced by a higher-level API. Generally this is more efficient because you can use much simpler algorithms to decide when to emit your barriers, rather than having the driver reverse engineer that high-level knowledge from the low-level command stream.

Rust is just not capable of representing the complex interwoven ownership and synchronization rules for using these APIs without mountains of runtime checks that suck away all the benefit of using them. Lots of Vulkan maps quite well to Rust's ownership rules; the memory allocation API surface maps very well. But anything that's happening on the GPU timeline is pretty much impossible to do safely. Rust's type system is not sufficiently capable of modeling this stuff without tons of runtime checks, or without making the API so awful to use that nobody will bother.
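A minimal sketch of the core problem, using hypothetical stand-in types rather than any real binding: the borrow checker tracks lifetimes on the CPU timeline, but command recording decouples the CPU-side borrow from the GPU's later use of the resource.

    // Hypothetical minimal types -- not a real Vulkan binding.
    struct GpuBuffer;

    impl Drop for GpuBuffer {
        fn drop(&mut self) {
            // Frees the memory immediately, from the CPU's point of view.
            println!("buffer freed on the CPU timeline");
        }
    }

    // Records a command the GPU will execute *later*. The borrow of
    // `_buf` ends when this function returns, but the GPU's use of the
    // buffer has not even started yet.
    fn record_draw(_buf: &GpuBuffer) {}

    fn main() {
        let buf = GpuBuffer;
        record_draw(&buf);
        drop(buf); // Borrow checker is satisfied: no live borrows remain.
                   // On real hardware this would be a use-after-free once
                   // the GPU executed the recorded command.
    }

Every safe API has to paper over exactly this gap, either with runtime tracking or by restricting what you can express.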

I've seen GP around a lot, and afaik they're using WGPU, which is, among other things, Firefox's WebGPU implementation. The abstraction that WebGPU provides is entirely the wrong level to most efficiently use Vulkan and D3D12 style APIs. WebGPU must be safe because it's meant to be exposed to JS in a browser, so it spends a boatload of CPU time doing all the runtime checks and working out the synchronization requirements.

Rust can be more challenging here because, if you want a safe API, you have to be very careful where you set the boundary between the unsafe internals and the safe API. And Rust's safety rails will be of limited use for the really difficult parts. I'm writing my own abstraction over Vulkan/D3D12/Metal, and I've intentionally decided not to make my API safe and to leave it to a higher layer to construct a safe API.


I'm currently writing a Vulkan renderer in Rust, and I decided against wgpu for this reason - its synchronization story is too blunt. But I don't necessarily agree that this style of programming is very much at odds with Rust's safety model, which is fundamentally an API design tool.

The key insight with Rust is to not try to use borrowing semantics unless the model actually matches, which it doesn't for GPU resources and command submission.

I'm modeling things using render graphs. Nodes in the graph declare what resources they use and how, such that pipeline barriers can be inserted between nodes. Resources may be owned by the render graph itself ("transient"), or externally by an asset system.

Barriers for transient resources can be statically computed when the render graph is built (no per-frame overhead, and often barriers can be elided completely). Barriers for shared resources (assets) must be computed based on some runtime state at submission time that indicates the GPU-side state of each resource (queue ownership etc.), and I don't see how any renderer that supports mutable assets or asset streaming can avoid that.
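As a hedged sketch of the declare-then-plan idea (illustrative names, much simplified from a real render graph):

    // Illustrative sketch of render-graph barrier planning -- made-up
    // names, not my actual API.
    #[derive(Clone, Copy, PartialEq)]
    enum Access {
        ColorWrite,
        ShaderRead,
    }

    struct Node {
        name: &'static str,
        uses: Vec<(usize, Access)>, // (resource id, access kind)
    }

    // Walk nodes in submission order and emit a barrier whenever a
    // resource's access kind changes between consecutive users. For
    // transient resources this runs once, at graph build time.
    fn plan_barriers(nodes: &[Node], resource_count: usize) {
        let mut last: Vec<Option<Access>> = vec![None; resource_count];
        for node in nodes {
            for &(res, access) in &node.uses {
                if let Some(prev) = last[res] {
                    if prev != access {
                        println!("barrier on resource {res} before '{}'", node.name);
                    }
                }
                last[res] = Some(access);
            }
        }
    }

    fn main() {
        let nodes = [
            Node { name: "gbuffer", uses: vec![(0, Access::ColorWrite)] },
            Node { name: "lighting", uses: vec![(0, Access::ShaderRead)] },
        ];
        plan_barriers(&nodes, 1); // barrier on resource 0 before 'lighting'
    }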

I don't think there's anything special about Rust here. Any high-level rendering API must decide on some convenient semantics, and map those to Vulkan API semantics. Nothing in Rust forces you to choose Rust's own borrowing model as those semantics, and consequently does not force you to do any more runtime validation than you would anywhere else.


> I'm currently writing a Vulkan renderer in Rust,

More info, please. nagle@animats.com

(I need a Vulkan renderer in Rust. There are four that are not built into a game engine. Three are abandoned and one is unfinished.)


> Lots of Vulkan maps quite well to Rust's ownership rules; the memory allocation API surface maps very well. But anything that's happening on the GPU timeline is pretty much impossible to do safely.

I agree with this, having been dabbling with Vulkan and Rust for a few years now. Destructors and ownership can make a pretty ergonomic interface to the CPU side of GPU programming. It's "safe" as long as you don't screw up your GPU synchronization, which is not perfect, but it's an improvement over "raw" graphics API calls (with little to no overhead).

As for the GPU timeline, I've been experimenting with timeline semaphores. E.g. all the images (and image views) in descriptor set D must be live as long as semaphore S has a value less than X. This, coupled with some kind of deletion queue, could accurately track lifetimes of resources on the GPU timeline.
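A hedged sketch of that deletion queue (hypothetical types; the one real Vulkan name is vkGetSemaphoreCounterValue, which reads a timeline semaphore's current value):

    use std::collections::VecDeque;

    // Hypothetical stand-in for images, image views, buffers, etc.
    struct Resource {
        name: &'static str,
    }

    // Resources queued for destruction, each tagged with the timeline
    // semaphore value after which the GPU can no longer be using them.
    struct DeletionQueue {
        pending: VecDeque<(u64, Resource)>,
    }

    impl DeletionQueue {
        fn defer(&mut self, retire_at: u64, res: Resource) {
            self.pending.push_back((retire_at, res));
        }

        // Call once per frame with the semaphore's current counter
        // value (e.g. from vkGetSemaphoreCounterValue); destroys
        // everything the GPU is guaranteed to be finished with.
        fn collect(&mut self, completed: u64) {
            while let Some(&(retire_at, _)) = self.pending.front() {
                if retire_at > completed {
                    break;
                }
                let (_, res) = self.pending.pop_front().unwrap();
                println!("destroying {}", res.name); // real code: vkDestroy* calls
            }
        }
    }

    fn main() {
        let mut dq = DeletionQueue { pending: VecDeque::new() };
        dq.defer(3, Resource { name: "old_texture" });
        dq.collect(2); // semaphore only reached 2: texture stays alive
        dq.collect(5); // semaphore passed 3: destroyed now
    }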

On the other hand, basic applications and "small world" game engines have a simpler way out. Most resources have a pre-defined lifetime: they live as long as the application, the "loaded level", or the current frame. You might even use Rust lifetimes to track this (but I don't). This model is not applicable when streaming textures and geometry in and out of the GPU.

What I would really like to experiment with is using async Rust for GPU programming. Instead of using `epoll/kqueue/WaitForMultipleObjects` in the async runtime for switching between "green threads" the runtime could do `vkWaitForSemaphores(VK_SEMAPHORE_WAIT_ANY_BIT)` (sadly this function does not return which semaphore(s) were signaled). Each green thread would need its own semaphore, command pools, etc.

Unfortunately this would be a 6-12 month research project and I don't have that much free time at hand. It would also be quite an alien model for most graphics programmers so I don't think it would catch on. But it would be a fun research experiment to try.


> As for the GPU timeline, I've been experimenting with timeline semaphores. E.g. all the images (and image views) in descriptor set D must be live as long as semaphore S has a value less than X. This, coupled with some kind of deletion queue, could accurately track lifetimes of resources on the GPU timeline.

> What I would really like to experiment with is using async Rust for GPU programming.

Most of the waiting required is of the form "X can't proceed until A, B, D, and Q are done", plus "Y can't proceed until B, C, and R are done". This is not a good match for the async model.

That many-to-many pattern keeps coming up in game work. Outside the GPU, it appears when assets such as meshes and textures come from an external server or from files, and are used in multiple displayed objects.


> waiting required is of the form "X can't proceed until A, B, D, and Q are done

In my experience, this is the common pattern on GPU workloads. On the CPU (where async happens), the wait pattern is usually much simpler.

Of course, there would need to be a way to not await on the CPU and instead pass "futures" as semaphore waits to the GPU.

But all of this is just wild ideas at this point.


> The abstraction that WebGPU provides is entirely the wrong level to most efficiently use Vulkan and D3D12 style APIs.

I agree with this, although the WGPU people disagree.

There could be a Vulkan API for "modern Vulkan" - bindless only, dynamic rendering only, asset loading on a transfer queue only, multithreaded transfers. That would simplify things and potentially improve performance. But it would break code that's already running and would not work on some mobile devices.

We'll probably get that in the 2027-2030 period, as WebGPU devices catch up.

WGPU suffers from having to support the feature set supported by all its back ends - DX12, Vulkan, Metal, WebGPU, and even OpenGL. It's amazing that it works, but a price was paid.


> But anything that's happening on the GPU timeline is pretty much impossible to do safely. Rust's type system is not sufficiently capable of modeling this stuff without tons of runtime checks, or without making the API so awful to use that nobody will bother.

I wonder if there's any thinking / research around what the PLT (programming language tech) that could manage this would look like. Depending on what kind of safety is sought, compile-time safety is not necessarily the only way to ensure it.

Of course it depends greatly on what kind of safety we are looking for (guaranteed to run without error vs memory-safe behaviour that might bail out in some cases, etc)


I indeed wouldn't expect a safe approach here to necessarily be efficient. But you aren't forced to make everything safe, even if that's nicer. I've seen some Vulkan Rust wrappers before which tried to do that, but as you say it comes at some cost.

So I'd guess you can always use raw Vulkan bindings, deal with the related unsafety, and leave the areas that aren't tied to synchronization to safer logic.

Dealing with hardware in general is unsafe, and GPUs are so complex that it's sort of expected.


While the post office didn't screen all letters, it was possible to get a warrant to read all the mail going to a certain address, and police often did that.


I agree, but think it's worse.

It's easy to get a superficial understanding of the problem, and then very easy to subtly misunderstand it.

I've reviewed published papers by respectable people where they've made one of several easy mistakes:

* Be careful about how you encode your problem; it's easy to make it "too large", at which point your problem can be solved in P relative to the input size. For example, don't represent a Sudoku as triples "x-pos,y-pos,value" where those 3 numbers are encoded in binary, because now if I give you a problem with only 2 filled-in values, you can't take the solution as your 'certificate': your input is about size 6 * log(n), but the solution will be size n * n * log(n).

* It's also easy if you write your problem as a little language to accidentally make it impossible to check in P time.

* When proving a reduction (say, turning SAT into a Sudoku to prove Sudoku is NP-complete), it's (usually) easy to show how solutions map to solutions (so you show how the SAT instance's solution turns into a particular solution to the Sudoku). It's usually much harder, and easy to make mistakes, showing the Sudoku can't possibly have any other solutions (see the note below).
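To put that last bullet in symbols: a reduction from SAT to Sudoku is a polynomial-time map f with

    \phi \in \mathrm{SAT} \iff f(\phi) \in \mathrm{SUDOKU}

Mapping satisfying assignments to Sudoku solutions gives you one direction of the iff; the harder, error-prone part is the other direction: showing that f(φ) has no solutions other than those arising from satisfying assignments.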


I've also seen people make the wrong direction of reduction.

Basically, they show that you can use SAT to solve Sudoku, and then claim that this makes Sudoku NP-complete. (All it shows is that Sudoku is in NP.)

People often make a similar mistake when they want to show that a certain problem isn't solvable in linear time: they show that sorting can solve their problem. But that's the wrong direction - you'd need to show that their problem can solve sorting.


> Basically, they show that you can use SAT to solve Sudoku, and then claim that this makes Sudoku NP-complete. (All it shows is that Sudoku is in NP.)

Wait, did you mess up the direction here too, or am I confused? If you reduce problem A to B, then it means B is at least as hard as A, because solving it would solve A. Which certainly means in this case that Sudoku is NP-hard. And it doesn't (without introducing additional facts) imply Sudoku is in NP either. I don't see anything wrong here, do you?


You've messed up the direction here.

Reducing unknown-complexity to NP-complete means you can bound the complexity from above, but not from below. I can reduce binary integer addition to SAT, which means that binary integer addition is no harder than SAT... but we also know by other algorithms that it is in fact easier than SAT.

To bound from below, you have to reduce NP-complete (or NP-hard suffices) to unknown-complexity.


Oh gosh. I was so focused on the reduction direction that I misread the premise of the comment -- it seems my brain parsed "use SAT to solve Sudoku" as "solve SAT using Sudoku". That's why I was saying that reducing SAT to Sudoku would imply Sudoku is at least as hard as SAT. It indeed would if they had actually done that, but they did the opposite. Not sure how my wires got crossed when reading, but thanks!


No, the GP is correct. If you use SAT to solve Sudoku, you have reduced Sudoku to SAT, not the other way around.

That is, you've shown that an oracle that solves any SAT problem in constant time can solve any Sudoku in polynomial time.

The more difficult side is to show that for any SAT instance, you can reduce it to a Sudoku.

Really, proving that you can use SAT to solve Sudoku is not a great or interesting result; since Sudoku is a decision problem whose solutions can be verified in polynomial time, it is very clear that it is in NP.


> > they show that you can use SAT to solve Sudoku

> Wait, did you mess up the direction here too, or am I confused? If you reduce problem A to B, then it means B is at least as hard as A, because solving it would solve A.

Using SAT to solve Sudoku is a reduction of Sudoku to SAT. The order of the problem names switches depending on how you write it.


Thanks, yeah, that's what I messed up. I was so focused on the reduction direction that I misparsed the statement.


Yeah, this is a confusing topic and easy to get wrong!


> I don't see anything wrong here, do you?

Lololol


There's nothing subtle about the mistake in the paper at hand. The reason everybody expects proving P != NP to be difficult is that it's very hard to say anything at all about arbitrary programs. The authors just assume without justification that any program that solves SAT must operate in a certain recursive way -- obvious rubbish. It's hard to overstate how embarrassing this is for the Springer journal where this nonsense is published.


"Gödel's Incompleteness Theorem"

https://www.youtube.com/watch?v=IuX8QMgy4qE

Algorithmic isomorphism practically ensures most CS approaches will fail to formally model the problem clearly. To be blunt, that million dollar prize will be waiting around a long time.

An infinite number of papers does not necessarily have to converge on a correct solution. =3


Gödel's Incompleteness Theorem has little to do with complexity theory. Complexity theorists routinely find out if two complexity classes are included in each other or not.

Why would Gödel's Incompleteness Theorem stop them for P=NP in particular?

Why is the current definition of P or NP insufficiently formal or clear?

PS: Citing YouTube videos in mathematical discussions is a big red flag indicating you have not really understood things.


I would advise listening to Professor Thorsten Altenkirch's brief introduction to the subject, and consider delaying argumentum ad hominem opinions for a few minutes.

> "not really understood things"

Something currently impossible to prove is by definition confusing. lol =3

https://www.youtube.com/watch?v=aNSHZG9blQQ


The ad-hominem was maybe unwarranted, sorry. Let's get back to the subject matter by me repeating the material question in my comment above, which you ignored: "Complexity theorists routinely find out if two complexity classes are included in each other or not. Why would Gödel's Incompleteness Theorem stop them for P=NP in particular?"

PS: I read through the transcript of the YouTube video you linked (and also know the material from my CS degree education) so I do in fact know what Gödel's Incompleteness Theorem is. Note that the video is really not about P=NP at all, not any more than it is about 1+1=2. The use of P=NP in the video was just to give an example of "what is a statement."


Personally, I never saw degrees as an excuse to post nonsensical answers or troll people with personal barbs. P!=NP can be shown true for certain constrained sets, but it has been shown that P=NP can't currently be proven.

People can twist it any way they like, but stating they can "sort of" disprove or prove P=NP is 100% obfuscated BS. That is why the million dollar prize remains unclaimed.

Have a great day, =3


Complaining about ad hominem and then writing that awful comment, you should be embarrassed.


> P!=NP can be shown true for certain constrained sets

What? That's not even wrong. What do you mean by that?


"Directions (I speak no English)"

https://www.youtube.com/watch?v=6vgoEhsJORU

Best of luck =3


I have no idea why Gemini is saying that. Proof by contradiction is totally fine. Sure, many people prefer a more direct proof, as they are nicer to read, but proof by contradiction is totally valid and sometimes the only way to prove important results.


I'm not disagreeing (I'm on the fence, and also a bit of a noob). I thought this was a good read and on topic.

https://www.quora.com/In-math-are-there-any-proofs-that-can-...


The search was a little easier than that, as we knew how to solve every state in 20 moves, so the problem was proving that some state that could be solved in 20 moves couldn't be solved quicker in some unusual way. While that still took a while, the fact that you know the start and end states limits how many moves you have to search.

