See some details here: https://lwn.net/Articles/955708/

Another reason for people to ditch it and use Godot.

What about Skippy?

nvim + nvim-dap + nvim-dap-ui + gdb beats gdb's barebones TUI by a huge margin.

> Much of Koskinas and his team’s efforts stem from Vanguard having the deepest level of access to a gamer’s computer

Why would you trust anyone to do that? It's malware-style access. Client-side anti-cheats that need kernel-level access are unacceptable; that's why I'd never play any games with such garbage.

Instead of this, let these companies focus on server-side anti-cheats that detect behaviors that can be defined as cheating. Shouldn't AI be good for this kind of task? But of course it's cheaper for them to slap malware on users' computers and call it a day.
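
To make "detect behaviors" concrete, here is a minimal, purely hypothetical sketch of the kind of server-side heuristic I mean (the event fields and thresholds are invented for illustration; a real system would compare against learned population statistics rather than hard-coded cutoffs):

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class ShotEvent:
        hit: bool             # whether the shot landed
        headshot: bool        # whether it was a headshot
        reaction_ms: float    # time from target becoming visible to shot fired

    def looks_like_cheating(events: list[ShotEvent]) -> bool:
        """Flag players whose aggregate aim statistics are implausible for a human."""
        if len(events) < 200:                     # too little data to judge
            return False
        hits = [e for e in events if e.hit]
        accuracy = len(hits) / len(events)
        headshot_ratio = sum(e.headshot for e in hits) / max(len(hits), 1)
        avg_reaction = mean(e.reaction_ms for e in events)
        # Placeholder thresholds; a real service would flag statistical
        # outliers for human review instead of applying fixed limits.
        return accuracy > 0.9 or headshot_ratio > 0.8 or avg_reaction < 120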


What is DX?

Developer experience, like how nice it is to work with

I see, thanks

Developer experience

Thank you!

> completely unnecessary

Is it? How can you boot from encrypted volumes without it? I looked into systemd-boot, but I don't think it's capable.


You can set the resolution explicitly; otherwise it will render at the resolution UEFI started the boot with.

This is a significant peeve of mine. The need to explicitly specify resolution in boot managers is annoying for both laptops and machines that aren’t always used with the same monitor, because no matter what it’s going to end up in fallback with an ugly stretched resolution some portion of the time, rendering beautification with themes somewhat moot.

This limit made sense 20+ years ago but today it feels highly anachronistic, kind of like finding a corded rotary phone mounted on a wall in the kitchen of an otherwise cutting edge home. Surely it’s something that could be fixed?


Maybe, but this is so minor in general that I barely care as long as it boots properly.

A way bigger annoyance is that grub still doesn't support luks2 and uses a gimped variant of libgcrypt without proper hardware acceleration, which takes almost a minute to decrypt boot volumes. That is way more serious than boot resolution annoyances.


That's also a peeve of mine. Is there a way at all for grub to use hardware acceleration there? Or maybe the bootloader isn't allowed to do such things?

Yes - use a newer libgcrypt. They are in the process of switching, but it's just taking very long. I don't see why a bootloader wouldn't be allowed to use the CPU features that accelerate decryption.

> They are in the process of switching,

Nice! Do you have a link tracking the progress of this? Maybe a mailing list or something. I can't manage to find it.

Also, do you know whether grub plans to support luks2?

And maybe even veracrypt - ok, this one is unlikely. (cryptsetup can read veracrypt just fine and the Linux kernel copes with it; maybe it's a matter of porting this code to grub? One issue is that grub would need to embed the number of iterations of the key derivation function somehow - the thing veracrypt calls the PIM - because unlike luks, veracrypt doesn't store it in a header that can be read before decrypting)
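
To make the PIM point concrete, here is a rough sketch of how the KDF iteration count is derived from the PIM, based on the formulas in VeraCrypt's documentation as I remember them (treat the exact constants as approximate). Since none of this is stored in a readable header, grub would have to be told the PIM some other way:

    def veracrypt_iterations(pim: int, system_encryption: bool = False) -> int:
        """Approximate VeraCrypt PBKDF2 iteration count for a given PIM."""
        if pim <= 0:
            raise ValueError("PIM must be a positive integer")
        if system_encryption:
            return pim * 2048          # system (boot-time) encryption
        return 15000 + pim * 1000      # file containers and non-system volumes

    # For example, a PIM of 485 on a container gives 15000 + 485 * 1000 = 500000
    # iterations, while a small PIM trades security margin for faster unlocking.
    print(veracrypt_iterations(485))   # 500000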


The main bug is here: https://savannah.gnu.org/bugs/?55093

But I do recall some other post that went into more detail and said the switch was taking time due to the lack of a stable API and other issues.

Try searching for grub 2 + libgcrypt. Some links are also in that bug.


In terms of material impact it is minor, but as far as impression-shaping papercuts go, bootloader jank is pretty high up on the list.

What can I say? Feels weird to include a JSON library in a bootloader.

We are talking about a stage 3 bootloader which supports booting from encrypted volumes and ZFS snapshots.

JSON support would add 0.001% extra to the overall package.


Maybe a stage 0 bootloader, definitely not a stage 3 one.

My pet peeve is that grub repartitions Windows disks on chainload, so if it ever boots with the disks remapped, there's a chance it'll plow apart the partition table of whatever poor disk got mapped to that hd#.

DMCA 1201 violates freedom of speech, but it's backed by corrupt beneficiaries, so it was never tossed. This one is comparable.

Attorney here! (Not your attorney, not legal advice.)

Why do you believe it runs afoul of the First Amendment?


Circumvention and circumvention tools are prohibited regardless of whether there is any underlying infringement, e.g. preventing an excerpt from being taken for the purpose of criticism. In general, fair use is required to square copyright with free speech, but circumvention for the purposes of fair use is prohibited.

More than that, it prohibits you from telling someone how to circumvent DRM, even if the purpose of doing so is just to, say, watch a movie they legally purchased on the device of your choice.

Banning secret numbers is dumb, but it's also the part of the law which is the most completely ineffective. Do you think you can find a copy of DeCSS on the internet? Of course you can.

The actual problem is the fair use problem, because it prevents you from e.g. creating a third party Netflix or Twitter client without the corporation's approval. Which in turn forces you to use their app and puts them in control of recommendations, keeps you within a walled garden instead of exposing you to other voices, etc. It's a mechanism to monopolize the marketplace of ideas.

Of course, Apple et al have turned this against the creators in order to extract their vig, which is a related problem.


Courts have never equated code with speech in such a way that it’s protected the same way as, say, political speech. People have been making the argument that “code is speech” (without understanding that not all speech is treated alike by our legal system) since DMCA was still being drafted 20+ years ago, but the legal system has never seen it that way.

What about Bernstein v. United States?

> the Ninth Circuit Court of Appeals ruled that software source code was speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional.

https://en.wikipedia.org/wiki/Bernstein_v._United_States


Obviously it's impossible to cover all cases in a HN comment. I was perhaps a bit too broad when I suggested that the legal system doesn't treat code as speech. It does, sometimes; but even when it does, not all speech is treated alike for the purpose of legal analysis.

In the Bernstein cases, the Government was trying to squelch the author from publishing code that he personally wrote, that had scientific expressive value, and of which the Government required prepublication review, and the Ninth Circuit held:

"We emphasize the narrowness of our First Amendment holding. We do not hold that all software is expressive. Much of it surely is not. Nor need we resolve whether the challenged regulations constitute content-based restrictions, subject to the strictest constitutional scrutiny, or whether they are, instead, content-neutral restrictions meriting less exacting scrutiny. We hold merely that because the prepublication licensing regime challenged here applies directly to scientific expression, vests boundless discretion in government officials, and lacks adequate procedural safeguards, it constitutes an impermissible prior restraint on speech."

The Court, as you can see, was trying very hard not to declare some sort of framework or test to be applied to future cases.


This reply doesn't seem responsive to the issue. It's not just whether you can censor someone from publishing code -- that's a separate problem. It's whether the law can prohibit circumvention even when the copying is fair use -- or when the same technological protection measure is locking away works in the public domain.

We’re talking about freedom of speech here, so First Amendment law is on point. There’s no other mechanism in our legal system than the Constitution that would prevent DMCA, including its anti-circumvention provisions, from having full force and effect.

Similarly, absent some Constitutional protection, states can restrict who can purchase lock picks.


The Constitution doesn't seem to be very respected these days, in either the Executive or Congress.

It obviously doesn't, because the Constitution is by definition whatever the Supreme Court says it is, yet by assumption the law in question hasn't been tossed. However, notice that the GP said "freedom of speech" and not the First Amendment. Perhaps they understand the former to be more expansive than the latter.


The DeCSS case actually went to court (Bunner case), though it wasn’t about the T-shirt. It was a civil case based on trade secret law, not DMCA. The trial court assumed that sharing a trade secret without the permission of the secret’s owner is unlawful. That assumption wasn’t challenged.

That law is consistent with trade secret law in general. The First Amendment does not require trade secrets to lose all protection. If it did, you could freely disclose your own employer’s secrets without penalty.


Did Bunner work at the DVD Consortium? Can you freely discuss my employer's secrets without penalty?

No, I cannot, if I know or have reason to know it is a trade secret. (The misconception that the law allows the equivalent of "secrets laundering" is still far too pervasive.)

The Uniform Trade Secrets Act (the law in most states) defines misappropriation as:

(i) acquisition of a trade secret of another by a person who knows or has reason to know that the trade secret was acquired by improper means; or

(ii) disclosure or use of a trade secret of another without express or implied consent by a person who

(A) used improper means to acquire knowledge of the trade secret; or

(B) at the time of disclosure or use, knew or had reason to know that his knowledge of the trade secret was

(I) derived from or through a person who had utilized improper means to acquire it;

(II) acquired under circumstances giving rise to a duty to maintain its secrecy or limit its use; or

(III) derived from or through a person who owed a duty to the person seeking relief to maintain its secrecy or limit its use; or

(C) before a material change of his [or her] position, knew or had reason to know that it was a trade secret and that knowledge of it had been acquired by accident or mistake.


> Can you freely discuss my employer's secrets without penalty?

Yes; in order for trade secrets to be protected, they have to be secret.

> Did Bunner work at the DVD Consortium?

I have no knowledge of this.


> Yes; in order for trade secrets to be protected, they have to be secret.

This is not true. See the Uniform Trade Secrets Act for the full text. People who know or have reason to know that the information is a secret are bound by the law, and the definition of "trade secret" does not require that the information never have been disclosed to an unauthorized person.


If you can, then Bunner should be able to, too.

If I can trade Amazon stocks based on where I think Amazon stocks are going to go, then Jeff Bezos should be able to too.

The Copyright Clause in the Constitution grants Congress the power to create copyright statutes, but does not itself create copyright. Federal copyright statutes defer to the First Amendment when they conflict. Copyright (a first-order speech restriction, where speech that infringes copyright is by definition unprotected by the First Amendment) is a bet: giving authors some extent of exclusive control over copying, distribution, and modification of their works will be better for freedom of speech than not giving any control. Fair use provides a flexible but imperfect safety valve for specific situations where the copyright bet might fail. Fair use is, by definition, not copyright infringement [1]. Therefore, I view the fair use statute (as well as the fair use doctrine prior to codification) as necessary for copyright law to avoid violating the First Amendment, and I believe that most restrictions on fair use rights are also restrictions on First Amendment rights.

The text of DMCA 1201 does not restrict fair uses [2]:

> (1)Nothing in this section shall affect rights, remedies, limitations, or defenses to copyright infringement, including fair use, under this title.

However, in practice, DMCA 1201 has a plausible chilling effect on some fair uses and First Amendment protected speech. For example [4]:

> Opponents also say it creates serious chilling effects stifling legitimate First Amendment speech. For example, John Wiley & Sons changed their mind and decided not to publish a book by Andrew Huang about security flaws in the Xbox because of this law. After Huang tried to self-publish, his online store provider dropped support because of similar concerns. (The book is now being published by No Starch Press.)

Although the D.C. Appeals Court in Green v. Department of Justice found that the triennial rulemaking process for requesting exemptions to DMCA 1201 from the Library of Congress does not restrict freedom of speech [5], I emphatically disagree with the court in that regard because the Copyright Office summarizes the rulemaking process like this [6][7]:

> The Librarian of Congress, pursuant to section 1201(a)(1) of title 17, United States Code, has determined in this ninth triennial rulemaking proceeding that the prohibition against circumvention of technological measures that effectively control access to copyrighted works shall not apply for the next three years to persons who engage in certain noninfringing uses of specified classes of such works. This determination is based on the Register’s Recommendation.

Why would the Librarian of Congress need to provide exemptions when the "certain noninfringing uses" should already be exempted by the text of the DMCA 1201? That is, why would the granted exemptions include things that should already fall under fair use and the First Amendment in almost all cases, like "Audiovisual Works—Criticism and Comment—Filmmaking", "Audiovisual Works—Criticism, Comment, Teaching, or Scholarship— Universities and K–12 Educational Institutions", and "Literary Works—Text and Data Mining—Scholarly Research and Teaching" [6]? Why do advocacy groups have to affirmatively request and justify these 90%-fair-use exemptions which expire every three years? It sure seems to me like the writers of DMCA 1201, the Librarian of Congress, and someone at the Copyright Office observed or intuitively understood that DMCA 1201 would significantly restrict First Amendment protected speech in practice. Alternatively or in addition, said people observed or intuitively understood that fair use as an affirmative defense significantly fails to protect First Amendment protected speech in practice.

[1] https://www.law.cornell.edu/uscode/text/17/107

[2] https://www.law.cornell.edu/uscode/text/17/1201

[3] https://www.eff.org/deeplinks/2024/08/federal-appeals-court-...

[4] https://en.wikipedia.org/wiki/WIPO_Copyright_and_Performance...

[5] https://en.wikipedia.org/wiki/Green_v._Department_of_Justice...

[6] https://www.govinfo.gov/content/pkg/FR-2024-10-28/pdf/2024-2...

[7] https://www.copyright.gov/1201/2024/


Using poor-quality AI suggestions as a reason not to use Rust is a super weird argument. Something is very wrong with such an idea. What's next, avoiding everything where AI performs poorly?

Scripting being flexible is a fair point, but that's not an argument against Rust either. Rather, it's an argument for more separation between the scripting machinery and the core engine.

For example, Godot allows using Rust for game logic if you don't want to use GDScript, and it doesn't really mess up the design of their core engine. It's just more work to allow such flexibility, of course.

The rest of the arguments are more in the familiarity / learning curve group, so nothing new in that sense (Rust is not the easiest language).


Yes, a lot of people are reasonably going to decide to work in environments that are more legible to LLMs. Why would that surprise you?

The rest of your comment boils down to "skills issue". I mean, OK. But you can say that about any programming environment, including writing in raw assembly.


The first argument sounds like a major fallacy to me. It doesn't surprise me, but I find it extremely wrong.

Why?

Because it's a discouragement of learning based on the mediocrity of AI. I find that such an idea perpetuates the mediocrity (not just of the AI itself but of whatever it's used for).

It's like saying: I don't want to learn how to write a good story because AI always suggests a bad one anyway. Maybe that delivers the idea better.


It's not at all clear to me what this has to do with the practical delivery of software. In languages that LLMs handle well, with a careful user (i.e., not a vibe coder; someone reading every line of output and subjecting most of it to multiple cycles of prompting), the code you end up with is basically indistinguishable from the replacement-level code of an expert in the language. It won't hit that human expert's peaks, but it won't generally sink below their median. That's a huge accelerator for actually delivering projects, because, for most projects, most of the code need only be replacement-grade.

Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing? Like, the same reason I'd use only hand tools when doing joinery in Japanese-style woodworking? There's a place for that! But most woodworkers... use table saws and routers.


It's not about delivery of software; it's about avoidance of learning based on the mediocrity of AI. I.e., the original post literally cites LLMs being poor at Rust suggestions as a reason to avoid the language.

That implies that proponents of such an approach don't want to pursue learning that requires them to do something exceeding the mediocrity level set by the AI they rely on.

For me it's obvious that it has a major negative impact on many things.


Your premise here being that any software not written in Rust must be mediocre? Wouldn't it be more productive to just figure out how to evolve LLM tooling to work well with Rust? Most people do not write Rust, so this is not a very compelling argument.

Rust is just an example in this case, not essential to the point. If someone evolves LLMs to work better with Rust, they will still be mediocre at something else, and using that as an excuse to avoid it is problematic in itself; that's what I'm saying.

Basically, decide whether to learn Rust based on whether it helps solve your problems better, not on whether some LLM is useful for it in this case.


> Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing?

The strongest reason I can think of to discard this kind of automation, and do so proudly, is that it's effectively plagiarizing from all of the experts whose code was used in the training data set without their permission.


No plausible advance in nanotechnology could produce a violin small enough to capture how badly I feel about our profession being "plagiarized" after decades of rationalizing that the importance of Star Wars to the culture justifies movie piracy.

Artists can come at me with this concern all they want, and I feel bad for them. No software developer can.

I disagree with you about the "plagiaristic" aspect of LLM code generation. But I also don't think our field has a moral leg to stand on here, even if I didn't disagree with you.


I'm not making an argument from grievance about my own code being plagiarized. I actually don't care if my own code is used without even the attribution required by the permissive licenses it's released under; I just want it to be used. I do also write proprietary code, but that's not in the training datasets, as far as I know. But the training datasets do include code under a variety of open-source licenses, both permissive and copyleft, and some of those developers do care how their code is used. We should respect that.

As for our tendency to disrespect the copyrights of art, clearly we've always been in the wrong about this, and we should respect the rights of artists. The fact that we've been in the wrong about this doesn't mean we should redouble the offense by also plagiarizing from other programmers.

And there is evidence that LLMs do plagiarize when generating code. I'll just list the most relevant citations from Baldur Bjarnason's book _The Intelligence Illusion_ (https://illusion.baldurbjarnason.com/), without quoting from that copyrighted work.

https://arxiv.org/abs/2202.07646

https://dl.acm.org/doi/10.1145/3447548.3467198

https://papers.nips.cc/paper/2020/hash/1e14bfe2714193e7af5ab...


I don't mean to attribute the overwhelmingly common sentiment about intellectual property claims for things other than code to you, and I'm sorry that I communicated that (you didn't call me on it, but you'd have had every right to).

I stand by that argument, but acknowledge it isn't relevant here.

My bigger thing is just, having the experience of writing many thousands of lines of backend code with an LLM (just yesterday), none of what I'm looking at can meaningfully be described as "plagiarized". It's specific to my problem domain (indeed, to my extremely custom stack) and what isn't domain-specific is just extremely generic stuff (opening a boltdb, printing a table with lipgloss), just assembled precisely.


It could be a weird argument, but as a Rust newcomer, I have to say it's really something that jumps out at you. LLMs are practically useless for anything non-basic, and Rust contains a lot of non-basic things.

So, what are the chances that the pendulum swings to lower-level programming via LLM-generated C/C++ if LLM-generated Rust doesn't emerge? Note that this question is a context switch from gaming to something larger. For gaming, it could easily be that the engine and the culture around it (frequent regressions, etc.) are bigger problems than the language.

I haven't coded in C/C++ in years, but friends who do, and who work on non-trivial codebases in those languages, had a really crappy experience with LLMs too.

A friend of mine only understood why I was so impressed by LLMs once he had to start coding a website for his new project.

My feeling is that low-level / systems programming is currently at the edge of what LLMs can do. So I'd say that languages that manage to provide nice abstractions around those types of problems will thrive. The others will have a hard time gaining support among young developers.


Developers often pick languages and libraries based on the strength of their developer tools. Having great dev tools was a major reason Ruby on Rails took off, for example.

Why exclude AI dev tools from this decision making? If you don’t find such tools useful, then great, don’t use them. But not everybody feels the same way.


It's a weird idea now, but it won't be weird soon. As devs and organizations further buy into AI-first coding, anything not well-served by AI will be treated as second-class. Another thread here brought up the risk that AI will limit innovation by not being well-trained on new things.

I agree that such a trend exists, but it's extremely unhealthy, and developers of all people should have more of a clue about how bad it is.
