
Went to write exactly that. Ambitions are great and I don't want to be dissuasive, but monumental tasks require monumental effort, and monumental effort requires monumental care. That implies good discipline and certain "beauty" standards that also apply to commit messages. Bad sign :)

Not really. In the initial phase of a project there is usually so much churn that enforcing proper commit messages is not worth it, until the dust settles.

I massively disagree. It would have taken the author approximately one minute to write the following high-quality hack-n-slash commit message:

```
Big rewrites

* Rewrote X
* Deleted Y
* Refactored Z
```

Done


Many times it is “threw everything out and started over” because the fundamental design and architecture was flawed. Some things have no incremental fix.

Different people work differently.

Spending a minute writing commit messages while prototyping something will break my flow and derail whatever I’m doing.


I am deeply suspicious of anyone who doesn't bother or who is unable to explain this churn. For the right kind of people, this is an excellent opportunity to reflect: why is there churn? Why did the dust not settle down? Why was the initial approach wrong and reworked into a new approach?

I can understand this if you are coding for a corporation. But if it's your own project, you should care enough about it to write good commit messages.


Is your objection to the inevitable fact that requirements churn early on (regardless of whether you're doing agile or waterfall)?

Or is your objection that solo devs code up prototypes and toy with ideas in live code instead of just in their mental VM in grooming sessions?

Or is your objection that you don't think early prototypes and demos should be available in the source tree?


None of the above. My objection is the lack of explanation.

Churn is okay. Prototypes are okay. Toying with ideas is okay. They should all be in the source tree. But I would want an explanation for the benefit of future readers, including the future author. Earlier in my life I more than once ran blame on a piece of code only to find that I had written the line myself, with a commit message that did not explain it adequately. These days that's much rarer, because I hold myself to writing good commit messages. Furthermore, the act of writing a commit message is soothing and a nice break from writing for computers.

Explain how requirements have changed. Explain how the prototype didn't work and led to a rewrite. Explain why some idea that was being toyed with turned out to be bad.
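For instance, a single made-up message along these lines would cover all three:

```
Rewrite parser as a state machine

Streaming input became a requirement after the demo, and the
recursive-descent prototype couldn't support it, so this replaces it
wholesale. The backtracking idea from last week turned out to be too
slow in practice.
```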

Notice that the above are explanations. They do not come with any implied actions. "Why is there churn" is a good question to answer but "how do we avoid churn in the future" is absolutely not. We all know churn is inevitable.


For single-author quickly-changing projects I'd guess that it's quite likely for only like 1% of the commits to be looked at to such an extent that the commit message is meaningfully useful. And if each good commit message takes 1 minute to write (incl. overhead from the mental context switching), each of those uses had better save 100 minutes compared to just looking at the diff.
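Rough back-of-envelope (all numbers are my assumptions above, not measurements):

```python
commits = 100            # commits written
cost_per_msg_min = 1     # minutes to write each good message
useful_fraction = 0.01   # ~1% of messages ever get read closely

total_cost = commits * cost_per_msg_min   # 100 minutes spent writing
useful_reads = commits * useful_fraction  # ~1 message actually consulted
print(total_cost / useful_reads)          # -> 100.0 minutes to break even
```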

I suspect you have never worked on single-author projects where you fastidiously write good commit messages. If you never got into the habit of writing good commit messages, you won't find them valuable at all when you are debugging something or just wondering why something is written in a certain way. Once you consistently write good commit messages, you begin to rely on them all the time.

"why something is written in a certain way" most likely doesn't even have an answer while a project is still in the rewrite-large-parts-frequently stage. Sure, you could spend some time conjuring up an explanation, but that's quite possibly gonna end up useless when the code is ruthlessly rewritten anyway.

That said, specific fixes or similar can definitely do with good messaging. Though I'd say such belongs in comments, not commit messages, where it won't get shadowed over time by unrelated changes.


And for your average backup system it's only like 1% of backups you need to be able to restore, probably much fewer. Trouble is, you won't know which ones ahead of time - same for commits.

Difference being that if you automate backups they're, well, fully automatic, whereas writing good commit messages always continues to take time.

How awesome that this exists. I was learning how CPUs work and designing my own CPU with an emulator some 20 years ago as a teenager, by googling my way into obscure forums, blog posts, and the homemade CPU webring. Not long ago I ran an experiment: "would I be able to find all the learning materials to do that again by myself on Google?" The outcome deeply unsettled me. Google just gives you total garbage. Half of the results are AI-generated; the other half is sloppily written, half-assed, abstract, pseudo-tutorial-like nonsense on Medium or some other paid-for-engagement platform. My children would not be able to reproduce that kind of self-learning without watching some YouTuber do it, or accessing some curated paid course, or accidentally stumbling upon "gems" like this one, e.g. via HN. We desperately need old Google and the old internet back, to somehow save and preserve humanity's knowledge.

I am glad you followed up on this, to see if you could do it again! That matches my experience.

I remember feeling like the big tech corps had turned "consumer" into a pejorative and started relentlessly abusing their customers circa 2016... Especially Microsoft, post Windows 8. Consumer devices don't need to work. That's for pro devices. Consumer devices just need to sell ads, soak up user time, and let businesses market their goods for consumption!

Search results have only degraded from late 2019 or so onwards. Even on other platforms, like YouTube -- you get 4-5 real results, and the rest are "suggested for you", even if you've logged out. Google and YouTube both feel like "consumer" search engines, where advertising and eyeball time trump usefulness and user authority (i.e. the user being able to ask for what they want, and get it).


I agree. It's hard though: SEO people are malicious, persistent, and, with modern tech, have incredible tools.

And with hand curation, it's hard to feel like it's 'worth it' when, instead of being able to build a community, your results are scraped and shown out of context.

If you have any thoughts on how to get that sort of culture back, I'm open to them.


I pay for kagi.com and they seem to be fighting that battle. I also frequent their "small web" (https://blog.kagi.com/small-web) initiative.

tbh I have dreamt about what could be possible if we made some sort of "closed doors" internet branch. You access it with a single account bound to you, invite only, something like PGP with a web of trust. Good "legacy" internet websites can be chained and indexed through some sort of thematic webring, with good search and comment functionality added on top, like a global HN. Any external content is opt-in and vetted. Internal content gets a user rating system (not Google's SEO-algorithm ranking), i.e. allowing users to downvote nonsense bullshit into hell. Robots are allowed on internal content only through a strictly controlled API that also pays the original authors. Browsing automatically costs some "tokens" that are paid to the owners of the sites you visit, so at least semi-useful sites can sustain themselves and good ones make money, without spamming everything with ad banners or being incentivized to produce ragebait-clickbait content. But that's all a nonsense dream; nobody will be willing to pay for browsing the internet, even if it's high quality.

In the same vein, I feel like the 'fair source' movement makes sense - pay a fixed percentage of profit and get access to a massive collection of licensed software.

Just like with yours, though, allocation is centralized and it's very hard to make everyone happy. And nobody wants to pay for something that used to be free.


Part of me thinks we need a new protocol, and a new lightweight web built around markdown with absolutely no client-side active content allowed.

What I'm not sure about is how to combat bad actors / spammers / low-effort pages and AI slop. I'm leaning towards some kind of git-like storage with history as a mandatory part of the protocol, and some kind of cryptographic web-of-trust endorsement system.
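To make that concrete, a minimal sketch of what a page record might look like (the field names and structure are entirely made up; this is not a real protocol):

```python
# Hypothetical page record: content-addressed, with mandatory history and an
# author identity that web-of-trust endorsements could attach to.
import hashlib, json, time

def make_record(markdown_body, author_pubkey, parent_hash):
    record = {
        "body": markdown_body,    # markdown only, no client-side active content
        "author": author_pubkey,  # identity for web-of-trust endorsements
        "parent": parent_hash,    # previous version -> git-like mandatory history
        "timestamp": int(time.time()),
    }
    # Content address: hash of the canonical record, so history is tamper-evident.
    record["id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

v1 = make_record("# Hello small web", "alice-pubkey", None)
v2 = make_record("# Hello small web, revised", "alice-pubkey", v1["id"])
```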


Sounds kinda like Gemini on top of IPFS/Dat/Hypercore. IMO there are some cool things there, but I'm not sure the problem is a technical one.

Content addressing has some real benefits, in that it allows something like the Internet Archive to be transparent (i.e. it doesn't matter who hosts it). But that mostly solves linkrot.

Searching through everything is still as hard as ever, and if the incentives are the same it will be just as gamed. And people would have to make good content in the first place, which is hard to justify without a good audience at the same time.


I'm starting to use Claude.ai more and more instead of googling. For the moment this seems to cut through the noise of the modern web.

I believe that it does. I'm worried, long term, that it will discourage people from making and curating webpages themselves, though.

Definitely a possibility - hopefully AI will similarly empower creation of better content instead of AI slop noise.

In fact I wonder if Claude.ai could come up with similar CPU teaching tools and a syllabus based on some of the great resources linked in this discussion.


I mean, probably, but only because it was trained on this already.

For new things though, why would you bother posting them to the internet if you can't use it to build an audience or make a connection?


and possibly not even credited for the content you created...

True


I'd appreciate more explanation of the power of combined bit-flip & goto.

Sure!

https://github.com/tomhea/flip-jump/wiki/Learn-FlipJump

This will let you understand how to implement the very basic "if" in flipjump.

I tried to make it as easy as possible for newcomers, but please feel free to let me know if something is written in a complicated way.

After you understand up to the macros, you can try to understand the xor macro yourself; most of the library is built on it: https://github.com/tomhea/flip-jump/blob/fe51448932e78db7d76...
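If it helps, here's a toy model of the core idea in Python (a simplified sketch; the real FlipJump word size, memory layout, and semantics differ):

```python
# The only operation: flip bit F, then jump to address J.
# Memory is a flat list of bits; each op is two w-bit words (F, then J).

def read_word(mem, addr, w):
    """Read w bits at addr, little-endian, as an integer."""
    return sum(mem[addr + i] << i for i in range(w))

def run(mem, w=8, max_steps=1000):
    ip = 0
    for _ in range(max_steps):
        f = read_word(mem, ip, w)      # operand 1: which bit to flip
        j = read_word(mem, ip + w, w)  # operand 2: where to go next
        mem[f] ^= 1                    # the flip: this is ALL the computation
        if j == ip:                    # jump-to-self: treat as halt
            break
        ip = j                         # the jump
    return mem

# Branching ("if") falls out of self-modification: point F at a bit inside a
# *later* op's J field, and the flip changes where that op will jump.
```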


I like how the chopsticks catch (a very impressive feat) completely distracts everyone from the totally fucked timeline and the already-spent budget of the Mars mission. It's like any criticism is being drowned out in loud cheers. Only time will tell, but I hope I will be wrong on this one.



What's the criticism, exactly? Like, I don't get your point. Yes, they are behind on timelines and on Mars; does that mean we should post reddit-tier cynical comments about it every time? I'm not saying that you're doing that, it's more that I don't get why this is surprising.

And on the other hand, it's also funny to see how "skeptics" (whatever that means in this case) dismiss or belittle achievements that were claimed to be impossible a few months or years ago (for example, the chopstick landing). It's like a never-ending treadmill of:

"this is impossible" -> "okay, it happened, that's cool, but now xyz is impossible".

Plus, it seems normal to me that people care less about budget details or delays than about really cool technical feats.


[flagged]


They're the ones who were sent in to return two humans from the ISS after Boeing's ship malfunctioned last year. The explosions are typically from R&D projects; SpaceX is capable and practiced at transporting humans (and cargo) without their ships blowing up, and that's where most of their actual business currently is. (The Dragon is the vehicle they use for manned ISS missions.)


SpaceX is far and away the most capable organization on Earth at taking all types of payloads to low Earth orbit.


SpaceX takes non-human payloads to low Earth orbit every couple of days. Over 100 launches in 2024.

They regularly take human payloads, too. They’re the only American launcher currently able to do so.


I actually get this take, but for me it's the ultimate distraction and a way to legitimize the CEO's rubbish behavior.

"How can he be wrong when he is a genius and can land a rocket in two chopsticks?"


I’m in a slightly different boat. The CEO's rubbish behavior sucks, but the company shouldn't be diminished by that. The people behind SpaceX are a modern-day Apollo program. Absolute marvels of engineering.


The Apollo program was amazing because it was taxpayer funded; it was every American's project. SpaceX isn't.

They are making the impossible merely late. Which, you know, is still pretty fucking cool.

I’d love to see any other country or competitor catch a stainless steel rocket larger than the Statue of Liberty cruising back to Earth at suborbital velocity. Everybody else is so far behind it's not even funny.

SpaceX is cool as shit. Screw the "skeptics" and haters. Some people have a complete lack of imagination.


Starship started development in 2012. SLS started development in 2011, New Glenn in 2012.

SLS flew in 2022 around the moon. New Glenn just flew, reaching orbit with an actual payload.

Starship hasn't reached orbit; the best they did was send a banana to the Indian Ocean.

Remind me again how SpaceX is the fast company?


No, they are making the possible very late.


> very late

when was your fully reusable full-flow staged-combustion rocket engine scheduled to fly, again?


Why does that matter? SpaceX is setting themselves up for failure by insisting that they need to nail re-entry first. Whenever they focus a test flight on re-entry, I wonder why they aren't working on more important things like the payload doors or orbital brimming. They will get the re-entry tests for free!

And even if they don't: the upper stage is cheap enough that it can be expended and still be cheaper per flight than Falcon Heavy. So that tells me the delays are on purpose. Their test flight planning is designed to maximize ego stroking.


Does this work with Boox?


Unfortunately it does not! If you or someone else has the appetite, I can open-source this so people can contribute more integrations (some friends wanted to get the PDF sent to a Kindle email).


Would love the PDF-to-Kindle email version.


Kindle support would be great!


I have two points. First, in all the companies I have worked for, DEI was just bureaucratic, corporate, fucking "tick the box" bullshit. Maybe it is time we start recognizing and acknowledging this part of it too? Nobody is rolling back morality, and in most of tech culture there were no issues in the first place (a high concentration of different immigrant cultures, very high LGBT prevalence, people on the spectrum, etc.).

Second, while the ability to spread disinformation at scale is a huge danger to our society, I would also prefer to absolutely remove the "squeaky clean" feeling from social networks - for example, what you get while visiting LinkedIn, or when people have to use words like "seggs" or "unalived" to avoid idiotic filters. Fuck, I want to be able to say fuck without being downvoted to hell. LinkedIn, despite being the most "positive" social network, is the most toxic and useless one, because all interactions are robot-like. There must be some "dirty" part of 9gag and 4chan present in social networks too. This is what makes interactions sincere. And yes, this also brings the ugly parts of our societies to light - racism, bigotry, hate. But there is no light without darkness. I hope that by removing some of the guardrails, social networks become more "raw", but more social too; otherwise it's a corporate nonsense space like LinkedIn. You may as well replace all users with ChatGPT bots. I may be wrong on this one, though, idk.


How debuggable is this (besides sifting through a wall of debug log text)? Can you step through your declarative GUI building process inside the DSL, or is it like this: "DSL text goes into magic magic... POOF! Here is the result; hopefully nothing went wrong, or glhf"?


The popular debugger for Ruby is a combination of two libraries: byebug and pry. Using these should allow you to step into/over code in a way that will be familiar if you've used other breakpoint-based debuggers.

If you end up giving it a try, please report back!


These two gems have been superseded by the `debug` gem.

https://github.com/ruby/debug


Could you say more about that? I have been using pry, and it appears to still be updated. Is there a reason to stop using pry, or are you expressing a preference for the official debug gem?

Thanks!


Seems like the latter to me. One or the other gem is fine depending on your preferred interface.


I've built a DSL engine on top of CUE + Go's text/template [1]. This largely becomes feeding data into a set of templates, and even this can be hard to debug because template engines often lack the extras needed to support it.

I'd be curious to see if a more code-based DSL engine has better debug support. I would imagine you would be stepping through both the DSL code and the engine, if it is more dynamic (i.e. there is not a two-step process for DSL authoring).

What I like about a text/template engine is that anyone can use it (create new DSLs) without knowing the language the engine is implemented in. CUE appeals to me as the language for writing/using the DSL because (1) I don't have to learn a new syntax per DSL and (2) it becomes data (json/yaml) I can use anywhere for other purposes beyond generating code.

[1] https://github.com/hofstadter-io/hof


My experience with the interpreter pattern is that you will spend 90% of your debugger time stepping through abstract "eval" functions that are irrelevant to what you want to debug.


I solved this by writing code to walk the stack and extract the information I needed (this was Python, but I'm sure it would translate to Ruby).
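Roughly along these lines (a reconstruction from memory; the function and variable names are illustrative):

```python
import inspect

def dsl_trail(var_name="node"):
    """Walk up the interpreter's call stack and collect the DSL node held in
    each eval() frame, skipping the abstract plumbing in between."""
    trail = []
    for frame_info in inspect.stack():
        local_vars = frame_info.frame.f_locals
        if var_name in local_vars:              # an eval(node, env)-style frame
            trail.append(local_vars[var_name])  # remember the active DSL node
    return trail
```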


Buy a couple of DVDs and resize your tab to be a single pixel wide - infinite points glitch.


> "GPU "software" raytracer"

> WebGPU

> this project is desktop-only

Boss, I am confused, boss.


I'm using WebGPU as a nice modern graphics API that is at the same time much more user-friendly and easier to use than e.g. Vulkan. I'm using a desktop implementation of WebGPU called wgpu, via its C bindings, wgpu-native.

My browser doesn't support WebGPU properly yet, so I don't really care about running this thing in browser.


That's a fascinating approach.

And it makes me a bit sad about the state of WebGPU; hopefully that'll be resolved soon. I too am on Linux, impatiently waiting for WebGPU to be supported in my browser.


Can you debug it in browser?

