otabdeveloper4's comments

AI is the biggest productivity boost in human history since the invention of writing. It only makes sense that they need 100x and 10000x engineers!

No, that's just the normal slope of the hype curve as you start figuring out how the man behind the curtain operates.

Congrats, you grew up. It's not Claude's fault.

> white people have way lower genetic diversity

This isn't true.


Indeed, really not true. For example: I am quite mixed, declared my own race on an unclaimed spot of non-scientific identity formation, and, despite what some would say and discounting any possible future irradiation, my genetic diversity is absolutely zero ;-)

This is the accepted theory, but akshually there is no way, from a linguistic standpoint, that the "k" between the "s" and "l" would simply appear out of nowhere. Linguistically, it's impossible.

The etymology deriving from σκυλεύω makes more sense.


Interesting, could you expand on this please? What does σκυλεύω mean and how would this transformation work?

"Dispossessed enemy, captured": this indeed makes a lot more sense (https://lsj.gr/wiki/%CF%83%CE%BA%CF%85%CE%BB%CE%B5%CF%8D%CF%...) than the notion that "slav" would somehow have evolved "up" linguistically to get a "k" inserted. Y'all know languages mellow over time, don't y'all?

> from "greatest productivity boost in mankind's history" to "uhm akshually you are holding it wrong" in the short span of two years

Yikes. Cringiest tech bubble ever.


All code is perf-sensitive.

Also, literally every language claims "only a 2x slowdown compared to C".

We still end up coding in C++, because see the first point.


I’m not claiming only 2x slowdown. It’s 4x for some of the programs I’ve measured. 4x > 2x. I’m not here to exaggerate the perf of Fil-C. I actually think that figuring out the true perf cost is super interesting!

> All code is perf-sensitive.

That can’t possibly be true. Meta runs on PHP/Hack, which are ridiculously slow. Code running in your browser is JS, which is like 40x slower than Yolo-C++, and yet it’s fine. There are so many other examples of folks running code that is just hella slow, way slower than “4x slower than C”.


FWIW, I just tested it on a random program I wrote recently, and it went from 2.085 seconds with Clang+jemalloc to 18.465 seconds with Fil-C. (No errors were reported, thank goodness!) So that's a 9x new worst case for you :-) It's basically a STL performance torture test, though. TBH I'm impressed that Fil-C just worked on the first try for this.
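
For anyone curious, a hypothetical stand-in for that kind of "STL torture test" might look like the sketch below (this is not the actual program measured above, just the allocation-heavy pattern described): heavy node-based container churn timed with std::chrono, built once with plain clang++ and once with the Fil-C toolchain.

    #include <chrono>
    #include <cstdio>
    #include <map>
    #include <string>

    // Hypothetical stand-in, NOT the program benchmarked above: lots of small
    // allocations and node-based container churn, which is where a checked
    // runtime like Fil-C tends to pay the most. Build with clang++ and with
    // the Fil-C toolchain, then compare the printed wall-clock times.
    int main() {
        auto start = std::chrono::steady_clock::now();

        long checksum = 0;
        for (int round = 0; round < 200; ++round) {
            std::map<std::string, int> m;        // one node allocation per insert
            for (int i = 0; i < 20000; ++i)
                m[std::to_string(i)] = i;        // plus a std::string allocation
            for (const auto& [k, v] : m)
                checksum += v + (long)k.size();
        }

        auto secs = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start).count();
        std::printf("checksum=%ld elapsed=%.3fs\n", checksum, secs);
    }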

And on the next one, a SIMD-heavy searcher thingie (where it found a real bug, though thankfully “only” reading junk data that would be immediately discarded!), it went from 7.723 to 379.56 seconds, a whopping 49x slowdown.
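
That bug class ("only" reading junk data that is immediately discarded) is common in hand-rolled SIMD search loops. A hypothetical sketch of the pattern, not the commenter's actual code:

    #include <immintrin.h>
    #include <cstddef>
    #include <cstdint>

    // Hypothetical sketch of the bug class, not the actual searcher: a SIMD
    // "find byte" loop that always loads a full 16-byte vector, even when fewer
    // than 16 bytes of the buffer remain. The junk lanes are masked off, so the
    // answer is right, but the load itself is out of bounds -- exactly what a
    // bounds-checking implementation like Fil-C will flag.
    ptrdiff_t find_byte(const uint8_t* data, size_t len, uint8_t needle) {
        const __m128i n = _mm_set1_epi8((char)needle);
        for (size_t i = 0; i < len; i += 16) {
            // Out of bounds whenever len - i < 16: reads past the end of data.
            __m128i chunk = _mm_loadu_si128((const __m128i*)(data + i));
            uint32_t mask = (uint32_t)_mm_movemask_epi8(_mm_cmpeq_epi8(chunk, n));
            if (i + 16 > len)                     // discard the junk tail lanes
                mask &= (1u << (len - i)) - 1;
            if (mask)
                return (ptrdiff_t)(i + (size_t)__builtin_ctz(mask));
        }
        return -1;
    }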

All code is perf-sensitive. Not all code is important enough to be written as we'd like it to be.

We don't want all code to be written in some C or C++ dialect, either.

Then why use C? Take a look at actually perf-sensitive hot loops: they are predominantly inline assembly with a bunch of SIMD hacks, which can be 1000x faster than plain C...

Unfortunately inline assembly isn't portable even to different revisions of one CPU architecture, much less different ones.

> All code is perf-sensitive.

I'm doing some for loops in bash right now that could use 1000x more CPU cycles without me noticing.

Many programs use negligible cycles over their entire runtime. And even for programs that spend a lot of CPU and need tons of optimizations in certain spots, most of their code barely ever runs.

> Also, literally every language claims "only a x2 slowdown compared to C".

I've never seen anyone claim that a language like python (using the normal implementation) is generally within the same speed band as C.

The Benchmarks Game is an extremely rough measure, but you can pretty much split languages into two groups: 1x-5x slowdown versus C, and 50x-200x slowdown versus C. Plenty of popular languages are in each group.


> I've never seen anyone claim that a language like python (using the normal implementation) is generally within the same speed band as C.

Live long enough and you will. People claimed it about PyPy back in the day, when it was still being hyped.


PyPy is not the normal implementation. I was specifically excluding special implementations that only do part of a language and do it much faster. Especially with something like PyPy, which has extremely good best-case scenarios, people can get too excited.

We had MS-DOS shovelware shareware on CD-ROM back in the day. The cartridge thing is a specific nostalgia thing that not everyone experienced.

> because they take time

No they don't. It's literally just a skill issue.


Fast algorithms are often more complicated.

To give just one simple example: to get the textbook complexity bound for Dijkstra's algorithm, you need some fancy mergeable-heap data structures, which are much more complicated, and thus more time-intensive to implement, than the naive version.
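
For contrast, here is a minimal sketch of the "easy" version most people actually write: a binary heap (std::priority_queue) with lazy deletion, which is O(E log V). The O(E + V log V) textbook bound instead needs a heap supporting decrease-key (a Fibonacci or pairing heap), which is far more code than this.

    #include <cstdint>
    #include <functional>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    // Sketch only: binary-heap Dijkstra with lazy deletion, O(E log V).
    // The textbook O(E + V log V) bound needs a heap with decrease-key
    // (Fibonacci / pairing heap), which is considerably more involved.
    using Edge = std::pair<int, uint64_t>;                   // (to, weight)

    std::vector<uint64_t> dijkstra(const std::vector<std::vector<Edge>>& g, int src) {
        const uint64_t INF = std::numeric_limits<uint64_t>::max();
        std::vector<uint64_t> dist(g.size(), INF);
        using Item = std::pair<uint64_t, int>;               // (distance, node)
        std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
        dist[src] = 0;
        pq.push({0, src});
        while (!pq.empty()) {
            auto [d, u] = pq.top();
            pq.pop();
            if (d != dist[u]) continue;                      // stale entry, skip
            for (auto [v, w] : g[u]) {
                if (d + w < dist[v]) {
                    dist[v] = d + w;
                    pq.push({dist[v], v});                   // re-push instead of decrease-key
                }
            }
        }
        return dist;
    }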

Or you can get insane low-level speedups by using the SIMD instructions that modern processors provide. Unfortunately, this takes a lot of time and leads to code that is not easy to understand (and thus not easy to write) for "classically trained" programmers.
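
As a toy illustration of that readability cost (a hedged sketch, not taken from any real codec): the same float reduction written the "classical" way and then with SSE2 intrinsics.

    #include <xmmintrin.h>   // SSE intrinsics
    #include <cstddef>

    // Toy illustration only: the same reduction, written plainly and then with
    // SSE intrinsics. The vector version handles four lanes per iteration but
    // hides the intent behind intrinsics and a scalar tail loop.
    float sum_scalar(const float* a, size_t n) {
        float s = 0.0f;
        for (size_t i = 0; i < n; ++i) s += a[i];
        return s;
    }

    float sum_sse(const float* a, size_t n) {
        __m128 acc = _mm_setzero_ps();
        size_t i = 0;
        for (; i + 4 <= n; i += 4)
            acc = _mm_add_ps(acc, _mm_loadu_ps(a + i));      // 4 floats at a time
        float lanes[4];
        _mm_storeu_ps(lanes, acc);
        float s = lanes[0] + lanes[1] + lanes[2] + lanes[3];
        for (; i < n; ++i) s += a[i];                        // scalar tail
        return s;
    }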

Yes, you indeed need a lot of skill to write such very fast code, but even for such ultra-smart programmers, finding and applying these optimizations takes a lot of development time, which is why it is usually only done for parts of the code that are insanely computation-heavy and performance-critical, such as video (and sometimes audio) codecs.


the most common cause is architecture, not algorithms

Define "normal people". Due to Chinese phones and sanctions and other geopolitical bullshit a significant part of the world is forced to use alternative app stores already. Yes, these people are very aware of "sideloading". (Due to Google's own previous moronic foot-shooting policy.)

