
Don't you want programmers to familiarize themselves now to prepare for the time when it does? Claude 3.5 Sonnet is getting close.


1954: "Don't you want power plant operators to familiarise themselves with this tokamak now to prepare for when it actually works?"

(Practical nuclear fusion was 10 years away in 1954, and it's still 10 years away now. I suspect in practice LLMs are in a similar space; everyone seems to be fixated on the near, supposedly inevitable, future where they are actually useful.)


AI and VR follow the same hype cycles. They always leave a trail of wasted money, energy, and resources. Only this time Gen AI is leaving a rolling coal-like trail of bullshit that will take some time to clean up. I am patiently waiting for the VC money to run out.


"From what I've seen of people running Linux on these things, it is definitely not something you'd want to develop on"

Can you please elaborate? I'm a programmer with a Linux Framework laptop (running NixOS, specifically).


The JH7110 is a multi-year-old SoC that is slower than a Raspberry Pi 3. It lacks many extensions for things people take for granted today (no hardware crypto, for instance, which in practice is a massive loss). So if you're OK with that, it will be fine. But most people probably aren't interested in making their expensive laptop perform worse than a 15-year-old device in every way.


It's slow to the point of being outpaced by an ARM SBC from 2016, and it's not even current with today's RISC-V spec. This is a curiosity, nothing more, but it will still be far and away the nicest (but not the only!) RISC-V laptop. Give me a Pi CM5 + 16GB RAM Framework motherboard carrier and I'll get out my credit card.

Benchmarks of the CPU in question: https://www.phoronix.com/review/visionfive2-riscv-benchmarks...


Completely ridiculous benchmarks for what people will use this board for. The xz compression and SQLite are the only slightly relevant tests -- and on those it's pretty close to a Pi 400.

Comparing an 8 GB JH7110 board to a 1 GB Pi 3 is beyond ridiculous. The Pi might win a few micro-benchmarks that use NEON, but not general purpose C code, and in real use 1 GB is incredibly limiting.

All Arm SBCs at present as far as I know -- certainly including the Pi 5 and RK3588 boards (Rock 5 etc) -- are far behind the current Arm ISA, as they implement ARMv8.2-A from 2016. And none of them even have the optional SVE vector ISA that was defined as part of ARMv8.2-A.

In contrast, the JH7110 implements mid 2019 RISC-V specs, plus some things from late 2021 (e.g. Zba and Zbb).

The SpacemiT 8-core SoC in the BPI-F3, Muse Book and others being released now implements RVA22+Vector, ratified in March 2023. The Canaan K230 (on e.g. the CanMV-K230 board) also implements the same RVA22+Vector spec.

Late this year the 16-core ~2.5 GHz P670 (A78-class) SoC will leapfrog anything available on currently known Arm SBCs. Milk-V says the base model of their Oasis SBC will be $119. Sipeed says a fully-kitted board will be $300.


There are better options[0] already announced for RISC-V laptops.

The chip there (the SpacemiT K1) is available in the Banana Pi BPI-F3, which is already shipping. It is RVA22 with the ratified V extension.

A laptop with JH7110 makes little sense today.

0. https://linuxgizmos.com/musebook-riscv-v-laptop-with-spacemi...


I'm a huge fan of Haskell and the language uses many concepts from category theory, such as monads, functors, and applicatives. However, these are all abstractions which are simple enough to not need any basis in mathematics. I tried to read resources on category theory to understand its motivation, but unlike real math where theorems actually seem to contain new insight, I never found such a thing in category theory.
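
To illustrate what I mean, here's a rough sketch of my own (class and method names prefixed with "My" so they don't collide with the Prelude; the real classes differ in details like operator names, but the shape is the same) showing how little machinery is actually involved:

```
-- Simplified stand-ins for Functor / Applicative / Monad,
-- with Maybe as the example instance.
class MyFunctor f where
  myFmap :: (a -> b) -> f a -> f b

class MyFunctor f => MyApplicative f where
  myPure :: a -> f a
  myAp   :: f (a -> b) -> f a -> f b

class MyApplicative m => MyMonad m where
  myBind :: m a -> (a -> m b) -> m b

instance MyFunctor Maybe where
  myFmap _ Nothing  = Nothing
  myFmap g (Just x) = Just (g x)

instance MyApplicative Maybe where
  myPure = Just
  myAp (Just g) (Just x) = Just (g x)
  myAp _        _        = Nothing

instance MyMonad Maybe where
  myBind Nothing  _ = Nothing
  myBind (Just x) g = g x
```

Nothing there requires knowing what a category is; the laws (e.g. fmap id = id) are plain equations you can check directly.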


The entire body of CT has just a small handful of what anyone would consider theorems or lemmas. It’s mostly just stacked definitions.

This is a set. This is a mapping. This is a set of sets. This is a mapping from elements to elements. This is a mapping from elements to sets. This is a mapping from sets to sets. This is the inverse mapping…. Chapter after chapter.

But then there’s “insight” that all of mathematics can be cast as CT, because… all math is just things that map to things! Whoa, far out, dude. Mind is blown.


I think ChatGPT's effectiveness for self-studying would depend on the subfield of pure math. For example, I believe real analysis is still best self-studied by just reading Baby Rudin and doing the examples and exercises. However, I really could not make much progress on topology until ChatGPT walked me through what the open set axioms actually meant in the context of metric spaces (which most of the topological spaces one encounters are); otherwise they just seemed very arbitrary.
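
Concretely, the way I'd now summarize it (my own paraphrase, not ChatGPT's): in a metric space (X, d) you call a set open when it contains a ball around each of its points:

```
U \subseteq X \text{ is open} \iff
  \forall x \in U \;\exists \varepsilon > 0 :\;
  B_\varepsilon(x) = \{\, y \in X : d(x,y) < \varepsilon \,\} \subseteq U
```

From that definition you can check directly that the empty set and X are open, that arbitrary unions of open sets are open, and that finite intersections of open sets are open; those three facts are exactly what the abstract open set axioms keep once the metric is thrown away.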


In my opinion, it is less dependent on the subfield than the textbook you use for that subfield. Unfortunately, math textbook recommendations are relatively subjective, with many popular choices unsuitable for self-study, or even study.

With regards to topology, your experience rings true. In short, anyone with knowledge of calculus / basic real analysis wanting to learn topology should read "Real Analysis" by Carothers.

Usually topology is taught after real analysis, with extending the results that hold on the reals as the main motivation. But the transition is quite abrupt without the intermediate context of metric spaces, leaving many people confused. It doesn't help that Baby Rudin is quite terrible at teaching these concepts. On the other hand, Carothers' book is a paragon of mathematical exposition. It excels at telling you why metric spaces, topological spaces, and all the definitions are made the way they are.

With regards to the parent, I have to say "Proof is left as an exercise" is probably the number one thing that forces students to actually read the texts. The best way to learn is to ask ChatGPT after you're stuck, not before.


I have worked through some of Carothers myself and like it a lot.

Are there other math books you think highly of that are similar, i.e. good for explaining why the definitions are the way they are as well as teaching the material?


IMO I would always rather have syntactic sugar in Haskell, since parse errors are usually not the kind of bugs that one keeps making. If I encounter a parse error, I will probably never write code that leads to the same parse error again, since the few I've encountered are easy enough to keep in mind and avoid.

I remember thinking that figuring out the indentation of a where clause in Haskell was almost impossible, since I was constantly getting it wrong at first, but now I almost never make a mistake with the where keyword.
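
For anyone hitting the same wall, a toy example of my own showing a layout GHC accepts: the guards and the where block start further right than the definition they belong to, and sibling bindings line up with each other.

```
-- layout sketch: the guards and the `where` block are indented past the
-- column where `bmi` starts
bmi :: Double -> Double -> String
bmi weight height
  | value < 18.5 = "underweight"
  | value < 25.0 = "normal"
  | otherwise    = "overweight"
  where
    -- `value` starts to the right of `where`; any sibling binding would
    -- have to line up with this exact column
    value = weight / (height * height)
```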


I definitely agree that it helps you write fewer bugs. When I first began using LeetCode, I was proud that I was able to start at medium and even solve the hard ones, since the usual tip is to grind the easy ones before progressing to medium and hard problems. However, reading up on how DSA interviews are conducted, I realized that I would probably be penalized for not getting my solution right within the first few tries, whereas my style of solving LeetCode problems at first was to get it right only after something like the sixth attempt. LeetCode problems are also a good way to learn new languages; I'm currently using them to learn Rust, and learning Haskell probably would've been smoother if LeetCode supported it.


I use Mullvad VPN and pay with Monero, so this was the first question on my mind. If you look at Mullvad's privacy policy, they make clear that paying with a card legally obligates them to store personal information about the customer. Obscura's FAQ says the payment method is Bitcoin, and Bitcoin can be traced back to an exchange unless it was mined.


Bitcoin or any cryptocurrency can be converted to monero, which can then be used to pay (or converted back to BTC to pay) via any number of shady coin conversion services.

Bitcoin can also be anonymized via a CoinJoin implementation like Samourai or Wasabi, although Wasabi is not recommended: https://medium.com/oxt-research/a-statement-on-two-discovere...

In fact, if you're truly trying to be as anonymous as possible, with privacy redundancy, coinjoining before converting to XMR is your best course of action.


If the creator of a service claims you can pay anonymously with crypto and doesn't offer XMR, they are not being earnest and intellectually honest, because we know they know XMR is the only legit, actually anonymous crypto. Why would anyone use this service when they can use Mullvad or IVPN and pay with XMR directly?


I think he may be referring to installing Nix itself, which does require root even if the intention isn't to install anything system-wide. I did once think about modifying the nix installer to let me set an arbitrary nix store because I wanted nix packages in a docker container I was debugging, but never really got around to it. Let me know if you know of somebody else who tried this.


So this is possible, but there are a lot of caveats. First, the installer itself explicitly says:

```
# Please don't change this. We don't support it, because the
# default shell profile that comes with Nix doesn't support it.
readonly NIX_ROOT="/nix"
```

I haven't seen any configurations where the entire /nix is relocated, but nix _does_ support relocating the store with the environment variable `NIX_STORE_DIR`.[1]

However, this means that you can no longer use the binary cache and *everything* you install has to be compiled from scratch, including glibc. The reason is that Nix usually patches paths like `/bin/myprogram` to `/nix/store/1238f...-myprogram-1.2.3/bin/myprogram` in everything that depends on `myprogram` during build time, to isolate the build outputs from the system. If you change your store, all those paths will now be invalid, including the hash part.

So using a nix store that isn't `/nix/store` is possible, but I don't think anyone is actually doing it except in a few select scenarios.

You can also compile nix itself with a different root. That will work as expected, but you still have the issue that you need to compile everything you install yourself.

[1]: https://nixos.org/manual/nix/stable/command-ref/env-common.h... (you can also relocate most other directories. The `prefix` in the paths is `/nix`)


Now that's interesting. I use Homebrew in a similar way. It does mean I have to compile a lot of things from scratch, but Homebrew has knowledge of which packages are relocatable and which aren't, so I get to use binary "bottles" for about 25% of the packages I install. I'll have to give this a try.

