Hacker News | tubby12345's comments

>I suppose the target groups of people studying his introductory book is not any random junior student at a random CS department

I have never understood this take on CLRS because I was basically this archetype: I went back to school for an MS in CS, at a mediocre state school, after a BS in math during which I never took a single programming class, and my first algos class was out of CLRS. I literally taught myself how to program the summer before by going through the Django tutorial, then jumped straight into algos. I wasn't some kind of math genius either (or else I wouldn't have jumped ship after the BS). But there's not a single proof in the book that spans more than a page or two, so I cannot fathom what people find difficult about digesting it. And the exercises are basically Leetcode problems, well known to everyone by now.

After many years of referring to the book over and over (during interview prep seasons) and helping other people through parts of it, I think the reality is that most CS students just don't understand that there's more to CS than JavaScript and Django, and CLRS is more a shock than a challenge.


I think the problem is that CLRS is extremely dry and can easily put you to sleep, so people do just enough to pass or get a good grade and then forget about it. As a math grad, I think you had the advantage of having a lot more theory. Ideally more algo classes would combine with practical stuff like the YouTube channel Coding Train or the book Nature of Code by Shiffman, to make things interesting for people who like to see things happen immediately.


>As a math grad, I think you had the advantage of having a lot more theory.

The funny thing is that I didn't. During my math BS I sucked at exactly the kind of combinatorics (I figured that out in the first few weeks of stats, where you have to count things cleverly) and graph-theory-type stuff, so I avoided all of it. I excelled at analysis, but CLRS has very little of that (maybe a couple of sections on convergence of numerical algos?).

What I did have was "mathematical maturity" enough to read proofs, but like I said there are so few that it doesn't matter IMHO.


> people who like to see things happen immediately

Our modern culture and lifestyle cultivates a desire for instant gratification in ways that make rigorous study of math and theory incredibly hard for most people who don't have the guiding structure of a math degree programme to counter-balance these forces.


You're wrong and so was he, and you definitely shouldn't weigh in since I'm sure you're not nearly as qualified as he was (neither wrt physics nor virology):

https://en.m.wikipedia.org/wiki/Bell_test


Why do people incessantly complain about this? It's by far the lowest brow complaint I've ever seen about anything - "some hypothetical person might not be able to Google". Note they're never complaining on their own behalf because they have the OP's link. So it's a case of "won't someone please think of the poor Googler".

In 20 years of googling there's been exactly one project I had a hard time finding (https://dl.acm.org/doi/10.1145/390016.808445). It just does not happen.


I remember having trouble googling for examples of Nice and Clean code back when I was interested in those languages, because even adding "programming" and similar to the mix wasn't really helpful. But that was also a while ago; Google has gotten better at contextual keywords since.



yes, but (a) that is now; the languages themselves had their heyday when search engines were a lot less good, and (b) it wasn't finding the language homepage itself that was hard, it was finding third party pages with code examples, blog posts etc.


>yes, but (a) that is now

As far as I can tell the complaint is being made now, not 10 years ago. But maybe I'm wrong and actually I've woken up in the past - please let me know if that's the case.


It can matter. One thing that is hard to impossible to Google is Alonzo Church's early-1970s paper on a hyperintensional logic he called LSD.


https://www.google.com/search?q=Alonzo+Church+Logic+of+Sense...

Took me exactly two searches - one to figure out what lsd stood for (which I'm sure you knew, so you wouldn't have the same issue) and then one more to pull up the paper.


>Why do people incessantly complain about this?

Maybe because the number of those names keeps growing and it's becoming overwhelming?

>"some hypothetical person"

It's not a hypothetical person. I constantly stumble on strange names, and frequently in contexts where I don't care what something is called as long as I know what function it performs (even approximately). And strange names appear more and more, and the main questions I usually have are: What is it? What does it do?

The same happens with websites for new products. For all the great slogans like "It will improve your productivity" and similar sentences, which I have no interest in reading, there's not a single mention of what the product actually does or what it's all about.

Thus the idea to use a prefix-Name format in descriptions. What do you think about that idea? Would it improve the educational function? Would it be easier to understand than searching for the term?


qq: i know that for HLS sometimes (most?) the generated HDL is 10x the number of resources (flip-flops?) compared to hand-written HDL. how bad is clash in this respect?


Clash is not HLS. You have full control of register placement and pipelining, just like in VHDL and Verilog. In that sense Clash is not "higher level" than VHDL or Verilog. In some respects you could even say that Clash is "lower level", because you don't have to write things just right for the tools to infer them correctly. You actually specify what hardware you want. E.g. you write "I want a blockram of this size here", rather than "if I write this specific Verilog, the tools will infer a blockram".

What Clash gives you is the power and tooling of Haskell.


>Clash is not HLS

i don't understand - how do you generate the bitstream if you're not generating verilog or vhdl first?


Clash does generate Verilog or VHDL but the only reason it does this is to interface with vendor tooling.

HLS generally means you compile a very high-level description of a computation to VHDL/Verilog. This high-level description doesn't contain hardware details like registers, RAM usage, pipelining, etc. During synthesis the HLS tool will try to translate this description into a digital circuit, itself placing registers and RAMs and pipelining as necessary.

That is the reason HLS doesn't reach the performance of VHDL/Verilog: these HLS tools just aren't as good as a human at designing the digital circuit.

Clash is not itself coming up with a digital circuit the way HLS does. The developer is specifying the digital circuit, just like with VHDL or Verilog. It's just an alternative way of writing it.


got it - it's like chisel. thanks


Looking at your posting history I can be a little more concrete: Chisel is essentially a metaprogramming framework for VHDL/Verilog. Clash is a compiler closely based on GHC that compiles Haskell code, not a DSL defined within Haskell, to VHDL/Verilog.


you're saying seemingly contradictory things (that would best be resolved for me if i just dug into clash, so i will):

>HLS generally means you compile a very high level description of computation to VHDL/Verilog.

...

>Clash is not itself coming up with a digital circuit like HLS is doing. The developer is specifying the digital circuit.

...

>Clash is a compiler closely based on GHC that compiles Haskell code

what does it mean for clash to compile haskell code but not to come up with a digital circuit? haskell code (afaik) doesn't represent combinational or sequential logic. well maybe it does using clash (thanks to haskell's crazy metaprogramming facilities) but then what does it mean to "compile"? verilog isn't compiled, it's still synthesized to gates and luts and whatever right? what does clash compile to natively if not verilog/vhdl (which then gets synthesized)?


It's not contradictory. A Haskell function (modulo IO and unbounded recursion) gets compiled to combinational logic without registers. A digital circuit in Clash is normal Haskell functions combined together with registers and other combinators. Clash can compile to an executable as well as to hardware; the executable is a cycle-accurate simulation of the circuit.
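The combinational-function-plus-registers model is easy to sketch outside Haskell too. Here's an illustration in plain Python (not Clash code, and not Clash's actual API; just the concept): a pure function is the combinational logic, and a register is explicit state threaded from one cycle to the next, which also gives you the cycle-accurate simulation for free.

```python
# Illustration only: a circuit as a pure combinational step function plus
# explicit register state, simulated cycle by cycle.

def accumulator(state, x):
    """Mealy-style step: combinational logic computing next state and output."""
    next_state = state + x          # pure combinational logic
    return next_state, next_state   # (value latched into the register, output)

def simulate(step, init, inputs):
    """Cycle-accurate simulation: thread the register state through each cycle."""
    state, outputs = init, []
    for x in inputs:
        state, out = step(state, x)
        outputs.append(out)
    return outputs

print(simulate(accumulator, 0, [1, 2, 3, 4]))  # running sums: [1, 3, 6, 10]
```

The point is that the developer placed the register explicitly (the `state` threading); nothing here "discovers" a circuit the way an HLS tool would.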


Not exactly. Chisel is a DSL while clash is not. But it's at the same level of circuit design abstraction.


>I don't know what is tapeout

The person you're responding to isn't an FPGA dev (or at least not primarily). They're talking about verilog for ASIC design where the last step is is making the lithography mask that "tapes off" parts of the silicon substrate (like a painter tapes parts of a wall when painting).
I play in the space (Chisel and FIRRTL and CIRCT) so I agree with you but you're being far too dismissive of the people you're aiming to convert.

>But as a language Clash can do everything Verilog does.

Ironic, since people say the exact same thing about Haskell vs e.g. Python, and yet we still don't have wide Haskell adoption.

You have to deeply internalize that a PL or HDL is a tool. Thus, this position makes zero sense:

>its ability to work together with vendor-specific tools and other ecosystem stuff, I don't know much about this

No one uses tools that don't fit somehow into their workflow. Further, if the users of the tool are happy with their current toolset, then you have a very hard row to hoe in convincing them to adopt yours.


Sorry if it came out as dismissive. I only wanted to show that my claim is specifically about the language, not the current state of tooling, which I'm not as familiar with (and which is important).


Coincidentally enough, I've been neck-deep in this area for the last month (I haven't read this paper, but I've read others from the group). There are a bunch of related things around this that look vaguely the same: abstract interpretation, staged execution, symbolic execution, and partial evaluation.

I'll give a use case (the use case I'm interested in) that might help to illustrate the point: dynamic neural nets. How do you optimize neural nets that have control flow and dynamic shapes (variable-size tensors)? Most (all?) hardware fast paths (GPUs, accelerators, SIMD registers) have hard (pardon the pun) requirements on memory layout; e.g., if you want to use vector/SIMD units on CPUs effectively, you have to align and chunk your data to fit into the fixed-width registers. So for nets that have dynamics it's pretty hard to generate machine code that uses these things - most people pad to some "good enough" width and pay the price in throughput.
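To make the padding workaround concrete, here's a tiny Python sketch (hypothetical helper names, purely illustrative): variable-length rows get padded up to a fixed bucket width so a fixed-shape kernel can run over them, and the wasted fill slots are exactly the throughput price being paid.

```python
# Hypothetical sketch of the common workaround: pad variable-length rows
# up to a fixed "good enough" width so fixed-shape (SIMD/GPU) kernels apply.

def pad_to_bucket(row, bucket=8, fill=0):
    """Pad a row up to the next multiple of `bucket` elements."""
    pad = (-len(row)) % bucket
    return row + [fill] * pad

rows = [[1, 2, 3], [4, 5, 6, 7, 8, 9, 10], [11]]
padded = [pad_to_bucket(r) for r in rows]

# Every row is now bucket-aligned; `waste` counts the fill slots the
# fixed-shape kernel will process for nothing.
waste = sum(len(p) - len(r) for p, r in zip(padded, rows))
```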

All of these techniques speak to this problem (among others, obviously) by inferring as much as possible at compile time and kicking the can to runtime for the rest. Partial evaluation, the one I'm most familiar with, propagates (usually over an IR) as much statically known information as it can infer (depending on the quality of the implementation), and simplifies thereby (think substituting known lengths of lists to simplify comparisons). After the partial-eval loop hits a fixed point (no more changes discovered), you have a representation of the neural net that still contains unbound variables (for some tensor shapes), but also code for quickly determining the remaining shapes at runtime.
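A toy version of that fixed-point loop can be sketched in a few lines of Python (an illustrative mock IR, not any real framework's representation): known shapes propagate through the ops until nothing changes, and whatever is still unknown stays symbolic for runtime.

```python
# Minimal sketch of shape partial evaluation on a toy IR (illustrative only):
# propagate statically known dimensions until a fixed point; unknown (None)
# dimensions remain symbolic and must be resolved at runtime.

def propagate(shapes, ops):
    """One pass of shape inference; shapes maps name -> int or None (unknown)."""
    changed = False
    for op, out, a, b in ops:
        if op == "matmul_cols":  # columns of (a @ b) equal the columns of b
            if shapes.get(out) is None and shapes.get(b) is not None:
                shapes[out] = shapes[b]
                changed = True
    return changed

# x has an unknown (dynamic) dimension, but the weight shapes are static.
shapes = {"x": None, "w1": 128, "h": None, "w2": 10, "y": None}
ops = [("matmul_cols", "h", "x", "w1"), ("matmul_cols", "y", "h", "w2")]

while propagate(shapes, ops):  # iterate to a fixed point
    pass
# h and y are now known at "compile time"; x stays symbolic until runtime.
```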

A really good pedagogical paper/dissertation is from the guy responsible for large parts of TVM:

https://digital.lib.washington.edu/researchworks/handle/1773...

Also these blogposts on how abstract interpretation works in Julia are good

https://mikeinnes.github.io/2020/07/29/mjolnir.html

https://aviatesk.github.io/posts/data-flow-problem/?s=09

The last is pretty formal but good if you're comfortable with abstraction (no pun intended).

And here's one of their papers on staged programming for NNs

https://research.google/pubs/pub47990/


no, it means vaguely what the person you've responded to is alluding to - window controls and such that mimic the controls of the user's OS. incidentally, I agree with them - what's the point if all apps are different anyway (despite uniformity in window controls)? but i realize i'm an outlier - 90% of tech users probably don't have the patience to learn new UIs over and over again.


I feel like this is greatly exaggerated. I did half of that in 3 days (500km) with the last day being 250km (because I was sick of doing it) and I didn't feel any such "caloric furnace". I ate a couple of pbjs each day and then had some rice and beans for dinner. I didn't do this very intentionally, I just didn't have much choice (I did this ride in Africa, where there aren't many ice cream shops).

It's universally understood that a century is one of the easiest "feats of endurance", which is why you see people far into their 50s and 60s still able to do them.


>> I feel like this is greatly exaggerated. I did half of that in 3 days (500km) with the last day being 250km (because I was sick of doing it) and I didn't feel any such "caloric furnace"

I think this depends on your fitness, especially your aerobic and anaerobic thresholds and the shape of your lactate curve. Less fit people burn a lot of glycogen even at lower wattages which results in crazy hunger when your glycogen stores are depleted. Trained people are able to burn mostly fat at relatively high wattages and even skinny people have a lot of energy stored in fat.

Personal anecdote: I decided to use the pandemic to get back into shape. When I started accumulating endurance kilometers, I had to eat a carbo-gel every 30 km, and I still felt really hungry after the ride and had to devour an enormous portion of pasta immediately. Now I can easily go 100 km on an empty stomach, and I do feel kind of hungry after that, but nothing exceptional.


True. It takes a few years after you stop getting much faster, but eventually the body settles into a habit of simply not running itself as deep into super-low blood glucose as it used to, thanks to shifting upwards the threshold where fat conversion is phased out. You still arrive from a 5h+ ride Very Hungry (which is part of the joy, because everything tastes so good!), but not quite as badly as you used to from a 2h ride.


Drinking is another matter though. You really need your fluids on a trip like that.


Both very true, but also overrated in a way. When it's cold enough to avoid overheating without sweating, and the body is sufficiently used to the task at hand not to trigger much proactive sweating from an exertion that's actually just barely enough to keep you warm, you can get by with surprisingly little drinking. On one rainy 11h ride I only increased my drinking frequency in the last hour or so, when I realized that a bike with empty bottles would be easier to carry up the stairs.


quite true - i drank about a liter an hour throughout the trip and urinated pretty infrequently lol


can you give an example instance? I'll test on both.


For the vector fields, just about any one with a couple of singularities would work. The moment you set unequal axes, Mathematica blows up.

Sorry, but I am home at the moment. If you mail me during the week I shall find examples for you.

The matrix was something I saw happen, I do not have the example handy, sorry.


I work on compilers, and I have lines in papers like "runtime performance is important for many heavily used services [1,2,3,4]". The reason you do this is to preempt some annoying reviewer whose area is some other dimension of the same problem (e.g. correctness).

