
The other issue is the heating element. As the nozzle size gets larger, the rate-limiting factor is no longer the motion system and whatever adaptive control it uses, but how fast you can melt the plastic.

Most consumer 3D printers can't really take advantage of these large nozzles, although you could print slowly and it'd still be cool!
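
For a rough sense of the numbers, here's a back-of-the-envelope calculation in Python (the melt rate and nozzle dimensions below are illustrative assumptions, not measured values):

    # Max print speed is capped by how fast the hotend can melt plastic.
    max_flow = 15.0       # mm^3/s, assumed melt rate of a consumer hotend
    line_width = 1.2      # mm, assumed extrusion width for a large nozzle
    layer_height = 0.6    # mm, assumed layer height

    max_speed = max_flow / (line_width * layer_height)
    print(max_speed)      # ~21 mm/s: melting, not motion, is the bottleneck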


You want a really long meltzone like a Chube hotend.


I'd like to think karma. Nothing will spoil your perception of a company faster than a toxic waste dump of GE-manufactured PCBs in your hometown.

I'll never be able to understand the "Genius of Jack Welch", but I don't suppose I'm really missing that much :)


Oh, the Genius of Jack Welch is simple. Lie to investors while dismantling the company you're in charge of. Use the profits from selling off the dismantled bits to cook the books so it looks like you're becoming more profitable; this will get you lots of bonuses and you'll become super rich. Then just walk away before anyone figures out your bullshit. Sure, you've destroyed hundreds of thousands of jobs and ruined one of the most profitable companies in the world, but you're rich now, so fuck it. Pure genius.


Since Welch climbed the corporate ladder over the years, this says a lot about the rot predating him as well.


If you ever try to get a Knuth check, you'll receive back a printed copy of your email, along with his handwritten notes and a response. Really cool!


I once received such a letter, in which Knuth explained how my bug report was in fact mistaken. I wrote back to him (on the same printout paper) to thank him for his reply, and included a check for $2.56. He cashed it!


(-:


I got back a comic, and another time a T-Shirt (with the MMIX instruction set).


Given my experience in tech, it doesn't surprise me that 75% of SWEs report retaliation, but credit to the 100% of people with integrity who spoke truth to power and risked paying the price. The world would be a much better place (in my humble opinion) if more people spoke up when things were wrong and were less afraid of losing a job at a company that crossed an ethical line they hold. I know that's a luxury opinion, but if you did the right thing and lost something important, that's a company throwing away the best of humanity, and clearly a toxic culture.


More great corporate culture would be awesome. But it's very hard for the employee to distinguish talk and policy from reality on the ground. So even when the corporate culture and incentives are actually good and clean, it's risky for the employee to speak up.

What might help is more communication about what proportion of complaints or reports led to which outcomes, in a "measure it if you want to see it happen" manner. The measure will then be gamed, but hopefully that's only a second-order effect.


I've both worked with infrastructure as code (Pulumi) and spent several years as a grad student researcher in bioinformatics, and I've developed the following take:

Biology is a messy, tangled, sloppy system built over a billion years under evolutionary pressure. There's no clear analogy to intelligently designed software, and anytime you make an analogy, like DNA == source code, there's a mechanism that destroys its predictive power for explaining biological phenomena. Take DNA: computer software doesn't create the machine it's executed on, and code is 1D, while DNA is definitely multi-dimensional, where its folding, epigenetic modifications, and other modifications matter a lot.

All the interesting biology for complex animals happens during the first few stages of development. There's no computational equivalent to this recursively constructive process. Additionally, biology has a single guiding principle through which we understand everything: evolution, and using computer analogies really diminishes that.

Therefore, biology is biology. It's not analogous to a von Neumann architecture machine, or any other computing device we've created. The first principles are simply different.


Thank you @wespiser_2018, spot on.

My career is in both fields, plus a PhD in one. Tortured analogies of biology as computers make me cringe; they're misleading at best. Sure, the ribosome superficially looks like an FSM, but that gets you basically nowhere.

Comp-sci people: if you're curious about biology, spend some quality time at Khan Academy or edX, watch some university intro-bio lectures on YouTube, read an intro-level undergrad biology textbook, etc.


I was reading "The Origin of Knowledge and Imagination" and the author was reflecting on the same thing. The mind is not a computer; you do not think like a computer. In fact, there is no separate concept called "the mind". The whole body is part of our perception-and-action ecosystem. It makes sense: software, while imperfect, is too orderly and simplistic to be a good analogy.


People go to prison for three-ish reasons: to reform the criminal, to deter crime, to offer restitution in the form of punishment to the victims of the crime, and arguably to keep society safe.

I feel bad that Sam is going to go away for life, and ideally we'd live in a society where we don't need prison to reform people, but Sam is a case where there is a considerable deterrent effect as well as a public-protection interest. When someone steals so much from so many and admits no fault, that's not a person who can go to a day program and reform their ways; they need severe consequences to get them to honestly reform.

There's also what the victims say should be done. Thousands and thousands of people felt pain, and they will get a say in the sentencing. If they all forgive, sure, it's okay to give him a light sentence, but the degree of suffering SBF caused is so profound that people will want justice from an unrepentant fraudster who stole their future.


I assume some people killed themselves because of the financial ruin his fraud created. I have to imagine a lot of people's lives fell apart, relationships were destroyed, families broken up. Because of the choices this man made to enrich himself.

Victims of crime deserve revenge in some part and the state is the mechanism to deliver that revenge. We should always remember the victims of crimes first.


R is fine; it's no more absurd than other untyped languages like JavaScript. Most languages are very good at one or two things, then not so good or appropriate for other tasks. For R, that's statistics, modeling, and exploratory analysis, which it absolutely crushes due to ecosystem effects.


Well… I also consider JavaScript to be a horrible language. Python is horrible as well, but better than R. IMO Python and JavaScript are in the same ballpark.

Not all non-typed languages are bad. Clojure, for example, is one of the most elegant languages I've worked with (despite my dislike of the JVM).


I worked in data science for a few start-ups, and even though I know Python (it's my LeetCode language of choice), R just dominates when it comes to accessing academic methods and computational analysis. If you are going to push the boundaries of what you can and can't analyze for statistical effects and leverage academic learnings, it's R.


"Simulation first" is how I did things when I worked in data science and bioinformatics. Define the simulation that represents "random", then see how far off the actual data is using either information theory or just a visual examination of the data and summary statistic checks. That's a fast and easy way to gut check any observation to see if there is an underlying effect, which you can then "prove" using a more sophisticated analysis.

Raw hypothesis testing is just too easy to juke by overwhelming it with trials. Lots of research papers have "statistically significant" results but make no mention of how many experiments it took to get them, or give any indication of negative results. Given enough effort, there will always be an analysis where you incorrectly reject the null hypothesis.
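
To put a number on how easy that juking is (alpha and the trial count here are arbitrary examples):

    # Probability of at least one false positive across m independent
    # tests when the null hypothesis is true in every one of them:
    alpha, m = 0.05, 20
    print(1 - (1 - alpha) ** m)   # ~0.64, from just 20 attempts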


The difficulty of teaching statistics is that the maths you need to prove the methods are right and to gain an intuitive understanding of them is far more advanced than what's presented in a basic stats course. Gosset came up with the t-test and proved to the world it made sense, yet we teach students to apply it in a black-box way, without a fundamental understanding of why it's right. That's not great pedagogy.

IMO, this is where Bayesian statistics is far superior. There's a Curry-Howard isomorphism to logic which runs extremely deep, and it's possible to introduce it using conjugate distributions with nice closed-form analytical solutions. Anything more complex, well, that's what computers are for, and there are great ways (Stan) to fit models far more intricate than frequentist methods allow.
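
As a concrete example of a conjugate pair with a closed-form update, here's the Beta-Binomial case in Python (the prior pseudo-counts and data are made up for illustration):

    from scipy import stats

    # A Beta(a, b) prior over a coin's bias, updated on k heads in
    # n flips, stays Beta: the posterior is Beta(a + k, b + n - k).
    a, b = 2, 2      # assumed prior pseudo-counts
    k, n = 7, 10     # hypothetical data

    posterior = stats.beta(a + k, b + (n - k))
    print(posterior.mean())           # posterior mean, ~0.64
    print(posterior.interval(0.95))   # 95% credible interval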


> There's a Curry-Howard isomorphism [between] logic [and Bayesian statistical inference].

This is an odd way of putting it. I think it's better to say that, given some mostly uncontroversial assumptions, if one is willing to assign real number degrees of belief to uncertain claims, then Bayesian statistical inference is the only way of reasoning about those claims that's compatible with classical propositional logic.


The willingness to assign real numbers to degrees of belief is itself the controversial assumption. Converted Bayesians tend to gloss over this fact. Many, as in a sibling comment, state that MLE is Bayesian statistics with a uniform prior, but this isn't true of most if not all frequentist inference, which is based on frequentist NHT and CIs, not MAP. Modeling uncertainty with uniform priors (or even more sophisticated non-informative priors à la Jaynes) is a recipe for paradoxes, and there is no alternative practical proposal that I know of. I have no issue with Bayesian modeling in an ML context of model selection and validation based on resampling methods, but IMO it doesn't live up to the foundational claims its proponents often make.


Maximum likelihood (which underpins many frequentist methods) basically amounts to Bayesian statistics with a uniform prior on your parameters. And the "shape" of your prior actually depends on the chosen parametrization, so in principle you can account for non-flat priors as well.
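
A quick sanity check of that claim for a Bernoulli model, with hypothetical data (a flat Beta(1, 1) prior adds log(1) = 0 to the objective, so the MAP coincides with the MLE):

    import numpy as np
    from scipy.optimize import minimize_scalar

    k, n = 7, 10  # hypothetical data: 7 heads in 10 flips

    # MLE: maximize the Bernoulli log-likelihood numerically.
    neg_loglik = lambda p: -(k * np.log(p) + (n - k) * np.log(1 - p))
    mle = minimize_scalar(neg_loglik, bounds=(1e-9, 1 - 1e-9),
                          method="bounded").x
    print(mle, k / n)  # both ~0.7: flat-prior MAP equals the MLE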


IMHO, the discussion should not be so much whether to teach Bayesian statistics or maximum likelihood, but whether to teach generative models or to keep going with hypothesis tests, which are generally presented to students as a bag of tricks.

Generative models (implemented in e.g. Stan, PyMC, Pyro, Turing, etc.) split the model from the inference, so one can switch from maximum likelihood to variational inference or MCMC quite easily.
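
A minimal sketch of that split using PyMC, one of the libraries named above (the toy data and priors are made up; the point is that the model block stays fixed while the inference call changes):

    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    y = 1.0 + 2.0 * x + rng.normal(0, 0.3, size=x.size)  # toy data

    # The generative model is written once...
    with pm.Model():
        alpha = pm.Normal("alpha", 0, 10)
        beta = pm.Normal("beta", 0, 10)
        sigma = pm.HalfNormal("sigma", 1)
        pm.Normal("obs", alpha + beta * x, sigma, observed=y)

        # ...and the inference method is swapped independently:
        map_est = pm.find_MAP()           # point estimate
        approx = pm.fit(method="advi")    # variational inference
        trace = pm.sample(1000)           # MCMC (NUTS)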

Generative models, beginning with regression, make a lot more sense to students and yield much more robust inference. Most people I know who publish research articles on a frequent basis do not know that p-values are not a measure of effect size. This demonstrates that current education has failed.


Maximum Likelihood corresponds to Bayesian statistics with MAP estimation, which is not the typical way to use the posterior.

