rbabich's comments

Look past the silly title, and pick up a copy of "Baby 411": https://www.amazon.com/dp/1889392723/

It's coauthored by a pediatrician, so the recommendations are evidence-based, but it's also practical and relatively concise.

[Disclaimer: The edition I read was published over a decade ago, but the latest edition doesn't appear to be radically different.]


A graphic in the middle of the video suggested that the herbicide being used (in one particular case) was acetic acid, i.e. vinegar.


Oh nice, okay. My main concern was continued use of existing pesticides. Acetic acid, assuming it works, actually sounds pretty great!



think.com used to be Thinking Machines. When they folded, their data-mining software business was sold off to Oracle, along with the domain:

http://www.informationweek.com/oracle-buys-data-mining-techn...


There are a couple of (related) reasons. The first thing to recognize is that systems such as this one are designed for parallel workloads where all processes are running in lockstep, communicating via MPI with frequent barriers. This is very different from MapReduce and other asynchronous or "embarrassingly parallel" workloads where GFS, HDFS, etc. tend to be used. Distributed filesystems used in high-performance computing (such as Lustre, IBM's GPFS, etc.) also have to be able to handle both reads and writes with high throughput, whereas GFS is mostly optimized for reads and appends.

Why not just install disks in the compute nodes and run Lustre there? Since all the nodes are working together in lockstep, system jitter is a major problem. Imagine that you have a job running across 10,000 nodes and 160,000 cores, and a process on one of those cores gets preempted for a millisecond while a disk I/O request is being serviced. Everyone waits, and you've suddenly wasted 160 core-seconds. Now, if this happens only 1000 times per second across the whole machine, it's clear that you're not going to make much forward progress, and the whole system is going to run at very low efficiency. For this reason, Crays and similar large machines run a very minimal OS on the compute nodes (a Linux-based "compute node kernel" in the case of Cray). Introducing local disks would go against the whole philosophy.
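The arithmetic above can be sketched quickly. (These are the hypothetical numbers from the example, not measurements from a real machine.)

```python
# One core gets preempted for 1 ms; at the next MPI barrier,
# every other core in the job waits for it.
cores = 160_000
stall_s = 1e-3                      # 1 ms preemption on a single core
wasted_core_s = cores * stall_s
print(wasted_core_s)                # 160.0 core-seconds per event

# If such events happen 1000 times per second somewhere on the machine
# (and the stalls don't overlap), essentially all time is spent waiting:
events_per_s = 1000
fraction_lost = min(1.0, events_per_s * stall_s)
print(fraction_lost)                # 1.0 -> no forward progress
```

The "don't overlap" assumption is the worst case; in practice stalls partially overlap, but the qualitative conclusion (low efficiency) is the same.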

There's also the issue of network contention. The network is typically the bottleneck, and you want to minimize the extent to which file I/O competes with your MPI traffic.

As someone else mentioned, the solution is to have a dedicated storage system (often Lustre running on a semi-segregated cluster). This approach is used almost universally by the 500 systems on the Top 500 list (http://top500.org), for example. It's not just inertia :-).


Disk I/O has negligible CPU overhead. Preempted for a millisecond? A millisecond is millions of instructions; you're off by orders of magnitude. No matter where the disk is located, the disk I/O has to go across the network. If network capacity truly is the bottleneck, you have a different design problem and you can't exploit the CPUs.

EDIT: I still don't buy it, but I will give some thought to the synchronous/lockstep nature of the environment.


That was a straw man (sorry). There's also the overhead of maintaining consistency, synchronizing metadata, etc. I don't think assuming 0.1% CPU overhead for Lustre is a terrible estimate, but even if it were much lower, the argument would still hold (at least at the scale of Titan).


As an author of that paper, I can tell you that the code generator was rather simple and mainly used to perform loop unrolling, avoid explicit indexing, and replicate bits of code that couldn't quite be encapsulated in inline functions. It's possible to go further, but this sort of metaprogramming doesn't really eliminate the need to write in CUDA C.

For what it's worth, we long ago abandoned Scala in favor of Python for the code generator, just to make it more accessible to others interested in working on the project (generally particle physicists by training): http://lattice.github.com/quda/
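To illustrate the kind of code generation described (loop unrolling without explicit indexing), here is a toy Python sketch. The function name, the emitted kernel, and the unroll scheme are hypothetical examples, not code from QUDA:

```python
def unroll_axpy(n: int) -> str:
    """Emit C code for y[i] += a * x[i], unrolled n times.

    Purely illustrative: a generator like this replicates the loop body
    so the compiler sees straight-line code with constant offsets.
    Assumes len is a multiple of n.
    """
    body = "\n".join(f"    y[i + {k}] += a * x[i + {k}];" for k in range(n))
    return (
        f"void axpy_unrolled(float a, const float *x, float *y, int len) {{\n"
        f"  for (int i = 0; i < len; i += {n}) {{\n"
        f"{body}\n"
        f"  }}\n"
        f"}}\n"
    )

print(unroll_axpy(4))
```

Running it prints a C function whose loop body contains four explicit update statements instead of one.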




This doesn't address the fact that you are making a logical fallacy. You can't do a calculation assuming the ball bounces infinitely often and then, after the calculation, go back and say the ball stops bouncing, because that invalidates your original calculation.


Of the responses so far, this one is closest to being correct. Rather than "asymptotically approaching zero," however, the height of the bounce will quickly converge precisely to zero. Assume that the previous bounce (up and back down) took time t. Then the ball will stop bouncing after time t/(1-sqrt(0.6)) ~ 4.4t. After that, the ball will simply continue moving ("rolling") to the right. This follows from summing the geometric series 1 + sqrt(0.6) + sqrt(0.6)^2 + sqrt(0.6)^3 + . . . , where 0.6 = (1 - 0.4) is the ratio of the height of the next bounce to the current bounce, and we take the square root since height and time are related by h = 1/2 at^2.
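As a sanity check, the series sum can be verified numerically. A quick Python sketch, with the previous bounce time t normalized to 1:

```python
import math

r = math.sqrt(0.6)   # ratio of successive bounce times (height ratio is 0.6)
t = 1.0              # time of the previous bounce, normalized

# Closed form: t * (1 + r + r^2 + ...) = t / (1 - r)
total = t / (1 - r)

# Partial sum with many terms converges to the same value
partial = sum(t * r**n for n in range(200))

print(round(total, 2))   # 4.44, i.e. ~4.4t as stated above
```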

Incidentally, for anyone who has a ping pong ball handy, this is very close to what happens in real life.

Edit: To clarify, it's the parent's "old, wrong answer" that's closer to being correct. btilly (below) also has it right.


No, you are making the same mistake as some of the other people. By summing a geometric series, you are saying that the ball bounces infinitely often and that the time for those infinitely many bounces is such-and-such. But if it bounces infinitely often, then it never stops to roll on the floor: if it did stop and roll on the floor in a finite amount of time, you wouldn't have an infinite series to sum, which would mean the energy in the vertical direction reaches zero in finitely many bounces, contradicting the problem statement and part of your original reasoning.


The series is infinite, but the sum of the series can still be finite. These two things are not at odds.


Ya, and what are the terms in the series representing? The air time of each bounce? Your calculation makes it clear that it is. So you are calculating the air time for infinitely many bounces; the key word here is infinite, i.e. the ball bounces up and down, up and down, infinitely often. If the ball bounces infinitely often, how can it stop and roll on the floor? Infinitely many bounces means it does not stop after finitely many bounces and roll on the floor. Your calculation for the time is conflating two things: air time and bouncing. You can calculate the air time assuming infinitely many bounces, but then you can't claim that the ball stops bouncing at t = whatever, because you calculated t = whatever assuming the ball never stopped bouncing, and then after the calculation you went back and changed your assumption. That is a logical fallacy if I ever saw one.


1) The ball stops bouncing in a finite amount of time.

2) The ball bounces an infinite number of times.

3) This is not a contradiction, no more than the idea that a projectile passes through an infinite number of spatial points in finite time. Please go and read about geometric series and Zeno's paradox on wikipedia, as another commenter has already suggested.


No rolling without friction.


