
Correct me if I'm wrong here, but I am under the impression they're only non-deterministic in the practical sense (i.e. it produces this output on my machine, and I can't know what minute differences there are on your machine), but that's not non-deterministic in the truest sense. If you have completely identical inputs you will get the exact same output; ergo, deterministic.


You are correct. Compilers are deterministic, but reproducible builds can be a challenge.
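To illustrate the distinction with a toy sketch (nothing here is from a real build system): the "compiler" below is a pure function of its input, so it always produces the same hash; letting an environmental input like a timestamp leak into the artifact breaks reproducibility, even though each run is still deterministic given its full inputs.

    import hashlib, time

    def build(source, embed_timestamp=False):
        # Stand-in for compilation: the artifact is a pure function of the source...
        artifact = source
        if embed_timestamp:
            # ...unless an environmental input sneaks in (think __DATE__ in C).
            artifact += "\nbuilt_at=" + str(time.time())
        return hashlib.sha256(artifact.encode()).hexdigest()

    src = "int main() { return 0; }"
    print(build(src) == build(src))              # True: identical inputs, identical output
    print(build(src, True) == build(src, True))  # almost always False: the clock differs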


This is just patently untrue. The failure rate of venture-backed startups is 75%; the failure rate of all startups is 90%. Funding is correlated with a lower failure rate.

https://www.failory.com/blog/startup-failure-rate


You can use statistics to make inferences about groups as a whole, but you can’t use stats to deduce anything about any specific case.

A coin has a 50/50 chance of being heads, but just because the coin landed on tails last time doesn’t mean it’ll be heads the next time.

Here is the link I should have included originally: https://en.m.wikipedia.org/wiki/Ecological_fallacy


You said: '“Funding raised” is completely uncorrelated with how successful the company is.' I showed you this statement is wrong by providing a statistic that shows they are correlated. We were not talking about any specific case, so I don't know why you've opened with that. The coin flip statement is true but also completely unrelated to our discussion.


That is a form of machine learning. It uses an algorithm with an enormous ability to look ahead, and selects moves according to heuristics that have previously been shown to be most likely to create the desired end state.

Chess is not a solved game from the starting state, so it has to make assumptions based on the data it has. This is machine learning. Please don't make such definite statements. You even say it in your own comment, "selects the best possible tree of outcomes": how do you think it selects this? It uses heuristics to assign values to different board states, and in the case of Deep Blue these values were created through previous game analysis. If a knight to c2 on turn 8 is rarely seen in the same game as a winning board state, then this is valued lower. Looking through the tree wasn't the ML part, but knowing how to pick the best node on the tree was. Deep Blue is ML.
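To sketch that division of labour (toy code with invented feature weights, not Deep Blue's actual evaluation): the tree walk below is purely mechanical; which node looks "best" is decided entirely by the heuristic values.

    # Toy game tree: each node carries heuristic features and child states.
    WEIGHTS = {"material": 1.0, "mobility": 0.1}  # made-up feature weights

    def evaluate(node):
        # The search is mechanical; this weighting is the "learned" part.
        return sum(WEIGHTS[f] * v for f, v in node["features"].items())

    def minimax(node, depth, maximizing):
        children = node.get("children", [])
        if depth == 0 or not children:
            return evaluate(node)
        scores = [minimax(c, depth - 1, not maximizing) for c in children]
        return max(scores) if maximizing else min(scores)

    leaf_a = {"features": {"material": 3, "mobility": 8}}
    leaf_b = {"features": {"material": 2, "mobility": 30}}
    root = {"features": {}, "children": [leaf_a, leaf_b]}
    print(minimax(root, 2, True))  # picks whichever leaf the weights favour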


> It uses heuristics to assign values to different board states, and in the case of Deep Blue these values were created through previous game analysis.

Unless I’m misinformed, this part isn’t true. The heuristic was hardcoded with the help of human experts.


From the paper "Deep Blue" by Murray Campbell et al. (people who worked on Deep Blue): "The initialization of the feature values is done by the “evaluation function generator”, a sub-program which was run on the master node of SP system." This would suggest it generated the heuristic values itself. The features may have been hardcoded, but assigning values to them wasn't. In addition, feature values could be static or dynamic, meaning it would update the dynamic ones depending on the board state. So it not only generated the heuristic values (feature values), it could modify them to reflect their relative change in impact throughout the game.
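Purely as an illustration of what static versus dynamic feature values could mean (feature names and numbers invented, not taken from the paper): a static weight stays fixed for the whole game, while a dynamic one is rescaled from the current position before evaluation.

    # Illustrative only: one fixed weight, one adjusted from the position.
    STATIC_WEIGHTS = {"material": 1.0}

    def dynamic_weights(board):
        # Invented rule: mobility matters more as the board empties out.
        openness = 1.0 - board["piece_count"] / 32.0
        return {"mobility": 0.1 + 0.3 * openness}

    def evaluate(board):
        weights = {**STATIC_WEIGHTS, **dynamic_weights(board)}
        return sum(weights[f] * board["features"][f] for f in weights)

    early = {"piece_count": 30, "features": {"material": 2, "mobility": 10}}
    late = {"piece_count": 10, "features": {"material": 2, "mobility": 10}}
    print(evaluate(early), evaluate(late))  # same features, different values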


Very interested to watch the second part of this video. Off the top of my head I can't come up with a situation in which analogue computation or signals are better than digital ones. Digital's versatility means we are making two signals represent an infinite number of other possible values, so there is certainly an inefficiency there, but the analogue signal's propensity to degradation and uncertainty is another hurdle I would find hard to overcome while producing a better computer.


Pure speculation ahead.

It doesn't have to be better in an absolute sense; it just has to be good enough at a cheaper price, lower power usage, smaller footprint, etc.

I think a lot of floating point calculations could fall into this. For example, in neural nets, maybe there are analog versions of the weight calculations, the sigmoid function and so on.
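To make that half of the speculation concrete with a toy simulation (the noise level is a made-up assumption, not a measurement of any real circuit): if an "analog" sigmoid came back with a couple of percent of noise, a neuron's output would usually barely move, which is why good-enough-but-cheaper seems plausible.

    import math, random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def analog_sigmoid(x, noise=0.02):
        # Pretend an analog circuit computes the sigmoid with additive noise.
        return sigmoid(x) + random.gauss(0.0, noise)

    weights, inputs = [0.5, -1.2, 0.8], [1.0, 0.3, -0.7]
    pre = sum(w * v for w, v in zip(weights, inputs))
    exact, noisy = sigmoid(pre), analog_sigmoid(pre)
    print(exact, noisy, abs(exact - noisy))  # typically off by a percent or two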

And for graphics, you don't really need the exact color value of each pixel. Maybe those could be estimated in analog functions too.


I certainly agree with the idea of not being better in an absolute sense, but I'm not sure I agree with both use cases. Graphics are built around digital representations of colours and shapes. Vectors are an incredibly easy way to represent 2D graphics, and are very easy for digital computers to manipulate; polygons were quickly discovered as a memory-efficient way of doing the same thing in 3D space. Analogue graphics representation or manipulation became outdated very quickly. For example, https://www.youtube.com/watch?v=0wxc3mKqKTk&ab_channel=VICET... shows how much old analogue machinery is required to replicate what could currently be done by most phones. I don't know enough about your other possible use case to comment on it.


What I was imagining was the scene still being represented digitally with polygons, but with the shader benefiting from analog functions. Say you could do functions like sine and logarithms faster/better/cheaper. You'd get the same image, but with some added noise. Again, it's just pure speculation on my side.

That video was amazing, by the way!


High frequency signal processing is an obvious example of a case where an analogue computer can be superior under certain conditions. Say you want to detect when a signal has risen above a certain average magnitude over a particular time window. You can quite easily do that using a few op amps and passive components, even up to GHz frequency signals. To do the same thing digitally would require high end ADCs and either a very fast CPU or an FPGA. If your budget is tight then even frequencies of 1MHz might prove challenging to process digitally.
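For comparison, a rough sketch of the digital equivalent (window and threshold made up for illustration): a sliding-window average of the signal magnitude checked against a threshold. The analog version is roughly a rectifier, an RC low-pass and a comparator; the digital one has to execute once per sample, which is where the fast ADC and CPU/FPGA requirement comes from.

    import math
    from collections import deque

    def magnitude_detector(samples, window=64, threshold=0.5):
        # Running mean of |x| over a sliding window, compared to a threshold.
        buf, total = deque(), 0.0
        for i, x in enumerate(samples):
            buf.append(abs(x))
            total += abs(x)
            if len(buf) > window:
                total -= buf.popleft()
            if len(buf) == window and total / window > threshold:
                yield i  # sample index where the average magnitude is high

    quiet = [0.1 * math.sin(i / 5) for i in range(200)]
    loud = [1.0 * math.sin(i / 5) for i in range(200)]
    hits = list(magnitude_detector(quiet + loud))
    print(hits[0] if hits else "no detection")  # fires soon after the loud section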

This is probably one of the reasons why analogue fly by wire flight control systems existed quite a way into the digital age. The original Su-27 had an analogue fly by wire flight control system, for example.


https://youtu.be/vHlbC74RJGU

I watched this talk, which describes the current von Neumann computer architecture as "analog communication with digital computing". This consumes more energy than digital communication with analog computing, which is the approach taken by projects like Neurogrid, Intel's Loihi chip, and pretty much any system that can efficiently run spiking neural networks.

Neuromorphic computing is where this is going.


The resurgence of analog computing is a recurring hype cycle. I guess this is the latest iteration the video is referring to:

https://spectrum.ieee.org/analog-ai

Here is another random article, from 2019: https://semiengineering.com/using-analog-for-ai/


Analog washing machines had one nice practical feature: you could force the "program" counter forwards or backwards. This was especially practical if you were on a tight schedule and the program contained unneeded parts; you could skip them manually. Of course you had to know what you were doing, like not opening the door with water in the machine.


Open source! In the old days, taking off the lid or back plate, you would find a full schematic of the analogue computer inside.


It’s been a while, but didn’t the program knob in those machines turn in discrete steps? If so, then that system was — to be pedantic — a mechanical digital computer, not an analog one.


The most direct analogue (ahem) would be a music-box dial or perhaps a Jacquard loom.

The washer cycles were driven by a clock which rotated a drum or cylinder with pegs that would start and stop specific actions: fill, agitate, drain, spin, rinse (fill, agitate, drain, spin), and spin-dry. The mechanisms were bog simple.

Whether you consider these analogue gear logic or digital pin memory is somewhat arbitrary, a semantic distinction. Either way, the "programme" is fixed and there is no interactive logic, only a pre-defined behaviour which is followed. Fill and drain were controlled via float switches, I believe.

Users could modify the routine somewhat by selecting different sections of the dial (which programmed different wash cycles) and by where within each the wash started (longer or shorter pre-soak), by selecting fill levels, and by selecting water temperature.


There is nothing that prevents a digital machine from exposing such UX to the user.


Ah, but there is: corporate culture and design culture!

Of course, there aren't many technical barriers, maybe some minor complexity in actually showing it and providing an interface.


The "leading edge" of most corner technologies are usually better in analog. For example, SDRs in radio are only effective up to a certain data rate and frequency bandwidth. At some point analog signal processing (in this case classic "analog radio") is more effective and often the only possible implementation.

Thankfully I work on the leading edge of several technologies and I'm trained in analog so I see this stuff all the time.


Indeed, something like converting the frequency of a laser to a usable clock signal has to be done in the analog domain, and not necessarily even in the electronic domain. Also, as Horowitz and Hill pointed out, getting higher-performance digital electronics to work requires understanding analog techniques.

I do some analog work too, but today's mantra is: Get it into the digital domain as soon as possible.


I have no experience with analog computers at all, but I think those issues could be less of a problem today. You could plug a bunch of digital sensors/controllers/actuators into the analog computing unit to monitor for them, which was simply not possible in the 60s. Also, you can check their accuracy against their digital equivalents or simulations, which are less efficient but yield better results.


Doesn't the inclusion of the digital accuracy checkers then decrease the efficiency, and mean you might as well use a completely digital computer? Just supposing here, but interfacing digital with analogue is probably a poor middle ground between the versatility and ubiquity of purely digital computers (countless systems already exist to do whatever you want, with optimised algorithms and chips to work with) and purely analogue ones (which presumably gain an efficiency advantage by not having to cater to versatile use-cases).


> Doesn't the inclusion of the digital accuracy checkers then decrease the efficiency, and mean you might as well use a completely digital computer?

Not really. Whatever output the analog computer returns can be digitized with no detriment to its performance, pretty much in the same way a sensor which measures a physical property can have its output fed into a digital system with negligible interference with the original measurement.

Also, the same rationale can be used to probe intermediate steps and automatically check their accuracy, even if only during the validation phase. This is a possibility that was definitely not available, say, 60-odd years ago.
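As a sketch of what that probing could look like (names and tolerance invented for illustration): record the analog unit's intermediate readings and diff them against a digital simulation, flagging anything outside a tolerance.

    def validate(analog_readings, digital_reference, tolerance=0.05):
        # Offline check: compare each probed analog value with the digital
        # reference and collect the indices that drift out of tolerance.
        return [i for i, (a, d) in enumerate(zip(analog_readings, digital_reference))
                if abs(a - d) > tolerance]

    reference = [0.1 * i for i in range(10)]   # slow digital simulation
    measured = [x + 0.01 for x in reference]   # analog unit, slight offset
    print(validate(measured, reference))       # [] -> within tolerance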


There are many options-backed ETFs; these are essentially tracking the right to buy x product at y price. Sure, if you trace it down long enough there is an asset there, but realistically you're buying an ETF tracking a number of contracts. The ETFs tracking oil futures aren't going to exercise these contracts and collect the oil, so the assets are never going to change hands; they'll sell them at a predetermined date. That makes them essentially contract ETFs. Is a contract for a good you're never going to buy in any other way more 'imaginary' than an electronic currency when compared to hard assets? I guess that's up to the consumer to decide.


Australia allows drinking in parks unless it's a designated no-drinking area (usually any area outside an alcohol store). That being said, different states in Aus handle alcohol differently and some are very strict. The major states of NSW and Vic are pretty chill though.


In Australia, all alcohol (yes, even beer and wine) can only be bought at designated liquor stores, the vast majority of which are only open 10AM–10PM, with a handful in Sydney and Melbourne open after midnight.

Further, public drinking is all but restricted to destitutes outside of NYE and Australia Day.

Contrast that with Germany. Every two blocks there's a Späti, many open 20–24h daily, each selling beer, wine, and spirits at all hours. You'll find hordes drinking in the streets, parks, and riversides anytime the weather permits. It isn't unusual to see people sharing a beer on the U-Bahn on the way to work.

One culture demonises (although indulges in) alcohol, the other embraces it.


I'm not sure that is Germany as a whole, though. Berlin, yes, but in Munich there are not nearly as many Spätis, and from the people I've interacted with, in smaller towns the only place to get a beer in the evening/night is the gas station.


True. Also, in Baden-Württemberg they banned selling alcohol at gas stations at certain times (iirc), and here in Bavaria there were some temporary bans (I think due to covid). But all this is /mostly/ because our shops (in Bavaria) aren't allowed to be open 24/7; it has nothing to do with alcohol per se.

And yes, in general I 100% agree with the local policy that beer is free to buy and consume from age 16. The majority of people (that I know) start drinking responsibly by age 20-25, and from the stories I've heard from other countries there are noticeably fewer hospital visits involved.


> in smaller towns the only place to get a beer in the evening/night is the gas station.

That is because that is the only place to buy anything. You won't buy bread in a small town at night either, because the stores are closed.


Saying Australian culture demonises alcohol is a laughable claim. Australian summers are about BBQs and beers on the beach.


Sadly, phone companies cannot exist in a vacuum. In the book "Losing the Signal" there's great evidence that Blackberry faced far more pressure than merely losing market share. Their push-based email service (the back end) had previously been the only one of its kind for phones; then Microsoft updated their version in 2006 so all other phones could have it, and Blackberry lost a key differentiator.

Then there was Verizon pushing them to release phones that would work well on their new 4G networks, while Blackberrys were being optimised for 2.5G as one of their CEOs was determined to preserve the selling point of great battery life.

The companies they relied on were not letting them fall into a niche product-producer role. They would have been pushed out of the market, not even by their competitors, by their partners.


"Losing the Signal" sounds like an interesting read, so adding to my reading queue.

Thanks for mentioning it.


In economic theory there is no benefit to diversification (which is what a company does when it absorbs another company) unless there are organisational improvements shared between the companies. If the acquired company is truly completely unrelated, there would be no benefit at all, as Apple isn't able to improve the running of an unrelated company. Being bought out by Apple might be a nice outcome for any company's shareholders (as they might get a premium on their share prices), but technically there would be no improvement to the business in terms of expected revenue.


The issue is then speed: you can't engrave at anywhere near the same speed as you can print.


Though funnily enough, they're named after the same Muddy Waters track.


And we also know that Papa was a Rolling Stone :-)

