Analog computing may be coming back (bellmar.medium.com)
99 points by mbellotti on Jan 31, 2023 | 84 comments



It's interesting that the black art of analog design never really goes away, even from computing. I was reading Geoffrey Hinton's "Mortal Computation" where he briefly speculated towards the end on low-power (in watts used, not capability) neural networks being embedded in hardware via something like memristor networks. It makes me imagine neural networks being distributed as a template (i.e. a blob of XML describing the topology and the weights) and then the network weights getting tuned a little differently for each device or device model to get the best performance per watt out of them. Since every device is physically a little bit different from the next, precision analog applications with discrete components require the components to be matched. This is often the case in assembling differential pairs in analog audio circuits. Similarly with the control-system parameters in hard drives: since each DC motor is a little bit different from the next, the control loop gets tuned at the factory for each drive. So in this case, the neural network gets matched to the circuit it's embedded in. Mortal computation indeed; each neural network becomes truly unique. I could be full of it, but it's fun to imagine at least.


The mark of a maturing domain is the evolution from only general tools to general + specialized. We've gone from only CPUs, to CPU + GPU, to specialized AI chips (Neural Engine, Tensor chips, etc.), and specialized computing is a big tent that can fit many different architectures together.

Analog computing is the closest thing to bioengineering in fundamental computer science that I know of, so I am confident that it will find a niche. I remember reading about Mythic AI here on HN, who were doing some cool work with analog computing chips for ML. My hunch is that matrix multiplication is the most expensive mathematical operation we do as a society (not unit expensive, but in overall absolute cost) - and our progress in AI is directly proportional to how easy / cheap it is to run.


Maybe I'm missing something, but wouldn't any optical computer have to still funnel signal through binary logic gates at some point? In what sense is that any more analog than (digital recordings on analog) magnetic tape decoded by a modem? The ultimate computation is still 1/0


Get a ruler. Draw a line 10 cm long, AB. Now grab end B and draw another line, BC, at a certain angle $\alpha$ with respect to the first. Now measure the distance AC.

Congratulations! You have just built an analog computer that uses the Law of Cosines to solve for line segment AC. Non-dimensionalize your result and it's a general LofC solver. A problem that (I suspect) would take the majority of modern-day Eng undergrads a week to program without the use of the math lib[1] can be solved by any keen middle schooler.

Now build a robot that measures AC for you and you have an API for your analog computer.
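
For comparison, here is a minimal digital sketch of the same computation in Python (my own illustration, not from the thread; it assumes alpha is the interior angle at B, given in degrees, and leans on the math lib the footnote jokes about):

    import math

    def law_of_cosines(ab: float, bc: float, alpha_deg: float) -> float:
        # AC^2 = AB^2 + BC^2 - 2*AB*BC*cos(alpha), with alpha the angle at vertex B
        alpha = math.radians(alpha_deg)
        return math.sqrt(ab**2 + bc**2 - 2 * ab * bc * math.cos(alpha))

    print(law_of_cosines(10.0, 7.0, 60.0))   # ~8.89, in the same units as the ruler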

Typically an analog computer is thought of as a set of opamps and diodes whose currents and voltages solve a set of non-linear ODEs; but that's a very narrow view. An analog computer is, ultimately, any physics experiment whose model is known.

Wind tunnel? Navier-Stokes analog computer

Cold atoms traveling through a double slit in a magnetic field? Analog Quantum computer

RLC circuit? Analog computer solving the response of a car's suspension.

[1] Code reuse and libraries are a big reason why digital computers are more popular for solving models nowadays. Cost and bandwidth are others. Ostensibly so is reproducibility. But if CS scientists cannot get reproducible builds, what hope does a humble physicist hacking on C or Matlab have?


'40s Navy videos showed many electromechanical computers using simple geometric blocks encoding sin x as a groove.

Coupled together, they embodied a mid-sized equation (or the other way around), live and reactive as we'd say today.

Kinda blew my mind.


The issues arise when converting the analog outputs of those physical experiments back to the digital domain for processing. High-speed, high-resolution analog-to-digital converters are not cheap, and often require different process-node technologies for implementation. The cost savings from not computing the models digitally have to be weighed against the cost of introducing the converters.


I agree, hence my robot API joke.


> Typically an analog computer is thought of as a set of opamps and diodes, whose currents and voltages solve a set of non-linear ODEs;

Oh oh oh! and also resistor/capacitor circuits! Great for integration or differentiation!

Integration and differentiation have interesting audio effects; maybe our ears are one of the better analog computers.


> Integration and differentiation have interesting audio effects

An integrator is just a low-pass filter and a differentiator is just a high-pass filter, so you'll most likely get "boring" audio effects ;)
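
A quick frequency-domain sketch of that equivalence (illustrative component values of my own, nothing from the thread): the RC low-pass response 1/(1 + jwRC) collapses to the ideal integrator 1/(jwRC) once you are well above the cutoff.

    import numpy as np

    RC = 1e-3                                      # 1 ms time constant, cutoff ~160 Hz
    f = np.array([10.0, 160.0, 1600.0, 16000.0])   # test frequencies in Hz
    w = 2 * np.pi * f
    H_lowpass = 1 / (1 + 1j * w * RC)              # first-order RC low-pass
    H_integrator = 1 / (1j * w * RC)               # ideal integrator (scaled by 1/RC)
    for fi, hl, hi in zip(f, H_lowpass, H_integrator):
        print(f"{fi:7.0f} Hz  |low-pass| = {abs(hl):.4f}   |integrator| = {abs(hi):.4f}")
    # Below the cutoff the two diverge; a decade or more above it they agree,
    # which is the sense in which "an integrator is just a low-pass filter".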


Our senses are incredibly good.


>> Now build a robot that measures AC for you and you have an API for your analog computer.

This line made me laugh out loud, but what an amazing explanation and set of examples. Thank you.

The original ECU in my 1980 Datsun ... is digital, right? I just never thought of the insane analog vacuum pot system that controls the cold start valve and the air conditioning selector and the cruise control to also be a "computer" but I guess it does compute things, in its humble way ;)


Glad to get a chuckle. I was more proud of my reproducible builds jab.

Does a 1980's Datsun have an ECU? That's not obvious to me. Even a 6502 would be an expensive add on at the time given car margins. Maybe a one bit controller?

But there are analog solutions to all these control situations. And you don't need vacuum tubes!

Measuring and adjusting the fuel-air mix is the carburetor's job. The carb doesn't just add fuel. It adds the right amount of fuel. Its needle calibrates it for different conditions (namely pressure).

The spark advance "computer" was super cool. Originally spark advance was a lever controlled by the user: faster rpm -> adjust for more advance. Eventually the lever was attached to a governor (a set of balls on a spinning linkage held by a stiff spring; it measures rpm) and the spark advance control disappeared under the hood.

The automatic version of your car would have had a (very complicated) hydraulic circuit in the tranny pushing pistons that measure throttle, rpm, velocity and choose the gear accordingly.

The manual version had a vast organic neural network doing the same job, but using just the sound pitch of the engine. The manual transmission was cheaper because it didn't include the neural network shifter, except on the very highest-end cars - think RRs - and then only as a service (early SaaS model).

German WW2 aircraft were fuel injected, so they probably used a governor attached to the throttle wire and a pressure gauge to guesstimate the mass flow and therefore the fuel to inject. There's a front-page HN submission in there somewhere - the fuel control system of those engines, or the original Mercedes Gull-wing!

All of the hydro-mechanical stuff described was slowly transistorized (but kept analog) in the 70s. In the late 80s to 90s everything became digital, except the transmissions, which took longer.


>Does a 1980's Datsun have an ECU?

of course https://www.classiczcars.com/forums/topic/63812-1980-280zx-e...

picture https://www.electro-tech-online.com/attachments/4-ecu-full-s... a ton of Hitachi chips, big one on top seems to be some kind of ADC.

https://en.wikipedia.org/wiki/JECS built on a Bosch license using the Hitachi HD46802P (Motorola 6802 clone). Here it is reverse engineered: https://github.com/eccs-reengineering/280ZX-Turbo-ECCS

Bosch shipped the analog-controlled D-Jetronic in 1967, based on US patents/designs from the fifties: https://jetronic.org/index.php/en/d-jetronic/51-history.

Afaik the first mass-produced digital ECU was shipped by GM in 1978 (Motorola 6802 based). Ford also shipped some Toshiba ECUs in 1979. Bosch's first fully digital ECU (Intel 8051 based) was https://en.wikipedia.org/wiki/Motronic in 1979, like Datsun/Nissan. Ford went full ECU in 1983 (https://en.wikipedia.org/wiki/Intel_8061)


You can do math with analog circuits. This was the original purpose of the opamp (operational amplifier) [1].

[1] https://en.wikipedia.org/wiki/Operational_amplifier


huh. So the distinction between analog and digital has nothing to do with whether the logic itself is binary, just whether the delivery of signal to gate is absolute or ranged? Something has to gate the signal, right? I always thought "analog" referred to processes that didn't reduce things to a binary at some step along the way...(?)


As far as the circuits are concerned, there’s no such thing as digital. For human engineers, digital is a convention. Well, technically, there are numerous digital logic conventions based on different voltage standards.

For example, you might decide that 0 volts is a logical 0 and 5 volts is a logical 1. If you get everyone to agree to this convention then you can build components that talk to each other. Unfortunately, it’s very difficult (impossible) to get to exactly 0 or exactly 5 volts. So instead you decide that anything less than 2 volts is a logical 0 and anything greater than 3 volts is a logical 1. This setup makes your circuits quite robust to noise.

To further improve things, you might decide that when you want to output a logical 0 you must produce a voltage less than 1 volt and if you want to output a logical 1 you must produce a voltage above 4 volts. This convention allows your system to continually correct voltages away from the undefined region (between 2 and 3 volts). A marginal input of 3.1 volts gets interpreted as a logical 1 and then output above 4 volts. This “self-correction” is what made digital computers the revolution they are.
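
A toy sketch of that convention in Python (the 1/2/3/4 V thresholds are just the illustrative numbers from the comment above, not a real logic-family spec):

    def read_input(v_in: float) -> int:
        if v_in < 2.0:
            return 0          # anything below 2 V reads as logical 0
        if v_in > 3.0:
            return 1          # anything above 3 V reads as logical 1
        raise ValueError("2-3 V is the undefined region")

    def drive_output(bit: int) -> float:
        return 4.5 if bit else 0.5   # outputs must land below 1 V or above 4 V

    # A marginal 3.1 V input is read as 1 and re-driven to 4.5 V, so the
    # accumulated noise is stripped off at every stage.
    print(drive_output(read_input(3.1)))   # 4.5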


so .. the only thing I could imagine making a system "analog" would be if each voltage were treated differently, rather than segregated into 0 or 1 as you just described. If a whole range of signal between 0 and 5 volts were directly output to something like a speaker system, that would be analog. I guess I'm wondering how this optical computer would get around the bottleneck of reducing everything to binary as you described with an electrical system.


You'd use an ADC.


opamps aren't binary


Digitizing the output of an analog computation doesn’t make it a digital computation. It’s still analog. The ultimate computation is not binary 1/0, it gets converted to binary after the computation. We may be able to save time or energy by changing representation.

Imagine an NxN matrix-matrix multiply. The computation part is (naively) N^3 multiplies. The conversion back to digital is only N^2 operations, and those operations may be much simpler than digital multipliers. If there's a way to do the N^3 multiplies in analog, then we can potentially save a lot by converting to and from binary to enable the analog phase.
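
Just the operation counting from that paragraph, as a sketch (no real analog hardware is modeled here):

    N = 1024
    multiplies = N**3             # naive digital matmul cost
    conversions = N**2            # ADC the NxN product back to digital
    # (DACing the two input matrices adds another 2*N^2, still O(N^2).)
    print(multiplies // conversions)   # 1024: conversions are a small fraction of the work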


Analog will likely come back, but for other reasons: neural networks don't require precise calculations, and Hinton's forward-forward networks put into hardware would be several orders of magnitude more efficient, even without photons. "AI inferencing is heavily dependent on multiply/accumulate operations, which are highly efficient in analog."

If you know of any startup working on this let me know because I'd love to join the revolution.


Almost one year ago, Veritasium interviewed https://mythic.ai/ in one of his videos on YouTube: https://youtu.be/GVsUOuSjvcg?t=898.


They ran out of money last year.


Coincidentally, I recently posted a collection of neuromorphic job openings to LinkedIn: https://www.linkedin.com/posts/brian-anderson-6739ba25_neuro...

My team at Intel Labs is hiring, as are a number of well-funded startups.


Neural nets, AFAIAA, usually rely on the digital simulation of analog processes. NN weights, inputs and outputs are all continuous values. I suspect an integrated all-analog NN chip will be along soon, and might run more efficiently and faster than the digital simulation.

I don't see why it shouldn't use photonics.


One of the things that I fully expect not to be successful is optical computing. There are just a lot of academic groups doing optics, and they like to invent new reasons why whatever they are up to is relevant. For physics reasons the integration density of optical compute elements is abysmal and will remain so forever. Other technologies like spintronics at least have a chance to work sometime in the future. There were projects on wafer-scale optical computing at MIT Lincoln Labs already in the 80s-90s, so this isn't exactly a new idea either. We have a new group at our institute doing "Neuromorphic Quantum Photonics"; they publish in high-impact glossy journals, but that doesn't change that it is, in my opinion, mostly hype and bullshit.


> For physics reasons the integration density of optical compute elements is abysmal and will remain so forever.

Could you give some details? Claims about "forever" often don't hold up. I guess you're referring to things like component size in relation to the wavelength of light used? One could use smaller wavelengths. Integrated photonics is certainly being done and also commercially relevant (in telecommunications). What integration density would you consider not-abysmal? How much does integration density matter if you have very low loss (which means low power dissipation, a huge problem for semiconductor electronics) and can just make big chips?

There is also research arguing that optoelectronics might eventually be very useful for computing, e.g. recently [1]. (Yes, this is by researchers who need to appear relevant. However, if we dismiss their arguments based on that alone, we can abolish all research altogether.) Why do you disagree? Again, you were talking about forever.

[1]: https://www.nature.com/articles/s41467-022-29252-1


> I guess you're referring to things like component size in relation to the wavelength of light used? One could use smaller wavelengths

The same issues that affect electronic VLSI manufacturing also apply to trying to use light on-chip. The semiconductor industry had to transition to EUV (13.5nm) light to make it work. But that has huge and inefficient light sources.

Photonics makes sense if one end of your system has light on it; if you're building a LIDAR system, or data transmission over fiber, or somesuch. I have not yet seen anyone doing computation at scale in light.


Visible and near-ultraviolet light have wavelengths of roughly 300-800 nm, while current-gen transistors have a pitch of ~40 nm. This gets worse because scaling is actually either quadratic (2D) or cubic (future 3D integration). So we are talking about a 100x to 1000x spatial scaling disadvantage at the moment. The only redeeming quality of light is wavelength multiplexing, but that is only useful for a subset of applications, like optical communication and (maybe) convolutions (see below).

Moreover, even in a hypothetical scenario where we somehow found materials suited to smaller wavelengths, the de Broglie wavelength of an electron is ~1000x smaller than that of a photon at the same energy. So in terms of integration density, electrons will always have a 10^6 - 10^9 (2D - 3D) theoretical advantage over photons, which means that investment in electron-based computation has a much more likely eventual payoff.
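
The ~1000x figure checks out with a back-of-the-envelope calculation (my own sketch; it assumes a non-relativistic electron and compares both particles at the same energy E, so lambda_photon = hc/E versus lambda_electron = h/sqrt(2mE)):

    import math

    h, c, m_e, eV = 6.626e-34, 2.998e8, 9.109e-31, 1.602e-19   # SI units

    for E_eV in (1.0, 2.0):                        # near-IR and visible photon energies
        E = E_eV * eV
        lam_photon = h * c / E                     # photon wavelength
        lam_electron = h / math.sqrt(2 * m_e * E)  # electron de Broglie wavelength
        print(f"{E_eV:.0f} eV: photon {lam_photon*1e9:6.0f} nm, "
              f"electron {lam_electron*1e9:.2f} nm, ratio ~{lam_photon/lam_electron:.0f}x")
    # ~1000x at 1 eV, ~700x at 2 eV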

Take for example https://www.nature.com/articles/s41586-020-03070-1; they have a bunch of projections for what they hope to achieve over time. The most fantastical figure they give is 50 peta-MAC/s, but this doesn't take into account the PCM programming time.

If you take a look at the supplementary material https://static-content.springer.com/esm/art%3A10.1038%2Fs415... it becomes clear that they currently have a much lower TOPS/Watt figure than current-generation ML ASICs like the TPU, and this neglects all the expensive experimental optical equipment they would need to miniaturise. So even in their most favourable comparison they are 5x worse. Most of these papers, unfortunately, are full of hype and claims like that.


A 100x loss in density isn't necessarily a problem. Optical systems have the potential for much higher clocks (since there's so much less heat), and the actual logic on a CPU is tiny. If you can get 50 GHz clocks, you can lose a lot of density and still win out (I'd take 10x single-core perf over 10 cores any day of the week).


Even optical computing needs lots of standard electronics for storage etc. As you point out, these consume most of the area, so we are talking about something that consumes 100x more area for a function which is not the main energy-expensive thing in traditional computers anyway; memory and communication make up at least 1/3 of the energy budget, and that doesn't go away by making the rest optical.

In fact, reprogramming the optical non-linearities is typically much slower and more energy-intensive than retrieving and flipping some bits, which makes non-static, non-time-multiplexed computation extremely slow compared to whatever the "best case" static scenario is.


A friend of mine who was researching optical computing around 5 years ago always said "Nah, not even near". Not that it ain't gonna happen, but only basic interfaces seemed feasible at the time. (But I'm not familiar with the field and I could be wrong or not remember exact details.)


More power to them if it works; you can do cool things with it for sure. I am continuously amazed at what you can do in quantum optics etc., but it isn't anywhere near practical except for optical communication.


Analog computers can perform matmul operations without data movement using physical properties such as conductance changes in a very small volume. If noise and random variation can be modelled successfully then in certain cases they are obviously going to be better (e.g. energy use in edge applications). The discussion is not specific enough to applications to be useful. AI data centres are not going to be using this stuff anytime soon for example. On the other hand, you do not want an NVIDIA GPU inserted into your body.


This topic cannot miss a post mentioning Bernd Ulmann. Here's a video where he demonstrates the Joukowski profile: https://www.youtube.com/watch?v=nP84Dv01y4A


Analog computing sucks. It's inherently sensitive to noise and random variation in your devices that's basically ubiquitous with modern chip-making processes. Digital electronics actively reject noise at every step in a computation, albeit at the cost of wasting energy in the process. With analog, a more complex computation becomes exponentially harder.


There are forms of analogue computing beyond solving DEs with circuit theory. One of the more esoteric examples I'd heard of was using DNA microarrays for massively parallel SAT solvers: combinatorially generate a very large number of different, long single strands of DNA using clever chemistry, flow over fluorescently labelled short pieces of DNA that encode your problem, and then "simply" read off the result. Being able to exploit the fact that Avogadro's number is absolutely massive is a big speed-up for some particularly nasty NP-hard problems you want to brute-force. [1]

[1] https://www.nature.com/articles/35003155


One of the potentially large applications for optical analog computing is for training neural networks, which is an application where noise is a feature and is tolerable. People are already using noise (and low precision) intentionally for regularization and also doing things like intentionally not synchronizing GPU kernels for performance, which causes the inputs for the next round to be in a potentially random state when read, and noticing that the regularization side effects on the network are positive.


> inherently sensitive to noise and random variation in your devices

What is a drawback to most, can be a benefit for others.

Many like listening to LPs not because the quality is superior, but because the quality is worse.


What if randomness is desirable? What if volume of signals to process is so humongous, some (controlled) noise is actually beneficial? Analog computing sucks at processing digital signals, the same way digital sucks at finely approaching the analog domain. Even at light-speeds there's only so much a bunch of zeroes-and-ones can do.


> It's inherently sensitive to noise and random variation

Musicians often like old analog equipment because it creates unexpected sounds and sparks ideas (pun partially intended), and may have a nostalgic feel that's hard to emulate in digital. Maybe digital equipment can have randomness algorithms, but they probably have a learning curve.


Analog computers are like LP records: only good for nostalgia.


Depth of sound, long-term storage, ease of navigation: there are lots of reasons people prefer vinyl that aren't based in nostalgia.


What do you mean by depth of sound? A vinyl record has a dynamic range of around 70 dB, whereas a CD (16-bit) has a dynamic range of up to about 96 dB in theory and usually 90-96 dB in practice, and 24-bit digital masters reach roughly 144 dB. Either way you shake that stick, digital audio (uncompressed) has a much deeper sound.

Even back in the days of vinyl pressings, the recording studio ran on studio-grade magnetic tape with a higher dynamic range (though not by much), but the sound the artist was going for was typically hindered by analog recording media in a way that isn't true today.
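
For reference, those figures come from the standard rule of thumb for ideal N-bit quantization, roughly 6.02*N + 1.76 dB of dynamic range (my own sketch, not from the thread):

    def dynamic_range_db(bits: int) -> float:
        return 6.02 * bits + 1.76    # ideal quantization noise floor

    for bits in (16, 24):            # CD audio, high-resolution digital masters
        print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB")
    # 16-bit: ~98 dB, 24-bit: ~146 dB; vinyl playback sits somewhere around 60-70 dB.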


Often vinyl records are much more carefully mastered when compared to their CD counterparts. Many of us are survivors of the loudness wars. https://www.yoursoundmatters.com/vinyl-vs-cd-in-the-loudness...


Ugh, this frustrates me. You are just saying that the _mix_ put on vinyl is better than the mix put on CD. That same mix put on CD would be even better due to higher fidelity of the format. Vinyl is a strictly worse format... unfortunately it's also an easier source of better mixes.


Not a vinyl purist, but the argument is that the same mix sounds different pressed into vinyl vs burned to disc because they reproduce the frequencies differently when read. You would have to engineer the CD mix differently to match the sound of the vinyl.


I mean... So you can't record high frequencies on vinyl... if you WANTED the CD to sound like vinyl you could intentionally roll off the high frequencies. But why would you want to do that?

It's not really a case of them "producing different frequencies when read." It's that CDs are capable of recording and reproducing a broader range of frequencies than vinyl.


That's backwards. You mix, and then master to your format. When you master to vinyl, you inherently lose data because the needle will pop off the track if it's too loud or there's too much bass. You literally don't have to do anything like that when you master for digital. You just make it sound good and you're done.


I believe I read that when mixing for vinyl they presume higher grade speakers than those of CD owners so they optimise for the assumed audio setup of the format buyers.

Which is a shame for me, since I have an entry-level audiophile setup and just stream, and have little desire to get into vinyl.


That's false.


LPs are really bad at "long term storage".

You can't read them without damaging them slightly. You can't make a perfect copy, so when the vinyl deteriorates, you will not be able to "make a fresh copy".

CDs can be copied perfectly and can be moved to superior tech, e.g. SSD. You can continue copying a digital version for the next 1000 years without any loss.


Depth? You mean like a large range in available amplitudes (loud vs soft)?


Probably, and they're wrong.


Hope you aren't playing those records on un-conditioned cables.


No DRM and the fact that you actually own it is a big plus.


Like a CD?


Or an mp3, ogg, wav, flac


Fuck no. Vinyl is inferior to modern digital media in every single measurable way. I cannot believe this myth won't fucking die! You literally have to roll off the low end in a vinyl master so the needle doesn't pop off the track ffs. Not to mention the vinyl literally wears down every time you play the record.


Sounds a lot like "quantum computing".


Quantum computing is digital (with error correction), not analog. Analog computing can not scale *in principle*. Quantum computing can theoretically scale (it is not forbidden by the laws of Nature like it is for analog computing); it is just "difficult" engineering.


From Travis Blalock (first real optical mouse) Oral History:

"each array element had nearest neighbor connectivity so you would calculate nine correlations, an autocorrelation and eight cross-correlations, with each of your eight nearest neighbors, the diagonals and the perpendicular, and then you could interpolate in correlation space where the best fit was. "

"And the reason we did difference squared instead of multiplication is because in the analog domain I could implement a difference-squared circuit with six transistors and so I was like “Okay, six transistors. I can’t do multiplication that cheaply so sold, difference squared, that’s how we’re going to do it.”

"little chip running in the 0.8 micron CMOS could do the equivalent operations per second to 1-1/2 giga operations per second and it was doing this for under 200 milliwatts, nothing you could have approached at that time in the digital domain."

The Avago H2000 chip did all the heavy lifting _in the analog domain_. No DSP; it was too expensive to do in the digital domain (compare the cost of the first civilian handheld GPS receivers, which also do heavy autocorrelation: the 1998 Garmin StreetPilot was $400-550 retail).
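
A rough software sketch of the scheme described in those quotes (this is just NumPy, nothing like the six-transistor analog cell; the real sensor also interpolated in correlation space, which this toy skips by taking the plain minimum):

    import numpy as np

    def best_shift(prev: np.ndarray, curr: np.ndarray) -> tuple:
        # One auto- plus eight cross-comparisons against the nearest neighbors,
        # scored with a sum of squared differences instead of multiplication.
        scores = {}
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                shifted = np.roll(prev, (dy, dx), axis=(0, 1))
                scores[(dy, dx)] = np.sum((curr - shifted) ** 2)
        return min(scores, key=scores.get)

    rng = np.random.default_rng(0)
    prev = rng.random((16, 16))
    curr = np.roll(prev, (0, 1), axis=(0, 1))   # the scene moved one pixel
    print(best_shift(prev, curr))               # (0, 1)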


As I understand it, analog computing is entirely impractical simply from a Information Theoretic viewpoint.

For a signal to convey 8 bits' worth of information, it will need to have 256 distinct levels. If we want the signal to range from, let's say, 0 to 5 V (which is already quite high), each level only has about a 20 mV range. This much can easily come from cross-talk, EMI and power supply noise. So all your logic/calculation will be wrong.

Once you start talking about 16 bits, it becomes entirely ridiculous: we now can only have 75uV range for each level. This is getting into RF interference territory - just receiving a phone call close to such an analog signal would disrupt it.

The way I understand it, there is simply not enough SNR available in our electronics (on-die traces or PCB traces) for analog computing to work. That's why we restrict the number of levels we use in our signals: digital being just two levels, but even with higher level counts, we typically use 4 or 8 levels. This is somewhat analog, but not really.
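
The arithmetic behind those numbers, for anyone who wants to poke at it (just a sketch of the level spacing for a 5 V swing):

    for bits in (8, 16):
        levels = 2 ** bits
        spacing_mV = 5.0 / levels * 1e3
        print(f"{bits} bits: {levels} levels, ~{spacing_mV:.3g} mV per level")
    # 8 bits: ~19.5 mV per level; 16 bits: ~0.0763 mV (about 76 uV) per level.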

I am not an EE, so I am entirely open to being corrected on this.


Analog computing has been used by musicians for awhile now.

Synthesizers are basically analog computers. Bob Moog was an engineer whose genius was figuring out how to connect keyboards to lab equipment and how to hide enough of the guts to make the gear approachable to musicians. West Coast synthesists like Buchla took the opposite approach of appreciating the sound of analog computing for what it is.

Synthesizers tried to hide it for awhile behind layers of user interface, but especially with the Eurorack boom of the last decade or so you can really see that synthesizers are simply specialized analog computers. Lots of synth modules openly use the same terminology as analog computing: filters, amplifiers, multipliers, low-pass gate, sample and hold, sequencer, etc. Musicians like Hainbach use actual test equipment in their music.

Guitar effect rigs are also basically analog computers. They're just not used for numerical computation.

Update: The Signal State is a Zach-like game where you solve puzzles by programming analog computers; it was inspired by Eurorack synthesizers.


No, synthesizers and guitar pedals are as much computers as a trumpet. "It translates the lip movement through the horn and amplifies the sound, thusly it's multiplying and therefore a computer!"

That is stretching the definition of a computer to absurdity, sorry.



It seems to me that punters overly excited to declare that everything we've ever done up to this point is wrong and that there's a new paradigm coming that will obsolete everything, as well as people being critical of the new thing, tend to miss a very important aspect of digital computation that will prevent analog computation from ever simply "taking over": digital computation is essentially infinitely composable.

It does not matter how many gates you throw your 1s and 0s through; they will remain ones and zeros. While floating point numerical computations carry challenges, they are at least deterministic challenges. You can build things like an SHA512 hash which can ingest gigabytes upon gigabytes, with the entire computation critically dependent at every step upon all previous computations in the process, a cascading factor of literally billions and billions, and deterministically get the exact same SHA512 hash for the exact same input every time.
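
A tiny illustration of that composability using only the standard library: chain a million SHA-512 computations, each one critically dependent on the previous, and the final digest is still bit-identical on every run and on every conforming machine.

    import hashlib

    digest = b"analog vs digital"
    for _ in range(1_000_000):                  # a million cascaded dependencies
        digest = hashlib.sha512(digest).digest()
    print(digest.hex()[:16])                    # same output, every time, everywhere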

This property is so reliable that we don't even think about it.

Analog computers can not do that. You could never build a hash function like that out of analog parts. You can not take the output of an analog computer and feed it back into the input of another analog computation, and then do it billions upon billions of times, and get a reliable result. Such a device would simply be a machine for generating noise.

Analog computing fits into the computing paradigm as another "expansion card". It may take isolated computations and perform them more efficiently. Perhaps even important computations. But they will always be enmeshed in some digital computer paradigm. Breathless reports about how they're "coming back" and coming soon and taking over are just nonsense. (I speak generally, this walled-off article may or may not have made such claims, I dunno.) So many things about how digital computers work that you just take for granted are simply impossible for analog computers, structurally; something as simple as taking a compressed representation of a starting state for your analog computer is something you need a digital computer for, because our best compression algorithms have the same deep data dependencies that I mentioned for the hashing case.

Useful, interesting, innovative, gonna make some people some money and create some jobs? Sure. Something we should all go gaga over? No more than a new database coming out. It's going to be a tool, not a paradigm shift.


I think that the author is wrong.

An analog computer's weak point is the power supply, to the point that many companies making them ended up having to manufacture their own to very high standards, with things like big capacitors at 0.1% tolerance. The reason is that the analog voltage itself carries the value, so poor power regulation leads to inaccurate results.

With newer analog computer setups more is integrated into the chip itself, making power supply issues much less of a problem.


On the other hand, making large passives on silicon is extremely challenging to do accurately, and relatively expensive in terms of area. Most power regulators rely on external caps for this reason.


Another interpretation of the name "analog computers" is that they compute by analogy - by simulating the problem in a different medium. When analog computers were created, there were no digital computers to force the distinction between analog (continuous) and digital (discrete). There were mechanical computers that used gears and shafts, with angular speed as the main variable; electronic computers that used voltages; a famous hydraulic computer that used water levels and rates of flow to represent variables; etc.

You start by examining your problem in mathematical terms and write down the differential equations that describe it. For example, you have a model of a car suspension with parameters for spring stiffness, damping etc. You put the model into the computer, and play with the parameters to see how things work. Not that different from modern simulations. The one advantage over modern simulations is that you might get a better "feel" for the system - or so the proponents of the analog used to say, before digital computers replaced them.
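
A minimal digital stand-in for that workflow (parameter values are made up): a quarter-car suspension as a mass-spring-damper, m*x'' + c*x' + k*x = 0, stepped with plain Euler so you can play with the stiffness and damping the same way.

    m, k, c = 300.0, 20_000.0, 1_500.0   # mass (kg), spring stiffness (N/m), damping (N*s/m)
    dt, x, v = 0.001, 0.05, 0.0          # time step (s), initial 5 cm bump, initial velocity
    for step in range(2001):             # simulate two seconds
        a = (-k * x - c * v) / m         # acceleration from spring and damper forces
        v += a * dt
        x += v * dt
        if step % 500 == 0:
            print(f"t = {step*dt:.1f} s   x = {x*100:6.2f} cm")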


Probably the best 20 minutes you can spend if you haven't really heard of analog computers.

https://www.youtube.com/watch?v=IgF3OX8nT0w


gotta love Veritasium. Every video feels like time well spent.


isn't a quantum computer a kind of analog computer? Wikipedia says "An analog computer or analogue computer is a type of computer that uses the continuous variation aspect of physical phenomena" https://en.wikipedia.org/wiki/Analog_computer


Quantum logic gates perform discrete operations on the quantum states of qubits. That is, there's a finite number of quantum states that qubits can end up in (in the ideal quantum computer).



paraffin lamps may be coming back, too


The absurdity of suggesting optical computing is a good pathway to efficiency is that our brains efficiently use electrons and are doing just fine.


Hmm, I think that is a non-sequitur. How does the brain's use of electrons say anything about the efficiency of photons for computing?


Our brains are subject to very different design constraints. Wheels are very efficient, but nature doesn't use them because the environment and the exigencies of biological reproduction and repair indicate other strategies.

It really isn't a simple thing.


Interesting analogy, and I see your point. What I'm trying to say is, we know that there's a way to perform certain computations that's orders of magnitude more efficient than anything we've ever achieved. We have a working example of it. And yet, we choose to develop a wholly different technology, from scratch, that's unproven, instead of trying to emulate or understand what we already have.


We are far from having a model of how the brain really works.

Furthermore, I know it's a really common analogy, but the brain is not really comparable to a computer. The brain is not programmable; it only has a single function, which is "given x input, what output is more likely to keep the organism alive and well". It's a complex task, for sure.

But that's not a computer.

A computer is a machine on which you can run arbitrary programs. I can't plug some wires in your brain and program it to do what I want. It's not that the sockets for my wires are missing, it's just that the physical structure doesn't allow generic computing. If you really wanted to make an analogy, you could say that the brain is an electric circuit. You cannot program an electric circuit. It does what it's wired to do. You can't say that your electrical circuit is faster than a computer. It makes no sense.

So it's a false assumption to say that brains are faster than computers because it's just something that you cannot compare.


> The brain is not programmable

But that is not true, is it? You take a person and train him/her the right way and you will get a fighter pilot, or an equestrian or a poet. The difference between your fingers and the fingers of the finest goldsmith or cellist is thousands of hours of practice. And that didn't change their fingers, it reprogrammed their brain.

So yeah, you can't upload a new program to it with a USB port, but it definitely can be programmed.


No, training is not programming.

Training is feeding the "complex electrical circuit" some input data, repeating it ad nauseam in the hope that somehow the machine manages to store and interpret enough of this knowledge to do something with it.

Programming is just throwing data somewhere into some memory and instructing the processor to interpret this data as instructions to follow.

There is no such thing as a processor in the brain that somehow would read some memory somewhere and execute a set of actions.

It’s just « eyes see cake. I know that grabbing it and eating it releases dopamine so I’ll send the required signals to get this into mouth » it’s not ./eat_cake -f --ignore-consequences

I doubt that computers would be as useful as they are if some coach had to train brain-based computers what is Excel and how it should work.


This feels a little bit like a rhetorical trick. You're right that there is _something_ similar about training a person and programming a computer, but I'd hardly call the two things directly analogous. It strains the definition of the word train to say you trained a computer with a python script and it strains the definition of the word programmed to say you programmed a person to play the cello.

Tech people see everything as essentially similar to tech, but it's come to be my understanding that biological systems are a fundamentally different kind of thing than technological ones.



