The Art of Electronics, 3rd Edition, to be released April 2015 (cambridge.org)
227 points by jotux on Jan 28, 2015 | 96 comments



I learned from the 1st edition, back in 1983, when I took the electronics course for physics majors. I think it's the best textbook I've ever used, in any subject. I fell in love with analog electronics. Note my HN user name. ;-)

What's happened with analog since then? I believe that opto-electronics is much more important today, as are switchmode circuits. Some things, like systematic noise budgeting, will never go away, even if we have to learn the behaviors of some new devices. Also, it's probably harder to get away with having just a limited understanding of high speed circuits and transmission lines today. But something I like about analog electronics is that physics will always be physics.

If you liked H&H, you might also like: Building Electro-Optical Systems: Making It all Work, by Phil Hobbs.


What happened? Digital and Moore's law took over. The microprocessor continues to absorb every function that, at one point, made economic sense to do in the analog domain. I share your lament, but reality instructs us to condition signals and supply voltage properly, and let the micro take it from there. It's only going to become more true as time moves on, so learn programming. Get very good at it.


Upvoted. I definitely agree that many erstwhile "analog" functions have joined the dark side. ;-) And I've been programming digital / analog systems since the early 80s.

But I think that two challenges / opportunities remain for analog. The first is that advances in digital capabilities place new demands on analog. For instance, the possibility of higher sampling rates and bit depths requires us to go back and update transducer preamps.

Second, some archaic skills can still be lucrative if the number of practitioners decreases faster than the number of remaining applications. I'm still betting my career on the need for somebody to care about physics, while also knowing how to program. At my workplace, I am possibly the last remaining person who knows how to compute a noise budget that includes transducers, analog signals, and digital processing, in an instrumentation system.
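
A noise budget like that mostly comes down to referring every stage's contribution to the same point and summing the uncorrelated sources in quadrature. A minimal sketch with made-up numbers (Python, purely illustrative):

    import math

    # Hypothetical noise sources, all referred to the transducer input, in nV/sqrt(Hz)
    sources_nv = {
        "transducer thermal noise":       4.0,
        "preamp voltage noise":           3.0,
        "preamp current noise x Rsource": 1.5,
        "ADC noise referred to input":    2.0,
    }

    bandwidth_hz = 10e3  # measurement bandwidth

    # Uncorrelated sources add in quadrature (root-sum-square)
    total_density = math.sqrt(sum(v ** 2 for v in sources_nv.values()))
    total_rms_uv = total_density * math.sqrt(bandwidth_hz) / 1e3  # nV -> uV

    print(f"{total_density:.2f} nV/sqrt(Hz), {total_rms_uv:.2f} uV rms over {bandwidth_hz/1e3:.0f} kHz")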

Horowitz and Hill had a chapter, "Digital Meets Analog", that discusses ways digital designers have to keep up with analog concepts. The ways that digital systems can go wrong are often analog.


I would still add another challenge for analogue: power harvesting. That's an environment where the amount of energy you can collect is so low that you simply can't afford to run a uC. However, you still need to manage that negligible amount of energy, because in the long run it builds up, and then awesome stuff happens (switching a uC on, for example).

I bought the 2nd edition 15 years ago, I loved it, and I now cherish that useful book in my bookcase.


Really curious: what's your job?


I work for a company that makes instrumentation. It's one of those "okay, now what am I going to do with a physics degree" kinds of things, but it suits me well. And I've bounced in and out of management. Today, there are engineers who are ahead of me in electronic design, but I'm good at figuring out the grand overview of how products work, and like to solve weird problems. Much of my programming is in the service of solving problems, rather than creating products.


My understanding is that high-fidelity audio equipment is predominantly analog and does attract people with said skill set.


And just how many serial busses, wireless networks, HDD heads, flash level detectors, and DRAM sense amplifiers did those words of yours have to travel across to get from you to me compared to 10 or 15 years ago?

Analog only looks like it's falling in importance if you divide by the size of the digital industry.


Don't forget traditional RF. How do you build a microwave low noise RF preamp using a microcontroller? Or a 1000 watt solid state broadcast transmitter using a microcontroller?


As an electronics undergraduate most of us preferred digital electronics, analogue seemed like black magic. I'll always remember one of our analogue electronic lecturers scoffing at us when he had to cover a digital lab one day, saying digital electronics is just high gain analogue.


I am too young to have been there for the digital vs analog debate, but I understand that things being digital was not always obvious. The reason digital won is because, very crudely, to double the precision of a digital circuit you just give it wider registers (scale horizontally), while to double the precision of an analog circuit you have to more than double the precision of every component of the system (scale vertically). Digital was/is cheaper. But, as my electronics professor put it, "every circuit is analog at some level".
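
A rough way to see the scaling (a sketch, not anyone's production code): each extra bit doubles the number of representable levels, so digital resolution grows exponentially with register width, while an analog stage is pinned to the tolerance of its worst component.

    # Levels and ideal quantization SNR versus word width
    for bits in (8, 12, 16, 24):
        levels = 2 ** bits
        snr_db = 6.02 * bits + 1.76   # ideal SNR of an N-bit converter
        print(f"{bits:2d} bits -> {levels:>10,d} levels, ~{snr_db:.0f} dB")

    # An analog stage built from 1% parts stays near 1 part in 100,
    # no matter how many of those parts you add.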


The big problem is not precision in the abstract; the big problem is that analog is just about impossible to replicate reliably en masse. You could build 1000 digital circuits and they'd all come out and operate exactly identically, or not at all. Your analog circuits are going to come out in a bell curve roughly between unusable and perfect, with component variation and parts count as ingredients. You can mitigate this to some extent by very careful design and pre-assembly component selection, but not all the way.
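
That bell curve shows up even in something as simple as a resistive divider; a quick Monte Carlo sketch (hypothetical 5% parts, Python):

    import random

    def divider_ratio(r1_nom=10e3, r2_nom=10e3, tol=0.05):
        """One build of a nominally 2:1 divider with +/-tol resistors."""
        r1 = r1_nom * (1 + random.uniform(-tol, tol))
        r2 = r2_nom * (1 + random.uniform(-tol, tol))
        return r2 / (r1 + r2)   # ideal ratio is 0.5

    ratios = [divider_ratio() for _ in range(10_000)]
    worst = max(abs(r - 0.5) for r in ratios)
    print(f"worst error over 10,000 builds: {worst / 0.5 * 100:.1f}% of nominal")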

The way to deal with that in the analog world is an expensive process called 'trimming', where you use adjustable components to get you where you want to be. But trimming makes the circuitry sensitive to being bounced around, and it makes it more sensitive to temperature changes if you use the cheap version. The expensive version (digital circuitry masquerading as trimmable analog components) puts the cart before the horse: if you wanted analog, you're no longer getting it (but some kind of quantized version of analog).

For some circuits this is all more important than for others.

Finally, digital reduces the component count and so increases reliability and decreases cost. Software is hard to make and expensive, but once you have it the marginal cost of reproduction is next to zero.

Digital, once the bugs are worked out, has none of these issues and so lends itself much more to economic mass production.


I'd go even further and say that you can increase the complexity of digital systems exponentially, and create functionality that just can't even be imagined in the analog realm.

But another feature, not inherent to all digital circuits but certainly prevalent, is the ability to reprogram them. For me to change an analog circuit might take hours with a soldering iron. You can load a whole new "circuit" into a digital system at the push of a button. This in turn changes how products are designed. Analog functionality is used sparingly where needed, and gotten working early in a project. The digital system is left with some flexibility, e.g., allowing changes to be made later in the project without drastic hardware revision. The mantra of the analog team, at the 11th hour of the project, is: "Can we fix this in the firmware?"


General purpose digital computation is often power limited (either by the supply energy or the thermals of the device), and there is an (academic) resurgence of analog computation on data prior to the digital blocks to reduce the power footprint of a part. Most applications of this are RF. Conditioning signals, as you say, isn't just band-limiting and shifting to baseband these days, and it won't be in the future.


This is a problem. Need a solution? Slap a micro on it. It's a shortcut, but not always the best solution. I've seen circuits using a cheap micro, yes, but someone had to program it, and a simple analog circuit would have done nicely: cheaper, possibly better, and more reliable.

The notion that every problem is solved with a microprocessor shows no thought being applied to the solution, not brilliance by some engineer.


> If you liked H&H, you might also like: Building Electro-Optical Systems: Making It all Work, by Phil Hobbs.

+1, definitely an underrated book. It applies to a lot more than just EO systems.


See https://en.wikipedia.org/wiki/The_Art_of_Electronics#Third_e... for all the pre-announcements since 2006. This isn't the first announcement. Back in 2009 or so Amazon was taking pre-orders.

It really is an excellent book. It was written as a guide for grad students who need to build instrumentation. A 25 year old book on electronic design that talks about specific IC part numbers in detail is way out of date, though. The approach used needs a refresh every five years or so.

It's less necessary than it used to be. We have so many on-line resources for building electronics now.


Interestingly, a lot of the parts discussed are still in wide use today. Many of the concepts are necessary for understanding circuits and how they work.


Some of the parts discussed are probably in wide use because they are mentioned in AoE. The uA741 effect.


That was basically the only opamp we were allowed to use in my undergraduate electronics for Physicists classes.


It's a terrible opamp for precision measurement. (Actually it's a terrible opamp for most things, except maybe student experiments.)

I have mixed feelings about AoE.

On the one hand it's funny and a good broad introduction to a lot of relevant concepts.

On the other it glosses over so much of the math you need to learn to be competent at design that it doesn't teach you nearly as much as it seems to.

When I first read it as an electronic engineering undergrad I thought this was a very good thing, especially compared to the standard textbooks that buried you in equations.

A few decades later I'm not so convinced. I think a good upgrade would be a series of much bigger books (or Wiki pages?) that keep the breezy hands-on style but also dig fearlessly into the details.


But it's an intro book. No single text can cover the introduction for the range from people with no background to people designing cutting-edge circuitry. At some point you need a BS or MS in EE of course, but at that point you're out of the realm of AoE.

I heard one guy I used to work with criticize AoE for having an 'incorrect' circuit (I think it was some kind of differential amp). It turns out that this guy had an MS in EE and was designing a low-noise amplifier to work in the GHz range, and he was complaining that the AoE circuit diagram, which was probably only meant for up to a few MHz tops, didn't account for some source of thermodynamic noise in the silicon transistors. Or something like that, long ago. Point is, if you're at that level of expertise you should not be using an intro book. The fact this guy even thought to use AoE speaks volumes about just how useful it really is.


I'm making my way through the 2nd edition now. As a software guy who occasionally writes close-to-the-metal firmware, I wish I'd read the book a lot earlier.

It's fun to wire up a few transistors and other components and see them behave the way your calculations said they would.

It's also fun to be talking to hardware engineers about firmware, and then drop into a discussion about the circuit at hand. Sometimes it makes their eyes bug out. :-) Sometimes you can even save money by suggesting a different approach that software can make better use of.


For the uninitiated this is considered, by many, to be the electronics bible. Sort of a TAOCP for electronics and circuits. The last version came out in 1989 and we're finally getting an update this year.


> Sort of a TAOCP for electronics and circuits.

Except people actually read The Art of Electronics :)


The smartest man I know gave me his copy, signed by Horowitz, and told me "Figure this out, and you're a double E". Four and a half years later, I was able to return it upon graduation. (Yeah, I took an extra semester...)

On a tiny shelf behind his tiny desk in his huge lab were a bible and a copy of AoE.


Well there is another $120 pre-allocated :-).

I hope they make an electronic version, although I would be OK with buying a second copy and getting it scanned if they don't.


Just FYI, in case it's not well-known, there's now an electronic version of the 2nd edition for Kindle. It's page-scanned (something Amazon used to be picky about because of file size), so the figures are nice and crisp.

http://www.amazon.com/Art-Electronics-Paul-Horowitz-ebook/dp...


Me too. All the engineering books I use week-to-week are PDFs in a directory on my computer. The dead tree versions (including AoE) sit on the shelf unused.

Make sure to click the "I want this title to be available as an ebook" link on that page.


And my jaded self would think the digital copy would be some DRM pile of crap.

I've dealt with "school books, but online" before. I've seen things like 'book explodes after 180 days' combined with 'can only print 10 pages'.

I'll scan it myself or download it from someone who has. I'm just done with DRM. Never again.


Agree 100%, none of my engineering ebooks are DRMed. For some of them, I bought the text and then immediately pirated it off thepiratebay (rip) with a clear conscience.

Seriously, screw DRM.


Seriously. I like to study textbooks on the train, but can't carry much more than a tablet and laptop. I'd pay for the hardback and more, just for an eBook. Preferably in a format that lets me highlight and take notes.


Can someone with no electronics background at all pick up this book? If not, what would be a good prerequisite book?


Yes, that's basically the purpose of this book. Though, amazingly, experts will also learn many things and find it useful too.

I took the Physics 123 class at Harvard, from whose original course notes this book developed, and which uses it as the textbook. (The class is still taught by Horowitz, who has encyclopedic knowledge of electronics, and Hayes, who replaced Hill when he left Harvard. These guys also wrote the accompanying lab manual, which I also highly recommend.) You had all sorts of non-scientific people in the class.

To give some context, one semester is split into two halves. The first half covers analog electronics and ends with a lab where the whole class designed and built a system to take an analog audio signal, pulse-width modulate it onto an IR transmitter, and broadcast it across the room to a receiver circuit that demodulates, amplifies, and plays it. Designed by students who originally didn't know what a resistor or a transistor was.

The second half is all digital, starts with glue logic, and ends with the building of a breadboard computer (the 68008, the 8-bit-external-bus flavor of the 68000). We'd write assembly programs into an EEPROM and program it to do all kinds of things. Again, students who had no idea what a NAND gate was and had never written a line of code.


I wonder if anyone knows if there is a comparable reference for mechanical engineering? Something that someone in a different discipline could use to get at least conversant in the subject. I'm an EE who has been interested in building more mechanical things (especially how to compute things mechanically), but some of my biggest problems are knowing what search terms to use when trying to look things up. For instance, I've been wanting something like a mechanical multiplier/mixer. There would be one rotating input shaft, two rotating output shafts (call them X and Y), and a "control" input which would direct the input shaft motion to the output shafts in proportion to the control. So if the control were at, say, the "zero" position, all the input motion would be transmitted to the X shaft, and Y would be motionless. When the control was at the "one" position, all the motion would be transmitted to the Y shaft, and X would be stationary. The control would be continuously variable, and it would be great if it also performed the sinusoidal conversion at the same time, essentially:

    X = Input * sin(control)
    Y = Input * cos(control)
...I imagine something like this must exist, and deriving it from first principles probably isn't exceedingly difficult, but knowing what it is called is another matter.

Getting back to the topic at hand, I suppose the book I'm dreaming of would cover things like gears/pulleys and their common/interesting combinations (like a differential), springs, thermodynamics, hydraulics, linkages, heat engines, etc.. And as long as I'm dreaming if there are also similar texts for chemistry and cellular biology/DNA/genetic engineering, I'd also purchase those.


Thanks for that. I am getting into Arduino and was looking for a good textbook. I have basic electronics from my engineering ONC and need a slightly more up-to-date book than the principles-of-wireless one that I inherited from my dad - pre-transistor age, and it still refers to capacitors as condensers :-)


Yes, but... this is a tome. A bible. It covers everything. It's not a hold-your-hand, electronics-for-dummies type of book. You will learn from it, and never stop learning from it. I would recommend getting the student handbook to go along with it if you're truly starting from no electronics knowledge. The handbook has more examples and tutorial-type stuff in it.


This is offtopic, but I need to ask this question and I fear I may lose the opportunity to field it to this audience of experts. I'm interested in two things and am wondering if this book, or something else, would be of use to me. Assume I have a strong math background, and assume I understand Maxwell's equations. What book/resource provides a good understanding of the approximations (from first principles) needed to provide a reasonable simulation of, say, a (layered) circuit board and increasingly high-frequency signals? How about if you went more complex and wanted to understand (and again simulate in software), say, the behavior of 28nm CMOS circuits? Any help would be appreciated. This is more of a longer-term/backburner project for me, but I'd love to understand this aspect of hardware.


Have a look for resources on the finite-difference time-domain (FDTD) method for electromagnetic simulation. Also in use are the finite element method (FEM) or the method of moments (MOM). Wikipedia has a decent-ish rundown https://en.wikipedia.org/wiki/Computational_electromagnetics.
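
To get a feel for what FDTD is actually doing, here is a minimal 1-D free-space sketch (leapfrog E/H update in normalized units with a Gaussian source; my own toy example, not from any of those packages):

    import numpy as np

    n, steps = 200, 100
    ez = np.zeros(n)   # electric field samples
    hy = np.zeros(n)   # magnetic field samples
    for t in range(steps):
        # leapfrog (Yee-style) updates, Courant number 1 in 1-D
        hy[:-1] += ez[1:] - ez[:-1]
        ez[1:]  += hy[1:] - hy[:-1]
        ez[n // 2] += np.exp(-0.5 * ((t - 30) / 8.0) ** 2)  # Gaussian soft source
    print("peak |Ez| after propagation:", np.abs(ez).max())

Real solvers add material properties, absorbing boundaries, and two or three dimensions, but the core update loop looks much like this.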

The Art of Electronics will not help you with simulation; it's a very good practical manual for analogue electronics design. It'll help you understand how a circuit works, but electrical engineers rarely need to solve Maxwell's equations when designing a circuit. It's not a bad idea to read through it, however; I can't see many situations where you'd immediately dive into EM simulation without understanding how the circuit works first.


For the first, probably some textbook on SPICE simulation and/or upper-undergraduate EE texts, perhaps verging into RF engineering expertise. I used to need to worry about these things at my old job at MIT Lincoln Lab, but my boss said estimating true parasitic inductances and capacitances is way more of an art than a science in real systems. What you should look up and be familiar with are 'stripline' and 'microstrip', which take advantage of the layering and ground planes to get transmission lines with controlled characteristic impedance.
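
For microstrip specifically, there's a commonly quoted closed-form approximation (the IPC-style formula; treat it as a rough first pass, since field solvers and vendor calculators do better):

    import math

    def microstrip_z0(w, h, t, er):
        """Approximate characteristic impedance (ohms) of surface microstrip.
        w: trace width, h: height above ground plane, t: copper thickness
        (all in the same units); er: substrate relative permittivity."""
        return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h / (0.8 * w + t))

    # e.g. a 0.3 mm trace over 0.2 mm of FR-4 (er ~ 4.3) comes out near 50 ohms
    print(f"{microstrip_z0(w=0.3, h=0.2, t=0.035, er=4.3):.1f} ohms")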

For the 2nd, you really need to get into solid state physics if you want to understand the details of CMOS, which entails some requisite knowledge of both quantum mechanics and statistical mechanics/thermodynamics. Maxwell's equations alone won't cut it.


I can only direct you to the simulation package we used at university; I have no idea if it is the best, but it was fairly good during the time of my studies.

http://www.sonnetsoftware.com/


Thanks everyone for the responses. It seems that, for the depth I'm interested in, this will require deep study on my part rather than just spare brain cycles. Here's the thinking that prompted this; sorry for the brain dump and lack of links, these are startup-ish ideas.

In business you have either the forces of disintermediation/vertical integration, or specialization. We've moved from "real men have foundries" to fabless, with consolidation of foundries. I think of ARM as a software company. I am a software person. The goal of software is to turn hardware X into X-as-a-service, à la Amazon AWS. Part of the deal with XaaS is compromise. What you lose in specialization you gain in flexibility; you shift capex into opex and promote venture capital.

Okay, so what?

What is post-ARM? There are two opposing forces in modern compute. On the one hand you have Amazon with Annapurna, and Google with its custom switches, consolidating the datacenter into a service, a really big black box with an internet-facing API. On the other hand you have the free-wheeling world of IoT, with all the big players trying desperately to create walled gardens. My thesis: this won't work for IoT.

What does this mean for future hardware? Well, think of what it would mean to turn something into an XaaS. Consider the foundry as a VHDL/Verilog-to-silicon service. Simplify the frontend and the backend, i.e. limit/streamline the HDL and the output geometry (TSSOP/BGA/etc.), in order to increase yield. The software would look like mix-and-match: pick a core (ARM/MIPS/lowRISC), pick an on-chip bus, pick SRAM, pick a memory controller (hey, cool, you're dropping the DRAM controller for several NAND controllers and a small on-chip flash), pick accelerators, ethernet, etc. Of course this is essentially already the case in hardware, except for the still-large upfront capital expense.

But consider something else for IoT, something for the future electrician/carpenter/plumber. That is, change the customer from the end user to experts in the trades or proficient DIYers. To do this, a startup would specify the physical/mechanical/electrical/thermal/acoustic/etc. properties of modules, be it li-fi, smart sockets, servos, etc., and make software that lets these trades experts mix and match them into devices that solve problems. For example, someone who installs blinds could put together modules to build something that automatically controls the blinds. To make this work requires some serious cross-disciplinary thinking. The backend would have these modules fully and openly specified, with factories anywhere in the world competing to build them. The frontend would be educating and marketing to tradespeople as a way to make their practices more lucrative and increase their fees. The goal would be this software with lots of pre-designed module combinations, and tips and tricks for "blinging" up your home. To begin, the startup would have to design and manufacture its own modules to show viability, and then drop out of the picture and earn money on royalties collected from being interoperable with their design software.

Yes, if only I had a few megabucks lying around... :)


We have gone fabless, but this hasn't altered the capital equation as much for hardware as one might hope. Chips tend to become 'pad limited' and 'dissipation limited', so there is a limit on what can be done in a certain volume. Some of the more creative stuff isn't about chips, it's about interconnects. Which gets us to #2.

The "Internet of Things" isn't really an Internet, the disruption is that everything is a network and yes, Sun had it correct when they said the network is the computer. What you have is a collection of agents which cooperate to achieve a commanded objective. No one cares about 'smart dust' what they care about is the transformative aspects of real time contextual data. The IoT is about creating adapters which convert ambient information into data that can be collectively consumed and processed by computers. A billion barometers on smartphones taking samples of the pressure where they are, combining that with a set of GPS coordinates and transforming ambient data (air pressure at a known point) into a consumable dataset. Which when observed over time can inform on larger processes such as weather fronts. None of that needs "new chips" but it can benefit from easier assembly of existing capabilities.


If you threw out an off-topic post and actually put contact information in your profile, and I had something to add, I would email you and nobody here would care (although your off-topic post might get downvoted into oblivion). But submitting an off-topic post with no way to contact you is not very useful.


"with no way to contact you"

Why doesn't the "reply" button suffice? Other people may be interested in the answers.


I've made changes to my profile; I hope they're useful. I also voted you up to counter the downvote. Sorry, but it's true: others could be interested in what you have to say.


It is a fantastic book. I'd also like to recommend Practical Electronics for Inventors by Paul Scherz and Simon Monk (http://www.amazon.com/Practical-Electronics-Inventors-Paul-S...).

I'm in the crowd that's getting into analog electronics and digital circuits backwards, as it were, by starting with programmable microcontrollers (Arduinos etc.) and moving outward from them on the circuit board. I found the latter book was particularly well suited for self-learning. It is also huge and, as far as I can tell, vastly comprehensive. The writing is clear and concise. Explanations of concepts often draw on analogy, classical electrical theory, and quantum physics alike. This multitude of approaches has helped me grasp the fundamentals more firmly than other books. It also keeps an eye on practical applications. Sections on, say, power rectifiers or op amps or timers or debouncing circuits or whatever all show you many variations on a theme, with discussion of what you would want to use in which situations.

Also, if I'm speaking to anyone else like me, software engineers who want to know hardware, buy all the books you can, but get an oscilloscope. I waited far too long for this purchase. I wouldn't write code without a debugger; this is the hardware equivalent. I recently got this little Rigol model: https://www.adafruit.com/products/681 . It costs the same as a few big electronics books, and it's the difference between stumbling around a room in the dark and having illumination everywhere.


The Art of Electronics, combined with the Student Workbook are excellent books to teach yourself electronics.

I guess today you'd use electronic simulations on a computer rather than real components on breadboards with real test equipment. Perhaps some HNer is involved with this type of simulation software? It might be worth creating "labs" that can be used while reading tAoE?


I think the best analogy I can give for doing electronics labs online is the way Cisco certification tests were provided in a very limited simulation, presumably using regexes to see if you got it "right" (at least this is how it was a decade ago), vs. getting time at an actual router command line. It's a different experience to do something real vs. a simulation with pre-determined conclusions and situations.

I'm struggling to think of a programming analogy, comparing getting a real live REPL to getting some kind of not-REPL training environment. I don't think there really is anything that bad in all of program writing.

What simulation is really good at is optimization. It's a useful skill, but not the only one. Maybe a good programming analogy would be ripping all the "write a program" assignments out of a CS curriculum and replacing them all with profiler exercises. So rather than writing your own bubblesort and quicksort, you'd just run a profiler on someone else's sorting libraries and compare the numerical results to get the expected result from the book.

A really good car analogy: I'm old enough that when we did Drivers Ed we had a simulation where driving scenes were projected on a screen and we optimistically pretended to drive a car, vs. the behind-the-wheel section where we actually drove a real car around. It's kind of useful, kind of, but I don't think you can really learn to drive a car by watching carefully crafted movies of someone else driving.


I studied the Cisco networking materials 5-10 years ago but never actually got round to taking the tests. Dynamips was a router hardware emulator for the 1700, 2600, 3600, 3700, and 7200 hardware platforms that booted up actual Cisco IOS images.

With GNS3, it was possible to set up complex network topologies of dynamips routers that would have cost thousands of dollars in real hardware without losing any of the realism as these were essentially virtualised routers.

I haven't kept up with the progress here but Cisco certification may not be the best example.


This is a book that they should introduce freshman year to all ee undergraduates. For some reason, the traditional first-year curriculum mandates the same high school calculus and physics that bored the better students in their teen years. But in my not so humble opinion, this is the type of stuff they should start out with.


I have an MS in CSCI and a PhD in cognitive psych, but know nothing at all about electronics. Is this a reasonable first book? I ask because somebody compared it to TAOCP, which I would not recommend to somebody with zero CSCI/programming skills.

EDIT: thanks for the feedback, I'll look elsewhere to get my feet wet.


There is a lot of good introductory type material, but the book treats many slightly advanced topics as though they are casual, and skips around topics plenty. An example would be how quickly (within the 1st chapter) Thevenin circuits come up. I remember taking an entire semester to get to these in my EE coursework.

Many may say otherwise, but I would argue that this is a better reference than it is a book to learn electronics from the ground up, but that doesn't mean that you can't use it as a guide as you learn and look elsewhere for missing info.


I disagree with the below and would recommend this book.

See my other comment on this thread for more details. But basically I took Physics 123 at Harvard, and AoE sprang from the professor's lecture notes. When I took the class in the late 90s, it was taught by Horowitz and Hayes, as Hill had left previously to do his own thing.

The class had many people from non-hard-science disciplines who didn't know anything about electronics. I specifically remember a couple of psych students. They all did great; see my comment above on what we built at the ends of the analog and digital halves of a one-semester course.

The student lab manual written by Horowitz and Hayes, from which we did our class labs, also gives great context and a hands-on plan for learning the material.

I also knew a few artists in the Boston scene that used this book to build actual electronic items for performance art shows, including robo-mechanical drum machines and analog synths.


If you are looking to tinker on projects, kits, etc., other tutorials/books may get you the basic starter info faster. You don't need a deep knowledge to just start playing around.

However, if you are looking to gain a serious academic level of familiarity w/ electronics to the point where you can design, build and analyze your own functional electronic systems, then this is the book. It will give you the foundation you need to build any analog, digital, or hybrid system.

Some fun examples of things we built when I took the class: AM/FM radio, microphones, audio speakers, amplifiers, analog to digital converters, and a breadboard computer built and programmed entirely from basic component parts.


Getting Started in Electronics by Forrest Mims III.

It's got happy little pictures of electrons. :)


No, don't start with this book. Start with something focused on beginners like Make: Electronics by Charles Platt.


AoE is comparable to TAOCP only in legendary status. As a point of reference, AoE is the standard textbook for teaching practical electronics to undergraduate physics students at many universities (e.g. UC Berkeley).


Hope it's for real this time. It was supposed to be out in early 2014.

http://www.eevblog.com/forum/chat/the-art-of-electronics-3rd...


Sounds like it. Win Hill mentioned on Usenet a few weeks ago that it was really done and in the hands of the publisher.


I can't wait to get my hands on it.


Looks like Adafruit will have the book in its shop too: https://www.adafruit.com/products/2356 You can see some better pics of it there.


One of the coolest experiences I had in grad school at MIT was taking professor Horowitz's Physics 123 class at Harvard, and reading this book cover to cover in a semester. Pain!

FYI, AoE covers BOTH analog and digital circuits in great detail.

You don't need to know how a computer works at that level of detail in order to program it. But having started my education with high-level languages like Java, C, C++, etc., it was fun to work back down to voltages, capacitors, buses, clocks, EPROMs, transistors, etc., and see how it all comes together end-to-end.


If they mess with transistor man, riot.


I own the previous edition of this book.

What are good resources for learning electronics online? Are there online schools with tutorials, like Codeschool and Codeacademy, that are good and that you can recommend?


There are usually a few electronics-related courses on Coursera. I haven't done any of them yet, but I'm satisfied with the quality of their other courses. Check the "engineering" category here:

https://www.coursera.org/courses?languages=en&categories=ee


I was impressed by https://6002x.mitx.mit.edu/


Never heard of this but the comments here are piquing my interest.

I'm currently reading Digital Computer Electronics (1977 edition, current here: http://www.amazon.com/Digital-Computer-Electronics-Albert-Ma...).

Would others agree this might be a nice follow up for me, especially if I'm looking for more current material now that I (am starting to) get the fundamentals?


Indeed. The 2nd edition companion Student Manual by Tom Hayes (which will also be updated) used in Harvard's class covers building a functioning computer 'from scratch'. Lots of practical labs and discussion in it.

Also, the AoE 3rd edition has a new chapter on microcontrollers.

I'll see if it's OK to post a table of contents of the new book here.


Best electronics book ever... Updated, but even the older versions contain relevant theory (though I guess when the 74xx parts are obsolete that would change).


Indeed - the second edition was very helpful for me (an embedded SW/crypto person by day doing basic electronics and ham radio as a hobby) during the early 00s.

I didn't see a chapter list. I wonder whether FPGA technology will be touched on at all? Some of the modern small FPGAs are incredibly useful when paired with a microcontroller or the newer uc++ boards like the Edison. Then again, so many new toys are available (DDS comes to mind for radio) that I imagine a lot of newer tech is out of scope for a book that focuses on the fundamentals. There's always the ARRL books :)


By far the best book written on the topic. I would argue it has all the content of a Computer Engineering degree in one book.


I always wanted a copy of this when it was recommended on my Comp Sys Engineering B.Eng but never could afford it at the time.

I'm tempted to buy a copy now even though the closest I get to electronics these days is configuring my router.

//edit//$120! It was £20 when it was recommended on my course in 1990 (IIRC).


Inflation since 1990: a little more than 2x. Exchange rate: about 1.5x. So £20 in 1990 is something like $60 now.

So it's gone up by about a factor of 2. That's fairly sad, but it's not as spectacular as the 20:120 ratio your comment suggests...


Well, it looks like it's been enhanced quite a bit. Not saying it's not worth it, but it has jumped out of the "impulse buy" category.


Random question that's been bothering me (since I might have the attention of electronics experts right now)

If I charge a simple parallel plate capacitor and then pull the plates apart, how does that affect the potential energy stored? Would it be hard to pull apart?


Let's set aside equations and think about this conceptually. To be clear about the question here, I'm assuming that you have put a fixed charge onto each plate (equal and opposite) and then isolated them (so the charges remain constant). Here are the key ideas that are significant for this:

* Assuming the separation between the plates is always small compared to their diameter/size, parallel plate capacitors give rise to a very simple electric field pattern: it is essentially zero everywhere outside, and it is uniform throughout the region between the plates with a strength that is independent of the separation. (Technically, it depends only on the area charge density: charge per unit area. All of this can be deduced from the electric field pattern of an infinite charged plane.)

* Capacitors store their energy within that electric field between the plates. All electric fields carry an energy density proportional to field strength squared, so the total energy stored in a uniform field is proportional to the volume occupied by that field.

So for your question, I can combine these ideas to recognize that pulling the plates apart will result in a larger volume between the plates, and therefore it must result in more stored energy (because the strength of the field in between stays the same as you pull). That energy has to come from somewhere, so I can deduce that I would have to put energy into the system while pulling: it would indeed be hard to pull apart. (The level of "hard" would depend on a lot of factors.)

(If you pull far enough so the separation isn't small compared to the plates' diameter anymore, you'll pretty quickly reach a case where you can approximate each plate as a point charge. Pulling those opposite charges apart clearly requires work, too, though the force gets smaller as they get farther apart.)
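
To put rough numbers on the constant-charge case: with U = Q^2 / 2C and C = eps0 * A / d, the stored energy grows linearly with separation. A quick sketch (hypothetical plate size and charge):

    eps0 = 8.854e-12   # F/m
    area = 0.01        # 10 cm x 10 cm plates, in m^2
    q = 1e-7           # fixed charge in coulombs (plates isolated)

    for d in (1e-3, 2e-3, 4e-3):        # separation in meters
        c = eps0 * area / d             # parallel-plate capacitance
        u = q ** 2 / (2 * c)            # stored energy at constant charge
        print(f"d = {d*1e3:.0f} mm: C = {c*1e12:.1f} pF, U = {u*1e6:.1f} uJ")

Doubling the separation halves C and doubles the stored energy, which is exactly the work you put in by pulling.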


Mind you, you might instead keep the voltage constant (rather than the charges), perhaps by keeping both plates plugged in to a battery the whole time. In that case, the story changes:

* Voltage is (morally) equal to electric field times distance, so here if the separation between the plates doubles then the electric field must be cut in half.

* The plates' area is fixed, so the volume between them is proportional to their separation.

* The same fact from before about energy density being proportional to electric field strength squared applies, and total energy is still energy density times the volume between the plates.

So combining these ideas, we can see that the total energy stored between the plates will wind up decreasing as the plates are pulled apart, because the decreasing field winds up being squared when finding the total energy.

That seems very strange! Opposite charges attract, after all, so you'd still expect that you would need to do work to pull the plates apart. The subtlety here is that the battery's stored energy is changing in this process as well. (Let's assume a rechargeable battery for the moment.) As the plates separate, the charge on each plate goes down in proportion to the reduced electric field in between, so there's suddenly a lot of excess charge that needs to go somewhere. That means that the extra positive charges will be forced back into the battery's + side and the extra negative charges will be forced back into its - side. And that process stores additional energy. I haven't done the calculation, but I assume that this increase in energy will more than compensate for the decrease of energy in the capacitor itself (in exactly the right proportion to allow for the work of pulling the plates apart).
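
Same hypothetical plates, but held at a fixed voltage instead (U = C V^2 / 2), just to see the numbers go the other way; the capacitor's energy drop, plus the work you do pulling, is what winds up back in the battery:

    eps0, area, v = 8.854e-12, 0.01, 100.0   # same plates, held at 100 V

    for d in (1e-3, 2e-3, 4e-3):             # separation in meters
        c = eps0 * area / d
        q = c * v                            # now the charge is what changes
        u = 0.5 * c * v ** 2                 # energy stored in the capacitor
        print(f"d = {d*1e3:.0f} mm: Q = {q*1e9:.2f} nC, U = {u*1e6:.2f} uJ")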


Thanks. This is starting to make sense. So I guess if you connect the two plates with a conductor after they were pulled apart, the increased energy would take the form of higher voltage?


That's right: the voltage in this case will be proportional to the separation between the plates, so if you discharged them across a light bulb it would (briefly!) glow much brighter if you pulled the plates apart first.


(Also what's the best way to learn the formulas so I can figure out these kind of questions for myself?) Buying an advanced Physics textbook seems too daunting.


As someone who regularly assigns the question you asked as homework (in an advanced physics course, mind you), let me put in a word of advice: don't think of what you're aiming for as "learn[ing] the formulas". In my experience, that mindset (which fits so very well in most high school science classes) is the single most common stumbling block for college-level physics students.

Equations and formulas exist purely in service to concepts. If you can't tell a story about "what's really going on" (qualitatively) without reference to the equations, then (in most cases) you probably shouldn't try to use the equations, either. I see student after student try to solve complicated problems via "equation hunting", where they just dig through their notes or the textbook looking for formulas that have the right variables in them, and then look for ways of combining them to find an answer. (Sometimes their thinking is a step more sophisticated than that, but it's a characteristic pattern.) Students start to become experts in physics once their mental model of the subject transforms from a jumbled pile of independent equations into a network of concepts with equations like little neurons binding them together.


Yes, that's an excellent point. So I guess the follow up question is how to learn the qualitative ideas behind electric fields?

I've got a lot of inventions I've thought of related to static charges, and I'm trying to basically figure out why they wouldn't work. Maybe I can email a couple over to you and you can point me to the right concepts? (email in my profile)


Alas, the trouble is that it's tremendously hard to learn (or teach!) the qualitative ideas without also learning the equations. (The only counterexample to that I've ever seen in physics is Feynman's amazing little book "QED", which isn't at all what you need here.) So I've got nothin' for ya. (And while I do enjoy conversations about this sort of thing, obviously, I can't possibly sustain one with my job these days: I'm already going to regret the time I've spent on this one. But it was fun.)


There's no substitute for actually working through the equations and math, but working with simulations can help drive concepts home. For example, PhET has a virtual capacitor lab (it's a Java applet).

http://phet.colorado.edu/en/simulation/capacitor-lab

If you have multivariable calculus under your belt, then don't be too afraid of jumping into something like Griffiths E&M.


The voltage wouldn't change, but the energy would decrease. The energy stored in a capacitor is a potential energy, which means that the energy depends on the orientation of the capacitor's plates. The decrease in energy is equal to the work required to pull the plates apart.


Intuitively it seems like the energy would increase because you're putting in work to pull them apart, no?

And you'd still have opposite charges on the two plates so there would always be some energy from that?


FYI Amazon appears to have it for $12 less, including shipping: http://www.amazon.com/The-Art-Electronics-Paul-Horowitz/dp/0...


Ah, do they mean April 1st?


Amazon says not until the 4th July in the UK.


I wonder if they're going to do the same for the lab manual? I have a copy of the 2nd edition, and there are instructions in it to mail a physical paper letter to addresses for parts.


I was lucky enough to find a copy of the 2nd ed. in the closet of an apartment I moved into during college. I really need to actually read it sometime!


I loved that book. Glad to see an update. It has much practical advice that you don't get in the classroom.


At last!


Can I pre-order at Amazon? I have the 2nd but would like to get the 3rd, preferably in e-book format though.

Update: it's on Amazon, just pre-ordered one copy for $108.


Can't wait for this. Time to step my electronics design game up.



