I've always wondered why most people seem to draw flip-flops with the crossed wires and both gates pointing the same way, when I think this representation makes it far clearer:
When both inputs are low, the NORs are equivalent to NOTs and you can see they form a storage loop. When one input is high, it forces the loop into the corresponding state.
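That storage-loop behavior is easy to sketch in code. Here's a minimal, hypothetical model of the cross-coupled NOR latch (the function names are mine, not from the article): iterate the feedback loop until the two NOR outputs settle.

```python
# Toy model of a cross-coupled NOR SR latch (illustrative names).
def nor(a, b):
    return int(not (a or b))

def sr_latch_step(s, r, q, qn):
    """Iterate the feedback loop until the two NOR outputs settle."""
    for _ in range(4):  # a few passes is enough to converge
        q_new = nor(r, qn)
        qn_new = nor(s, q)
        if (q_new, qn_new) == (q, qn):
            break
        q, qn = q_new, qn_new
    return q, qn

# Set: S=1 forces Q high.
q, qn = sr_latch_step(1, 0, 0, 1)
assert (q, qn) == (1, 0)
# Both inputs low: the NORs act as NOTs and the loop holds its state.
q, qn = sr_latch_step(0, 0, q, qn)
assert (q, qn) == (1, 0)
# Reset: R=1 forces Q low.
q, qn = sr_latch_step(0, 1, q, qn)
assert (q, qn) == (0, 1)
```

The assertions walk the same set/hold/reset sequence described above.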
That said, I'm disappointed this article doesn't show the transistor-level schematic, because trying to read from a breadboard is extremely difficult.
The reason it is not usually drawn that way is that drawing logic gates 'backward' on a logic schematic is considered a cardinal sin. It is the schematic equivalent of 'goto' and makes schematics confusing to follow. For a single inverter in a flip-flop it isn't so bad, but when you are inside a larger schematic it is better to stick with the best practice of signals flowing left to right for all symbols.
PS - For what it's worth, that image has a transparent background, so when the imgur viewer displays it you get black traces on a black background.
The whole point is that the signals can't all flow left to right in any reasonably nontrivial design, because the latter will almost always have feedback, and at its fundamental level that is how static memory works.
Even if you draw both gates facing the same way, there is feedback and you still need to follow the signals the other way; but instead of simply turning one gate in the direction its output is actually going, and showing that structure more clearly, you introduce the extra ugliness and confusion of crossing signals.
Sorry I may have spoken inaccurately. Wires can carry signals right-to-left (as you mentioned - this is necessary in any circuit containing feedback) but the _symbols_ should be drawn left-to-right in a digital logic schematic.
Certainly there are different fields that follow different rules, for example in schematic representation of feedback systems the feedback blocks are often drawn right-to-left. They get away with it because their schematics are generally much simpler--usually a dozen or so blocks, compared to hundreds or thousands in a nontrivial digital circuit schematic.
Also - I am not sure I understand your comment about 'extra ugliness and confusion of crossing signals'. Flipping the inverter backwards does nothing to remove the signal cross, it just moves the cross outside of the region you showed. Note how one of the inputs to your flipflop is now on the right hand side--in most cases the crossing will reappear when you connect the rest of your circuit.
Your image appears as black on black, you might want to re-upload it or edit the URL. I could only see it by dragging it against a non-black background.
Because if you have an input on the right and you combine this with other digital logic you are going to end up with crossed wires backtracking to that input on the right somewhere, I would guess.
I once made a single-transistor latch by accident. It acted as a single bit of memory and retained its value for weeks until I got bored with the project.
I had been making magnetic snap-together circuits, so I had a bunch of small PCBs with simple 2- and 3-pin footprints and holes that I soldered neodymium disc magnets into.
I put a big TO-220 N-fet on one of them, and stuck it to a laminated whiteboard so that the magnets stuck without shorting together, then I hooked it up to an LED as a simple high-side switch.
When I bent the transistor so that its metal plane rested against the magnetic whiteboard, its gate would latch after briefly tapping either V+ or ground to the magnet which was connected to the pin. When the transistor's metal plane was perpendicular to the board, it didn't latch. Disconnecting and reconnecting the LED didn't perturb the 'saved' value, and neither did removing power overnight. And the same thing happened with a similar P-fet connected as a low-side switch.
It probably wasn't a "real" latch; it was a very over-sized transistor with low gate capacitance, and I didn't try it with something like a 3904. I think it might have had something to do with the principles behind nonvolatile ferroelectric RAM, but I never did get to the bottom of it.
I think you just made a single bit of DRAM. It's surprising how long charges can stay around given suitably dry climate and insulators, and LEDs don't require a lot of power to light up either. A TO-220 package suggests a power transistor, so it will have substantially more gate capacitance than a typical logic-level one.
FWIW I'm told that decades ago latches were implemented as a tristate driver followed by an inverter or buffer. The source & drain cap, along with gate & wire cap, acted as the memory.
An even more interesting exercise would be to implement a DDR5 driver circuit for the 1 bit of RAM. Typical DDR5 interfaces take hundreds to thousands of lines of Verilog/VHDL, so quite a few transistors will be needed.
In uni, I learned digital logic from Brown and Vranesic[1] who explicitly differentiate gated latches from flip-flops, the latter being defined as:
> A flip-flop is a storage element based on the gated latch principle, which can have its output state changed only on the edge of the controlling clock signal.
I also just pulled out my Fairchild Pocket Designer Guide (published circa 1985; inherited from a retired former colleague), which explicitly differentiates 74/54 series flip-flops from latches in both sectioning and symbology. So there's at least 35+ years of industry convention without even citing a standard.
To cite one industry standard, from ANSI/IEEE Std 91-1984[2] § 4.2.1:
> Cm should be used to identify an input that produces action, for example, the edge-triggered clock of a bistable circuit or the level-operated data enable of a transparent latch
...or from § 5.9:
> The symbol for a bistable element (for example, a flip-flop) does not contain a general qualifying symbol. ... When a bistable element is controlled by a C input (Symbol 4.3.7-1) it is necessary to indicate whether this element is a latch, or an edge-triggered, pulse-triggered, or data-lock-out bistable.
In fact, symbol 5.9-2 labeled "D-type latch, dual / Part of SN7475" is distinct from symbol 5.9-7 labeled "Edge-triggered D-type bistable / Part of SN7474"...the former being what the blog discusses.
You're correct. A flip-flop responds to an edge, a latch responds to a level.
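The edge-vs-level distinction is easy to see side by side. Here's an illustrative sketch (my own class names, not from any cited standard): a transparent D latch passes D through while its enable is high, while an edge-triggered D flip-flop samples D only on the rising edge of the clock.

```python
# Transparent D latch: level-sensitive.
class DLatch:
    def __init__(self):
        self.q = 0
    def step(self, en, d):
        if en:          # transparent while the enable level is high
            self.q = d
        return self.q

# Edge-triggered D flip-flop: samples only on the rising edge.
class DFlipFlop:
    def __init__(self):
        self.q = 0
        self.prev_clk = 0
    def step(self, clk, d):
        if clk and not self.prev_clk:  # rising edge only
            self.q = d
        self.prev_clk = clk
        return self.q

latch, ff = DLatch(), DFlipFlop()
latch.step(1, 1); ff.step(1, 1)   # enable high / rising edge: both capture 1
assert latch.step(1, 0) == 0      # latch is transparent: follows D back down
assert ff.step(1, 0) == 1         # FF ignores D until the next rising edge
```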
However, let's go back more than 35+ years, to a simpler time. To the 1960s. To the dawn of the TTL era. Texas Instruments made a device called the 7473. It was a J-K Flip Flop. But it responded to a pulse, not an edge. Look at the function table in the datasheet: https://www.ti.com/lit/gpn/sn54ls73a
As a kid trying to teach myself TTL I never did understand WTF was going on. And this screwy behavior got fixed when TI did the 74LS73.
The data sheet makes clear the limitation, but either that text didn't exist back then, or I just didn't grok the significance of it. To wit: For these devices the J and K inputs must be stable while the clock is high.
So you're correct for at least the 35+ most recent years. :)
But there couldn't be rules without exceptions. :)
> However, let's go back more than 35+ years, to a simpler time. To the 1960s. To the dawn of the TTL era. Texas Instruments made a device called the 7473. It was a J-K Flip Flop. But it responded to a pulse, not an edge. Look at the function table in the datasheet: https://www.ti.com/lit/gpn/sn54ls73a
The 7473 next-state truth table in this datasheet is symbolically misleading; the specified timing constraints on p. 4 make it a lot clearer, and it's consistent with the IEEE Std 91 terminology cited above.
To be sure, the 7473 is indeed an edge-sensitive device; IEEE Std 91 refers to this as a pulse-triggered flip-flop--a.k.a. master-slave flip-flop--and the description on page 1 of the referenced datasheet corroborates this (my emphasis):
> J-K input is loaded into the master while the clock is high and transferred to the slave on the high-to-low transition.
In other words, the master stage's internal output is opaque, and the slave stage output Q/Qnot does not change until the falling edge, which is quite distinct from the output behavior of a latch.
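A toy model makes the two-stage behavior concrete. This is an idealized sketch of a master-slave JK under my own assumptions (it models the later "last value wins" behavior rather than the original 7473's ones-catching quirk, which is exactly why the original required stable J/K while the clock was high):

```python
# Idealized master-slave JK flip-flop: master loads while the clock is
# high; the slave (Q) updates only on the high-to-low transition.
class MasterSlaveJK:
    def __init__(self):
        self.master = 0
        self.q = 0
        self.prev_clk = 0

    def step(self, clk, j, k):
        if clk:  # master stage follows J/K while the clock is high
            if j and not k:
                self.master = 1
            elif k and not j:
                self.master = 0
            elif j and k:
                self.master = 1 - self.q  # toggle
        if self.prev_clk and not clk:     # falling edge: transfer to slave
            self.q = self.master
        self.prev_clk = clk
        return self.q

ff = MasterSlaveJK()
ff.step(1, 1, 0)     # clock high: master loads J=1
assert ff.q == 0     # slave output still unchanged ("opaque")
ff.step(0, 0, 0)     # high-to-low transition: slave takes the master's value
assert ff.q == 1
```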
> To wit: For these devices the J and K inputs must be stable while the clock is high.
Reading the datasheet further, the 73A variant apparently improved upon the original 73 design by allowing input changes after the rising edge (i.e., while the clock was high), so long as the specified t_su = 20 ns min setup time before the falling edge was satisfied. Also observe the 73A's 0 ns min hold time after the falling edge in conjunction with no min CLK low pulse duration; this clearly allows for much faster operating speeds by exploiting clocks with greater-than-50% duty cycles. In contrast, the 7473 was capped below 15 MHz = 1/(t_wh_min + t_wl_min) = 1/(20 ns + 47 ns) per the specified timing constraints.
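For anyone checking the arithmetic on that frequency cap, it's just the reciprocal of the two minimum pulse widths added together:

```python
# Max clock frequency for the original 7473, from its minimum
# clock-high and clock-low pulse widths (values quoted above).
t_wh_min = 20e-9   # min clock-high width, seconds
t_wl_min = 47e-9   # min clock-low width, seconds
f_max = 1 / (t_wh_min + t_wl_min)
assert 14.9e6 < f_max < 15e6   # just under 15 MHz
```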
P.S. Props for teaching yourself TTL as a kid. I recall my pops (who's in his 50s now, if that's any indication of my age) once tried explaining clocks to me as a "computer literate" teen and that went waaaay over my head at the time. The magical allure of it all ultimately led to the whole EE thing today.
Yes - I think that is what DRAM is at its core - it requires periodic refresh to keep the charge on that capacitor - versus SRAM, which is a more complex transistor-based latch approach (and also happens to be faster and less power-intensive, but more expensive per bit).
It's great to add relevant information, but in the future, could you please do it in a way that greets and expands on what you're replying to, rather than one-upping or putting it down? The two styles of responding have opposite effects on discussion: one opens it up for further exploration, while the second constricts it or closes it. In improv, that is called "blocking": https://improwiki.com/en/wiki/improv/blocking. You probably didn't mean it that way, but intent doesn't communicate itself. Since a comment's impact on future discussion is determined by how others hear it, the burden is on the commenter to disambiguate [1].
The value of an HN comment is its impact on current and future discussion, or (to put it in a pseudo-technical way) the expected value of the subthread it forms the root of [2]. I've been struggling for a way to explain this that doesn't sound smarmy (like "be nice" or "tone"), since it's not about being nice. It's about what leads to richer improvisation and curious conversation, which is what we're trying to optimize for here [3].
Edit: elsewhere in this thread are some great examples of opening-up responses:
If you ask yourself and sense into what kinds of responses such comments invite, you'll get the spirit of what we're going for. I don't mean you personally—I mean all of us. This is a community project.
Please see my response to the other comment and stop chasing me. Jesus Dang, it feels like you have something personal with me lately. If you want a date, just ask for it instead.
It isn't personal. I don't remember your username. There are too many to remember them all; it looks like I've posted hundreds of comments since the last time I replied to you.
but that's not what he's building, he's making the "gated D latch" from your reference - something that's much more useful in real-world circuits than the minimal 2 transistor flop that you call out
He said right at the end: "Here's an overview of the entire 1 bit of ram. You can reduce the size and squash them down or even put them on the prototyping board. Make eight of these to build a byte of RAM."
My approach halves the number of transistors for the same byte.
https://i.imgur.com/cwZe7Zf.png