I built a small multitasking kernel with a friend on a 68000 (m68000) in college. We implemented it on breadboards, with I think 30 or so feet of jumper wire [0]. I had very little prior embedded experience, so it was trial by fire.
It was a wonderful introduction to how kernels work (or at least concurrency and scheduling) at their most basic level, without having to deal with the complexity of virtual address spaces, memory protection, or the byzantine bring-up dance of register prodding that x86_64 needs. It prepared me well for my operating systems class the next year, and as far as I can tell, was the eye-catcher project that got me an internship on a team doing kernel development the following summer.
Very cool! Do note that the Wikipedia page is about the 6800 (Sixty-Eight Hundred), though, not the 68K. A generation earlier, 8-bit, far less popular in end-user applications than the 68K. Still a good processor.
I think the x86_64 chips are fairly unique in terms of how hilariously awful the bring-up process is. I'd hate to be a microcode engineer on one of them.
So much state. Then throw virtualization into the mix.
They’re only terrible because they needed to be backwards compatible with 32-bit x86 code. Once they’ve been bootstrapped into pure 64-bit mode, they’re a bit better.
The 8080 lacked this, and even though the 8086 was more or less based on the 8080 rather than the Z80, DJNZ probably inspired the 8086 LOOPNZ instruction. Same idea, hardcoded to use the CX register instead of B on the Z80. The unofficial nicknames for AX, BX, CX, and DX were accumulator, base, count, and data. CX held the count for all looping and string move instructions. Similarly, BX was special-purpose for indirect operations and AX/DX for certain logic and arithmetic instructions.
The IBM 360 had BCT back in 1964. Rumor had it that it was created for Fortran DO statements (“for-loops”), but IBM’s compiler never emitted it.
“The BCT instruction subtracts 1 from the value of the contents of the target register specified in the first argument. If the value in the target register after the subtraction is zero, no branch occurs. Otherwise the program branches to the specified address.”
DJNZ only does the decrement-and-branch-if-not-zero part. That's implicit in DBcc, which also first checks the condition and is effectively a NOP when it's true.
The 6809 had some hobbyist/consumer prominence on this side of the pond in the TRS-80 CoCo - and some European similars. And there was a multitasking OS written for it - OS-9.
OS-9 has an interesting history in its own right. It was ported to a wide range of subsequent architectures. All sorts of applications - Fairlight synths, Philips CD-i, most of the traffic lights in the US in the 80s and 90s to name a few.
The 6809 was used in a production gadget, the Vectrex, the only vector-graphic consumer videogame console. It came with a screen that worked like an oscilloscope -- no raster, just an electron beam sweeping along the line. So, no jaggies.
As it was necessarily monochrome, games came with clear plastic color overlays for the screen.
6809 was notable as the first 8-bit microprocessor with a multiply instruction: 8x8->16 bits.
The 6809 was an answer to the 6502. They needed something reasonably “better” than the 6502 to justify the price tag. Unfortunately for them, the 6502 and its variants were too entrenched at that point.
Based on your description, I expected to see a picture of 30 feet of breadboards filled with jumpers (https://en.wikipedia.org/wiki/Jumper_(computing)), maybe used as ROM or something. Your actual project was unfortunately much less ridiculous.
This article refers to the 6800 which isn't the same.
Also, that's a 68008 (with the 'simplified' 8-bit data bus and smaller address bus). Both somehow 8-bit :-P
I built a multitasking kernel for the 6809, which was an extended version of the 6800 rather than part of the 68000 series, which is quite a different kind of CPU. It was fun! I also created a double-sided PCB using laser printers and then ironing the layout onto the boards for photo etching.
There was also OS9 (Microware, not Apple). It was very Unix-like, but not fully POSIX, and very poorly documented. After initial development on the 6809, they released a port for 68K.
In college in NZ I wrote a simple compiler for the 6802 with some friends, it fit in 2k (just) .... we called ourselves "uSoft" (with a greek mu - but we were cross-compiling from cards, no room for a mu) .... the next year we heard of some jokers in the US who were using our name, and pffft! they only had a basic interpreter, so lame!
Needless to say the jokers in the US became multi-billionaires, we were stuck on the other side of the world with no one to sell to, and no real knowledge of the marketing we'd need to bring our code to market - if only we'd incorporated we could at least have sold the name :-)
Well...200k in 1975 $'s. And MOS was a startup, basically. So not a trivial amount. But yeah, a big part of the initial pitch was "you can use your same hardware design but replace the $300 CPU with our $25 CPU".
That's seriously ambitious (tips hat). There were a number of folks who came up with dual-processor designs back in those days, playing on the observation that most 8-bitters (and many 16- and 32-bitters) could never utilize more than 50% of the available memory bandwidth. There's an NS32000 application note somewhere that describes such a design, and NS had datasheets for an NS32132, which was an NS32032 with added support for such a system. I dunno if the NS32132 ever shipped, however.
There was also an argument that the 6501 was built as a sacrificial lamb so that when Motorola inevitably sued them, they would be able to keep the 6502 out of the case.
The 6501 and 6502 were developed simultaneously and the 6502 was released a month later (Aug 1975 vs Sept 1975). Both well before the lawsuit began, let alone concluded.
The Heathkit/Zenith ET-3400 trainers with 6800s, and the accompanying Heath/Zenith coursework, were fantastic in 1982. 50+ of us completed it that year; the class final was bit-banging the tune of "Anchors Aweigh", as the instructor was a Navy officer and educator, retired to civilian teaching. I later learned machine language on broken superscalar mainframes as a bit-chaser, but the 6800s were simply fantastic devices and prepared me well. Flat, shared memory, von Neumann architecture. Very nice opcodes and indexing, as I recall. I'll have to go back to my coursework and reminisce...
I have an ET-3400 on the shelf behind me! I was just playing with it the other day.
After watching Jason Turner's CppCon talk on writing an i386 to 6502 assembly translator [1][2], I started working on a fork that would target the 6800. I only got about 3 instructions working, but that's really all you need for some really simple test code with optimization turned to the max. It also turns out that someone wrote a fantastic emulator specifically for the ET-3400 trainer [3], and I managed to get my application running on it!
There is something special to me about the idea of writing modern C++, and compiling it for such early microprocessors. The 512 bytes of RAM is a pretty big limitation though. I wanted to try and emulate an EEPROM using an Arduino or FPGA, but got stalled out on the project. From time to time I like to browse through the LLVM backend documentation, but I can't seem to commit to trying to build a backend.
I'll mention this even though it isn't exactly the Motorola 6800: I've been doing a lot of work recently with the Hitachi 6303, which is a member of Hitachi's family of Motorola 680x alternatives. The Hitachi 6303 is featured in a lot of 80s Japanese synthesisers, particularly Yamaha's DX/TX range. The Motorola 680x series also features in the Ensoniq ESQ family of synthesisers, probably many more.
I became acquainted with this architecture disassembling the Yamaha DX7 firmware: https://github.com/ajxs/yamaha_dx7_rom_disassembly
It's a great instruction set to work with. It's my first experience with 8-bit programming, and I found it very intuitive.
This was my first microprocessor: I developed assembly language for it and the 6809 using Motorola's Exorciser development system (which was already old in the mid 80s when I used it). Here is a simulator I wrote for it, in case you want to try it in Linux or Cygwin:
Through the lens of _compiled_ code, how do the various 8-bitters stack up (6809/6811, 65C02, Z80, H8, ...)? One would have to account for the clock frequency the ISA allows on the same process technology (which makes including AVR somewhat tricky).
I only have experience with Z80 and 65C02 and I believe the consensus is that a 4 MHz Z80 beats a 2 MHz 65C02, but neither is a particularly nice compiler target.
Very cool. While obviously not ideal, the results are probably accurate within a small factor. Unfortunately there's no assembly version for 65C02 but Z80 does surprisingly well in this test.
I muse about what could be done with a modern cross-compiler (SAT solving for optimal code sequences?). An LLVM backend for the Z80 has recently kicked back into gear: https://github.com/jacobly0/llvm-project
I ran the C version of this benchmark using llvm-mos's Clang for the 6502. The results:
21.4 seconds
5793 bytes
Which is middle of the pack for the Z80 benchmarks, but well below the 6502 ones. We're also using a slightly tweaked embedded printf written in C, so this could probably be improved somewhat there, sans any compiler changes.
The 6809 had two 16-bit index registers, PC-relative addressing, and the upper halves of the stack and direct-page addresses were taken from special-purpose registers instead of being hardwired. It should have been a fairly straightforward compiler target. On the other hand, it was late to the game, expensive, and not (much) faster than other 8-bit microprocessors. The 6502 was very cheap and fast enough when it came out, but a really annoying compiler target.
I had a TRS-80 Color Computer when I was kid which had one major drawback: it could only display 32 characters across the screen compared to 40 characters for the Apple ][, C64 and most others at the time.
The CoCo could run an operating system called OS-9 which was Unix-influenced and came with a good C compiler and also a bytecode-interpreted structured BASIC called BASIC09.
I know C compilers were really popular among CP/M users running the Z-80 and 8080 chips and also on the IBM PC which had a segmentation system to reach beyond 64k that I thought felt really elegant in assembly language but was awkward for compilers.
Where OS-9 had all the above beat was that it was a real multitasking OS and I had two terminals plugged into my coco in addition to the TV console and could use it like a minicomputer.
When I switched to an IBM PC AT compatible my favorite programming language was Turbo Pascal, which adds everything missing from Pascal to do systems programming. I switched to C when I went to college because that was supported on the various UNIX workstations they had.
The 6809 was nice, but I think the CoCo otherwise was crap. Aside from the 32 column display and awful color scheme, the built-in serial port was bit-banged. This meant that floppy drive and serial access could not happen at the same time. This is very relevant when trying to use OS-9.
There was an external serial port as an option, but there was only one slot. So you also had to buy a slot expander (a Multi-Pak).
Note that hardware flow control makes the bit banger a lot more reliable than it would be otherwise.
I had the bit banger connected to a compact printing terminal from DEC that ran at 300 baud and had an acoustic coupler so you could log into 300 baud services with nothing but the terminal. There was not a lot of risk that this device would overflow your buffers.
I had the multi pak and the external uart. The entry level price of the coco was low but I think I got most of the peripherals available for it, particularly the disks were crazy expensive. Adding it all up I must have spent more than I spent on the AT clone that replaced it ($1200)
In most ways the C-64 was a great machine but boy was the disk drive slow.
Just for modern perspective... you can use SDCC. It's an open-source optimizing C compiler targeting small microprocessors like the Z80 and various others. The project itself is terribly run--the maintainers recently pushed out an ABI change which broke everyone's code, but released it as a minor version bump. This ABI change did speed up the code, but that's small consolation to anyone who ended up with broken code.
IMO the Z80 is a lot nicer compiler target than the 6502, because the stack pointer is 16-bit and it's much easier to use the stack in general.
There are a couple C compilers for 6502 (like cc65 and WDC's C compiler) but they're not quite as good as SDCC, as far as I can tell. They're also not as actively maintained.
> The project itself is terribly run--the maintainers recently pushed out an ABI change which broke everyone's code, but released it as a minor version bump.
I don't think your comment about a breaking ABI change on a minor point release is fair. Perhaps you have misunderstood the release numbering scheme? Every year around the first quarter they have one major release, i.e. 3.9, 4.0, 4.1, and now 4.2. A minor release would be something like 4.2.1. There is no significance to the major digit; 4.0 was just the release a year after 3.9 and not otherwise special.
I'm not affiliated with the project, but I think it would be unfortunate if someone was turned away from using or contributing to the best or only open-source toolchain for a number of processors (e.g. the Padauk family) because someone claimed the project was terribly run.
The 6809 is ridiculously suitable for running Forth, because you've got two stacks, a 16-bit accumulator, and you can implement NEXT in two instructions taking about three clocks each.
C compiled for the 6502 would use the zero page (first 256 bytes of RAM) as registers, and use the actual registers just to run instructions of a higher-level abstract machine that, e.g., understood 16-bit numbers. Kind of like Xerox Alto, in that particular way; on the Alto, only device drivers were coded native.
There was a C compiler that kept one of the 6502 index registers zero at all times, just to have a zero handy.
This use of a little interpreter to provide a higher-level instruction architecture to program to was really, really common in the 50s and 60s. The Apollo AGC computer that landed on the moon was mostly programmed that way. It seems surprising that with memory so tight, they would use up so much of it for the virtual machine interpreter, but instructions for that could be much more compact than native code. It made a slow computer even slower, but they all felt fast back then.
Steve Wozniak burned a little interpreter like that into the Apple ][ ROM, just smart enough for dumb jobs like copying blocks of memory. It used a reserved fragment of the zero page as its registers.
These processors were designed for assembly programming, hence their very CISC-y ISA. Some 8-bitters designed for compiled languages would be the AVR family.
That's too generous. They were primarily designed around what was easy to implement and what _could_ be handled in assembly. Also, the 8080 was an evolution of the 4040 (and that owed much to the 4004), and the Z80 had to be compatible with the 8080... I know less about the 6502, but most of them were designed around what could be done, not what would be easy to program.
I actually only recently learned that the 8008/8080 was in fact not based on the 4004/4040 instruction set. They used the same numbering system, but the ISA has little to nothing in common with it, and in fact the 8008 project was started before the 4004:
Thanks, I might have misremembered. Regardless, the 8080 is still awful and they could have done better with more foresight (which is easy to say so many decades later).
When you’ve got hardly any registers you don’t have any choices about how to allocate them. What’s maddening is a chip like the 8086, where you have enough registers that how you allocate them matters, but still very little space to work in. You are left working hard on a register allocator that is still not going to be very good.
In theory, a smart compiler could make heavy use of the DP register (6809 feature) and then local variables within a compiled function could access these "fast" global variables instead. This was a common pattern when coding in assembly, and it's much faster than accessing variables off the stack. The function wouldn't be reentrant, but a compiler pragma could be used to enable/disable the DP mode. Declaring the local variables as static should be sufficient, however.
The WDC C compiler for the 65816 takes advantage of this (relocatable direct page). And the relocatable stack. In fact I believe what it does is relocate the stack to at least partially overlap the direct page.
Back then, memory was only one cycle away, so it was practically as fast as registers. This is why the 6502 zero page was so important. 6502 instructions took very few cycles, so you could move a zero-page byte to the accumulator in 3 cycles, sometimes 2, which is why a $25 6502 could match a $200 Z80.
Z80 had a faster clock, but instructions took loads of cycles. Nowadays that is OK, but Z80 did not pipeline. It had fancy looping instructions, but they ran slower than the loop would have.
A lot of pinball machines are based on the 6800. I've been really impressed by one project that replaces the 6800 with an AVR by just wiring up all the relevant pins and holding the 6800 in halt:
The Space Shuttle Main Engine built by Rocketdyne used redundant M68000 processors to control the engine. I would say I was lucky to have had the chance to work on a system with as critical a function as the SSME.
If you are interested on programming something like a 6800 or 6502, but would like to make a practical device and not just run in simulation, take a look at the STM8. It's a very widely used 8-bit embedded controller and architecturally very like a cleaned up and improved 6800.
The STM8S Discovery board for this is $8 and in stock at ST and Digikey. This includes the target system and the ST-Link programmer. There are free commercial toolchains and the open-source SDCC toolchain.
The switch was 6809 to 68000 with 8 bit bus (like the TMS9900 in the TI99/4A) to keep Raskin happy that a machine was still cheap enough with only 64KB. The goal was to share some of the Lisa software. But having a very low cost Lisa got Jobs excited and he ended up taking over the project.
One of the fellows at high school talked about, and I think brought along, a DREAM-6800 computer [0].
It was published as a project kit by the Electronics Australia magazine. That said, the Apple ][ and TRS-80 seemed way more functional. It wasn't until a few years later at Uni that I got to really enjoy low-level programming and working directly with I/O.
(I think the guy with the DREAM-6800 ended up making a motza making poker machines)
Where does “6800” come from? I loved my Amiga and it had an MC68000 so I guess they just tacked on a zero for the 16-bit generation, but how were their series originally named?
The university I'm attached to used 68HC11-family dev boards for their intro embedded course until the early/mid 2010s, with a smattering of other platforms like 8051 derivatives in advanced classes, and have switched to ARM (Small Cortex M ARMs, typically TI TM4C123 TivaC boards) for almost all our embedded content since. Plus a few little Arduino based activities with the freshmen, though the form of that has changed over the years.
The intro to embedded systems course used to be taken only by Computer Engineers around their Junior year; since ~2017 we do it a semester or so earlier and make the EEs take it too as part of a streamlining of the curriculum. EEs no longer take the computer architecture course, and the new embedded course covers some basic architecture concepts.
We still do the beginning of the semester in Assembly (we really only show them ARM Thumb) and the later part in C.
Possibly the most revelatory thing about that course is that the low-level view means we find out and try to correct that most of our students (as second semester sophomores who have in theory passed at least two programming courses and a digital logic course) haven't the slightest idea what code actually means/does.
Example: Every semester I've been involved, when we transition from assembly to C, we give a simple assignment to sort some arrays of (well-documented) structs by a specified field and order, given the address where the first element starts and the length in elements, in both C and assembly. They are handed a starter project with a lightly-obfuscated object file that sets up the arrays, calls two provided function stubs for them to fill in, then tests whether the sorts succeeded. Details get changed every semester because students cheat compulsively on programming assignments, but it's always set up to be easy; the structs they handle in ASM are always 16 or 32 bytes long, stored aligned, etc.
Many of them... struggle mightily for two weeks because they haven't actually retained anything about number representation, memory (size, layout, byte addressing), arguments, the difference between a value and a pointer, and so on. The course staff spend weeks doing patient remediation around that point in the semester. At least we get a chance to make another pass over that material and more of them get it after.
My CS programme (class of 2003) had one semester of assembly programming, either 68HC11 or 8086. I took the 8086 flavour and it was definitely designed around dev-board style development rather than expecting a rich BIOS/OS support environment.
There was a more embedded-focus "Computer Systems Engineering" degree option which involved a lot more assembly..
I had a similar course using the 68HC11... but that was in 1991 or '92. I am (pleasantly) surprised to hear that this kind of class is still running. Haven't done embedded for a long time, so I don't know if the 68HC12 is current or not.
Wouldn't worry though. Even if that chip is not in wide use now, the skills you learned are transferable to any other.
Be careful -- they're rapidly going obsolete. No one wants to fab old EPROM processes, and no one wants to package PLCCs anymore. If it's neither of those, I think you've got a bit more time.
There are "compatibles" out there, and some of them are very good indeed, but they're not without their hassles.
One-off hobbyist/repairman/desperation quantities will be available for years. It's industrial, new-build quantities that are going to be a problem.
And, yes, the 68HC11 was on December's EOL list, so I hope anyone who was using it in products has a plan! (Or hire my company, we fix things like that... not that it's the most fun work in the world, we have other things we'd rather be doing....)
The 6800 definitely is a classic, yes. The classics included at least 6800, 6502, 8080, and Z80. (To some extent 8085, but as an "expanded" 8080 the Z80 dominated.) The four of them powered the most iconic systems at the start of the microprocessor "takeover" at the end of the seventies (at the same time as I entered my electronics education). The 6800 was kind of left behind by the others eventually, but was used in e.g. the SWTPC 6800, and also in a lot of minicomputer (e.g. DEC) peripherals at the time (including variants like the 6802 with a little on-board RAM)
You could run CP/M on it, of course, but Digital Research didn’t see the point in creating 8085 or Z80 specific versions of CP/M, so they just ran the 8080 version.
The COSMAC ELF... yes, I too was aware of the 1802, and there are of course other important microprocessors from that time period. But those four covered a lot among themselves - in particular the trio 6502/8080/Z80.
But of course there were others. The Texas TMS9*, the 6800 already mentioned, then there's of course the 6809 (Tandy Color Computer, Dragon (basically a clone), the 6809 variant of the aforementioned SWTPC. Or we can as well find a list: https://en.wikipedia.org/wiki/Microprocessor_chronology
It has very few general-purpose registers (A and B accumulators, IX and SP indexes), and it was designed when few (if any) mainstream processors were pipelined. Fewer addressing modes would mean more intermediate values to store, and more cycles spent in calculating their values.
If you have plenty of registers and a pipelined processor with a decent bypass network to get intermediate results available earlier, then it makes sense to simplify the addressing modes (to increase frequency and/or shorten the pipeline). However, the 6800 had neither many registers nor a pipelined ALU.
But, they pretty commonly used macro assemblers, so they could have used macros to emulate more complex addressing modes. I think code density and performance had a bigger impact than developer ergonomics on the decision to implement the more complex addressing modes in hardware instead of assembler macros.
"You kids and your RISC! Back in my day we had seven addressing modes and we liked it!"
Everyone nowadays takes it for granted that you can use software to write software. There are still lots of graybeard programmers around who had to make do with punch-cards when they were learning.
On mine, I had a cassette-like tape drive that had some CNC programming tools on it. Another tape had various utilities and some crude simulation programs.
User data was on paper tape, usually source code, or finalized G-code, or plots one might want to reproduce. Frankly, I love paper tape. And I have had to read it, patch instructions in with one of those little machines with all the hand push button punches and index gear to keep it lined up. Same for damaged tapes.
And my first machine language programs were hand assembled from blurry, photocopied data book pages mooched from the local university.
Sidenote: Moto was cool. I asked about documentation later for the 6809, and was able to have my parents take me to a local office where I got to chat with an engineer and left with a pile of docs, and reference databooks!
I did not get an assembler of my own until I mowed a lot of lawns and bought MAC/65 for my Atari. Prior to that, I was typing stuff into the mini-assembler on the Apple.
I was a kid, 14 for that stuff.
Later, at 19, I ended up working in small shops using hand-me-down Tektronix gear while attending college. Super glad I fell into that experience, frankly. Those Tek computers were odd, but well conceived and more powerful than one might think 8-bit stuff could be. The people in that shop made some impressive stuff, essentially laid out and programmed on the 6800 CPU found in the Tek storage-tube terminals/computers. They were interesting designs.
One could get one and just set it up for serial comms and use it as a weird but capable text display and or, graphics display like a paperless plotter. Xterm has Tektronix mode to support that even today.
Add some ROM and RAM, and peripherals, and then it was a powerful, technical computing micro computer. Disk drives, cartridge tape, paper tape read and punch, plotter, joystick, and off you go! Never did get to use a disk. But that fast cartridge tape drive and paper tape worked better than expected.
What I find interesting is younger people are checking this stuff out and or building their own gear. 8 bits is enough to really do stuff and understand the entire thing. While not practical given what we have today, it all is still educational in a way that appears to remain potent.
For my own fun reasons, and some product development, I have the luxury of...
I keep an Apple //e Platinum on my work bench. And I use it to do electronic projects the same way I did as a kid. Good for simple prototypes or to understand a sensor, do comms. When my current project slows, I plan on making a card with a Propeller chip on it to make a cool dev station that works like an Apple with command line, just type a line and go BASIC, as well as self hosted compiler and assembler... good times. And practical. People I work with and I have done a couple designs. It all works just fine and it is simple. No updates, no OS, just lean and mean.
> Back then people wrote assembly language themselves.
Not only this. It made for more compact code, and memory was very expensive. So expensive that 64kB was practically unheard of. In 1980, 4kB cost $100 or so. If I read it correctly, the original board with the 6800 shipped with 128 bytes.
128 bytes is a lot! It seems insanely small now, but back then every bit mattered. And being so close to the hardware meant being able to do things super lean and mean.
On the KIM-1, the RIOT chip came with 128 bytes of RAM, and Rockwell, MOS, and others also later packaged the CPU with small amounts of RAM, timers, and other handy things. Early origins of system-on-chip designs. And there was a jumper or solder pad one could use to sort out address decoding should more RAM be added.
128 bytes, probably split between zero page and the stack, can do a lot!
For perspective, the Atari 2600 had 128 bytes of RAM in its RIOT chip and that was the total system RAM!
Atari 2600 Space Invaders fit into 4K ROM and the 128 bytes of RAM.
No lie! I recall paying $200 for a Godbout 12KB (static RAM) board kit (96 1K chips to hand-socket) for my H-8 (8080; Heathkit included no RAM). Later added another 12KB for an incredible 24KB total. Only a few years later at a swap meet I landed two 16KB manufactured boards for only $50!
Another way of thinking about it was that things were done with the index registers and addressing modes, that would be represented by pointers in a language like C.
That is true, but the same could be said of even an extreme RISC ISA that only has an indirect addressing mode. Taken by itself, it doesn't help explain why late 1970s processors tended to have a lot of addressing modes.
The instruction set is also a dream. Super CISC-y, yet more enjoyable (and IMO easier to grok) than x86. Take a look: http://wpage.unina.it/rcanonic/didattica/ce1/docs/68000.pdf
My favorite is DBcc - "Test condition, decrement, and branch". All in one instruction.
[0] - https://i.imgur.com/MKD7wTv.jpg
Here's the code - I have no idea how it works anymore, and I believe it's incomplete compared to what I had running. The complete code archive I think is lost to time - https://github.com/dymk/68k/blob/master/projects/libraries/l...
My friend wrote up a much more comprehensive document on the build:
https://github.com/ZigZagJoe/68k
https://docs.google.com/document/d/1ejW_Ist19tIXeA5HtEWixaLo...