Yes, but if your program required no more than 64KB of data, then you could use near pointers everywhere, halving the pointer size from 4 bytes to 2 bytes.

Also, even if the program as a whole required more than 64KB of data, if you knew you required no more than 64KB of data for objects of type X, then you could use 2 byte pointers for all type X objects, with a fixed data segment. X here might, for example, be strings.

When you only have 256KB of RAM to begin with, the odds that all data overall, or at least all X object data for some X, will fit in 64KB are quite high. And if you have a lot of pointers, halving their size makes a big difference when you have so little memory.
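For illustration, this is roughly what it looked like in practice – a minimal sketch, assuming a Borland/Microsoft-style 16-bit DOS compiler (the near and far keywords are compiler extensions, not standard C):

    #include <stdio.h>

    char buffer[16];

    int main(void)
    {
        char near *np = buffer;  /* 16-bit offset within the current data segment */
        char far  *fp = buffer;  /* full 32-bit segment:offset pair */

        printf("%u\n", (unsigned)sizeof np);  /* typically prints 2 */
        printf("%u\n", (unsigned)sizeof fp);  /* typically prints 4 */
        return 0;
    }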



But segmented memory isn't a requirement; that technique works just as well with a linear address space. You use a single 32-bit base pointer and then store 16-bit offsets for your data. We used that all the time on 68K and other architectures.
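A minimal sketch of that technique, with illustrative names – a pool of nodes addressed by one full-width base pointer plus 2-byte offsets:

    #include <stdint.h>
    #include <stdlib.h>

    typedef uint16_t node_ref;      /* 2-byte "pointer": index relative to the base */

    typedef struct node {
        node_ref next;              /* link stored as an offset, not a full pointer */
        int      value;
    } node;

    static node *pool;              /* the single full-width base pointer */

    static node *deref(node_ref r) { return &pool[r]; }

    int main(void)
    {
        pool = malloc(1000 * sizeof *pool);
        pool[0].value = 42;
        pool[0].next  = 1;          /* link via 16-bit offset */
        pool[1].value = 7;

        node *second = deref(pool[0].next);
        (void)second;
        free(pool);
        return 0;
    }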


The difference is that 68K is designed as fundamentally a 32-bit architecture. (Even though the original implementation was physically 16-bit.)

Whereas 8086 is a 16-bit architecture with an extended address space.

The use of segmentation to enable a 16-bit architecture to address more than 64K was not original to the 8086; many 16-bit minicomputers (e.g. the PDP-11) used the same basic idea, although the specific implementation Intel chose was rather unusual.
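Concretely, the 8086 forms a 20-bit physical address as segment * 16 + offset, so many different segment:offset pairs alias the same byte – a quick sketch of the arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real mode: physical = (segment << 4) + offset, 20 bits total. */
    static uint32_t phys(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void)
    {
        /* Two different segment:offset pairs naming the same physical byte: */
        printf("%05lX\n", (unsigned long)phys(0x1234, 0x0005));  /* 12345 */
        printf("%05lX\n", (unsigned long)phys(0x1000, 0x2345));  /* 12345 */
        return 0;
    }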

Part of why the 8086 was 16-bit, not 32-bit, was to make it easier to port software to it from the 8080, which was an 8-bit architecture with 16-bit addressing. It was also likely one of the reasons why the 8086/8088 was cheaper than the 68K, which is part of why IBM chose it over the 68K for the IBM PC.


I've heard it argued that the 8086 won the IBM PC project because it had been available longer and was more proven, with more options for second-sourcing.


That would be a fact – AMD was the second source for the 8088 (licensing IP from Intel).


The Motorola 68000 also had second-sourcing options, just like the 8088 – but the 8086 was released a year earlier and built on the well-known, successful design of the 8080, even if the two weren't exactly compatible. That built confidence in Intel's ability to deliver, even though the 8088 was released the same year as the 68k, since it was just a bus-narrowed, cheaper version, not an entirely new architecture.


Which reminds me – the bus-narrowed version was important, because they could then use cheap, off-the-shelf 8-bit components for the rest of the chipset.


Both required a bus demultiplexer, but the 8088 was indeed cheaper in a bunch of places (both the 8086 and the 8088 could run 8-bit data transfers, though).


Near pointers could exist in non-overlapping segments, couldn't they? With 256KB you'd use 4 segments tops. There was some valid reason to have them overlap, but it quickly became obsolete.


> There was some valid reason to have them overlap, but it quickly became obsolete.

Under DOS, the (primary/initial) code segment acts as the PID. So overlapping segments enabled less memory fragmentation and more memory for child processes. If your program used less than 64KB, then the part of the 64KB it didn't use could be made available to child processes. Important in the early days, when some programs were structured as a main executable (which displayed the main menu) plus separate child executables for each menu item – the main menu executable would stay in memory while the child executable ran. (Overlays were more efficient, but also more complex for the developer.)

People call DOS a "single-tasking" operating system, but that wasn't entirely true – processes could spawn child processes; it's just that the main thread of the parent process was suspended while the child process ran, unless the child process turned into a TSR. Also, child processes could use interrupts – or even just CALL FAR (if you could get the target address somehow) – to make upcalls to APIs exported by their ancestors, which is a way ancestor processes could be reinvoked even while suspended. COMMAND.COM's INT 0x2E API is an example (something I wish Unix shells had – a child process can modify the parent shell's environment by calling an API to make it run a built-in command).
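Becoming a TSR was a one-call affair – a minimal sketch, assuming Borland Turbo C, whose keep() in dos.h wraps INT 0x21, function 0x31 (the resident size here is made up):

    #include <dos.h>

    int main(void)
    {
        /* ... install interrupt handlers, etc. ... */

        /* Terminate but stay resident, keeping 0x100 paragraphs (4KB) of
           memory. From here on, the program is just an ordinary process
           that returned to its parent without being unloaded. */
        keep(0, 0x100);
        return 0;  /* never reached */
    }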

Not standard terminology, but personally I think it should be called "semi-multitasking". CP/M-80, by contrast, was a truly single-tasking operating system – only one process could fit in memory at a time, and instead of TSRs, which were just normal processes that had returned to their parent without being removed from memory, you had RSXs, which were a radically different type of thing from a normal program. (Normal programs are allocated at the bottom of memory, RSXs from the top down.)

(Did CP/M-86, CP/M-68K, CP/M-8000, get closer to MS-DOS in this regard than CP/M-80 did? Obviously MP/M went further than mainstream MS-DOS ever did.)


This ability to load child processes and TSRs is mostly owed to the CPU architecture, not to the design of MS-DOS - CP/M-86 had it too.

With a single flat address space, programs need to either be position-independent (not possible at all on the 8080), load at a fixed address, or include some type of relocation info. The DDT debugger in CP/M started with a "loader" which moved the rest of the code to the top of memory, aligned to a 256-byte "page" boundary. There was a bitmap of which bytes of the code referred to a page number (i.e. the high byte of an address), so those could be fixed up. With MP/M, every program had to be in such a format, minus the loader, which became part of the operating system.
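A rough sketch of that fix-up pass, assuming the bitmap layout used by PRL files (one bit per code byte, most significant bit first; the function name and parameters are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    /* Every code byte whose bitmap bit is set holds a page number (the
       high byte of an address) and must be adjusted by the distance
       between the page the code was assembled at and the load page. */
    void relocate(uint8_t *code, size_t len, const uint8_t *bitmap,
                  uint8_t asm_page, uint8_t load_page)
    {
        for (size_t i = 0; i < len; i++) {
            if (bitmap[i / 8] & (0x80 >> (i % 8)))
                code[i] += load_page - asm_page;
        }
    }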

x86 memory segmentation is sort of a middle ground between that, and virtual address spaces for each process. No security of course, but at least some degree of isolation in case of bugs, and the convenience of having fixed addresses within each segment.


> This ability to load child processes and TSRs is mostly owed to the CPU architecture, not to the design of MS-DOS - CP/M-86 had it too.

Did CP/M-86 support child processes though?

Also, did CP/M-86 support TSRs? I thought it supported RSXs, which, just like on CP/M-80, were in a different format and manipulated by different APIs than normal programs, whereas under DOS any program can turn itself into a TSR at any time; you can't tell when you start it whether it is going to turn itself into one (unless you know what its code does, of course).

Looking at https://www.seasip.info/Cpm/bdos.html#144 I see BDOS function 144 (P_CREATE), for creating subprocesses, was implemented on MP/M and Concurrent CP/M – but I don't see any mention of it being implemented on (non-Concurrent) CP/M-86.

CP/M-86 1.1 (but not 1.0) supported BDOS function 47 (P_CHAIN), for launching a new program – but like exec() on Unix, or the BASIC CHAIN statement, it terminated its caller before launching the new program.

That said, PC-DOS/MS-DOS 1.x didn't really support subprocesses either. There was no documented API for creating them. The program loader was actually in COMMAND.COM, and while it called the DOS kernel to allocate memory for a new process (the undocumented API INT 0x21,0x26), COMMAND.COM was responsible for actually loading the .COM or .EXE from disk into the process's memory (including relocating .EXEs), so creating a child process required duplicating a large part of COMMAND.COM's code. In DOS 2.x, the image loader was moved into the DOS kernel, and a public API to spawn a child process (INT 0x21,0x4B) was added. COMMAND.COM also implemented its own API to start a child process (which would be a child of itself, not of the caller), INT 0x2E, but I don't believe that was there in DOS 1.x either.
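For comparison, this is what the DOS 2.x path looks like from C – a minimal sketch, assuming a Borland/Microsoft 16-bit runtime, whose spawnl() in process.h wraps INT 0x21,0x4B (CHILD.EXE is a made-up name):

    #include <process.h>
    #include <stdio.h>

    int main(void)
    {
        /* P_WAIT: the parent is suspended until CHILD.EXE exits --
           the "semi-multitasking" described upthread. */
        int rc = spawnl(P_WAIT, "CHILD.EXE", "CHILD.EXE", NULL);
        printf("child exited with status %d\n", rc);
        return 0;
    }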

> With MP/M, every program had to be in such a format, minus the loader, which became part of the operating system.

You are talking here about the CMD executable format?


I'm fairly sure that the CP/M-86 documentation says that a chained program returns to its caller, not the CCP. And BDOS function 0 with DL=1 will exit while keeping the program resident in memory.

MP/M used PRL files: https://www.seasip.info/Cpm/prl.html


> I'm fairly sure that the CP/M-86 documentation says that a chained program returns to its caller, not the CCP

From my own review of https://bitsavers.org/pdf/digitalResearch/cpm-86/CPM-86_Syst... – the section on CHAIN (PDF page 160) says CHAIN frees the memory of its caller, which isn't consistent with returning to it after execution is complete.

However, it also describes (PDF page 60) function 59 (PROGRAM LOAD), which can be used to load a child process. Like DOS 1.x, spawning a child process is somewhat of a manual process: you apparently have to use function 59 to load the CMD file, functions 51 and 52 to set the default DMA base and offset (which the CCP normally does for you), and then manually do a far jump to the start of the program. At least, unlike DOS 1.x, the image loader is in the BDOS kernel, not the CCP.

But this is not unique to the 8086 CPU architecture; this function is also implemented by CP/M-68K, which (I believe) relocates all executables as it loads them – see PDF page 106 of http://www.bitsavers.org/pdf/digitalResearch/cpm-68k/CPM-68K... (It is also implemented by CP/M-80 3.x, but there it can only load RSXs, not standard executables.) And I believe CP/M-8000 implements it too.

> And BDOS function 0 with DL=1 will exit while keeping the program resident in memory.

Yes, you are right, I wasn't aware of that. Now that I've read more about this, my impression is that RSXs are CP/M-80 only, and CP/M-86, CP/M-68K and CP/M-8000 use TSRs instead.



