Assembler is still popular for IBM mainframes. The current version has been around since '92 and is called High Level Assembler.
It's popular partially because people have codebases that they started writing in the 70's or 80's in assembler that they maintain to this day because it's cheaper than switching it all over to a new language. Pretty much the same reason that COBOL is still around.
z/OS (the OS that runs on IBM mainframes) also exposes a lot of its functionality through HLASM, so it's far more convenient to use than x86 assembly.
For whatever reason, C also never really caught on as ubiquitously as it did in the PC world. Probably because IBM themselves generally used their proprietary PL/S language instead back in the 70's and 80's.
This is fascinating, I didn't know there was such a thing as a high-level assembly language, but IBM High Level Assembler has IF/ELSE/ENDIF and several types of built-in loops. I wonder how similar it is to writing in C. One thing this page doesn't mention is structured data types; I suppose these would still have to be implicit, as in other assembly languages.
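For a rough idea of what that structured syntax looks like: the sketch below is reconstructed from memory of the HLASM Toolkit structured-programming macros, so treat the exact operand forms as approximate. The IF macro takes an instruction plus a condition, and the macros expand into ordinary compare-and-branch code at assembly time.

```
* Hedged sketch of HLASM structured-programming macros (syntax from
* memory; operand details may differ from the actual Toolkit macros).
         IF (LTR,R15,R15,Z)          test return code: was R15 zero?
            MVC MSG,=CL8'OK'         then-path
         ELSE
            MVC MSG,=CL8'FAILED'     else-path
         ENDIF
*
         DO WHILE=(C,R2,LT,=F'10')   loop while R2 < 10
            LA R2,1(,R2)             bump counter
         ENDDO
```

Unlike C, there are no structured data types here; the "IF" is purely a macro that generates labels and branch instructions around plain machine code.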
I used to write assembly language programs back in the 70s while working on process control computers (Texas Instruments 990, TI 980, TI 960 etc.). At one point I was using an assembler that supported complex macros (macros that could be expanded into other macro definitions and supported counters and so forth), so I developed a library of macros that supported nested if-then-else and loops. They made the code a bit easier to read, but it was probably not worth the trouble.
The problem with a high level assembly language is that it really isn't very high level; your program still rests right on the hardware for a reason, and usually that reason is a concern about using registers and instructions very carefully for performance or interacting with hardware at ring 0 level where you are managing the virtual memory page table or handling network device interrupts or system IPC and so forth.
In my experience (as an IBM AIX kernel architect, virtual memory architect, and distributed file system designer), sometimes one needs assembly language, but it was always a relief to get up to the level of C programming where the programming teams were much more productive. Much OS development has been done with C and it really was the best choice for most of the kernel work going on back then in my opinion.
AIX was an interesting project. The hardware didn't exist in final form while AIX was being developed. The challenge for our group was developing/porting a whole OS, the kernel and user space code, that would run on hardware being developed at the same time. IBM's language PL/1 was an important mainframe language, but seemed a poor fit for systems programming. However, IBM had state of the art compilers for it and a strong research interest in compilers for RISC machines (like the POWER processors, the first of which outside of IBM's research processors would run AIX 1); so they took the 80% of PL/1 that seemed useful to systems programming and wrote a compiler for PL.8 (.8 of PL/1) to run on the hypothetical RISC system my group was developing.
We were developing a Unix system on the RISC hardware, but we didn't have a stable target (page table sizes, floating point hardware traps, etc.) and couldn't afford to wait for the hardware before starting development. The approach my group took was to write the lowest level parts of the kernel in PL.8 so that as the hardware changed the compiler could be tweaked to take advantage of it more easily than rewriting low level assembly language code. The high-level parts of the kernel (coming from licensed Unix code) could then be mated to the low level code and wouldn't be affected by the changes in the hardware that happened over time.
I wasn't in charge of these decisions, so I don't really know enough about them to say that this was better or worse than just using C and assembly language as is normally done in most OS development, but I do see some of the trade offs that had to be made.
An aside on higher level system programming languages: I know that some on HN say that C is a terrible choice for OS development. Perhaps there are better choices (now), but I see things a bit differently. At the time there were no obvious choices that were better. We didn't have Rust or even C++. We had C, Pascal, MODULA, PL/1, and a few other unlikely choices (e.g. ALGOL-68, LISP, JOVIAL). C is a big improvement over assembly language, but it isn't clear to me that Pascal or MODULA, or LISP or the others available back then were better choices than C. Unix became a kind of proof of C's suitability as an OS development language. Before that, PL/1 had been used to develop Multics, but Multics failed as a commercial OS (despite its subsequent influence on OS design). C was simpler than PL/1. Algol had been used by Burroughs, but it was a non-standard version of Algol specially designed to work with the rather novel hardware.
C is flawed, but none of the other candidates for a language higher level than assembly for systems programming was without flaws, and none of them had produced something like Unix. The C used in the Unix kernel was the real K&R C; it was the same language that ran on many platforms. Other attempts at a high level systems programming language based on Lisp, Smalltalk, Pascal, Algol, and IBM's proprietary subsets of PL/1 were all languages modified for the hardware they ran on. C seemed to be just low enough to work for most of the kernel's requirements without special extensions.
I always appreciate pjmlp's comments reminding HN readers about Pascal or Modula. I liked those languages; I'm very familiar with them. I still think C was the correct language for system programming in the past. Today, I'm more interested in seeing what happens with Rust for kernel development and Go for non-kernel systems programming.
Also interesting to learn that PL.8 had a shot at the AIX kernel. I got all the PL.8 papers I could get my hands on.
Regarding UNIX and C's adoption, I think that had Bell Labs been allowed to go commercial with UNIX from day one, the history of C's adoption would have been quite different.
The IF/ELSE stuff is similar to the preprocessor macros people write in C. They basically generate HLASM code on the fly based on certain flags being passed to the program and what not.
If you're curious what a simple program ends up looking like, I've got one I wrote that copies the contents of one file into another file up on GitLab. Lots of loading registers and what not.
Thanks, this is pretty interesting to read through, and your comments are very helpful. I didn't realize this language has no comment syntax, but I guess it makes sense since each opcode probably has a fixed number of parameters and anything after that can be safely assumed to be comments. Neat stuff.
NASM also has macro capability, though I'm not sure how it compares to the others you mentioned (EDIT or to gas). On the plus side, it's available on Linux.
Also not sure, they are supposed to be quite good, but by the time NASM came around, my focus was no longer on pure Assembly programming, so I never used it in anger.
https://en.wikipedia.org/wiki/IBM_High_Level_Assembler