I don't know how much it's really used these days, but in my experience it matters when you have tight timing requirements in assembly. Say you're bit-banging an IO pin for some arcane serial protocol, a la the WS281X LED series, which doesn't use a hardware-supported serial protocol like SPI (where you could just DMA whatever you want to send to the controller). You then have to implement the protocol manually, in code, switching the IO pin on and off according to the protocol's timing requirements and the data you want to send.
For this, the nop instruction is useful: it's a great way to delay the processor one instruction at a time, and since you know the clock speed (and therefore instructions per second) you can:
* Set the IO high
* Use nops (or other instructions) to waste time
* Set the IO low
This is very useful when the timing needs to be more precise than interrupts, with their somewhat unpredictable latency, can deliver. Obviously it means an entire CPU is held up running this code.
Another timing use is generating the signal to display a picture on a CRT from a microcontroller. There are plenty of others as well.
Personally, though, I would never bother. It's almost always better to use a dedicated chip/controller for something like driving those stupid LEDs (or just buy an SPI-based LED), or for generating a TV signal.
Reading up on it further, nop is apparently also used to reserve space in code memory, I imagine for self-modifying code (i.e. fill a region with nops that your code can later overwrite with actual instructions, depending on execution). But I've never actually done this myself.