Location: Chicago
Remote: Yes, but would consider a hybrid schedule
Willing to relocate: Yes, to San Francisco only.
Technologies: TypeScript, Python, Rust, Vue, GCP/AWS, Postgres, C++, Sentry, Bash, Kubernetes
Résumé/CV: https://drive.google.com/file/d/1swAhYAirxFgDnzxyRlYDAqqVdp8oOPBD/view?usp=sharing
Email: csmccarthy0@gmail.com
Github: https://github.com/csmccarthy
Hi there! I'm Cami, an (organic chemist turned) full stack developer with 4 years of experience working in a fast-paced startup environment. Emphasis on full stack: I've been responsible for architecting cloud solutions, spinning out microservices, building hardcore DOM-manipulation packages for the browser, writing performance-sensitive backend APIs in Python/Rust/C++, administrating databases, you name it.
In terms of roles, I'm looking for either full stack or fully backend positions. I've previously worked in the finance sector, but I'd prefer to branch out into something different.
Thanks for taking the time to read this, hope to hear from you soon!
Seconding the recommendation to read Crafting Interpreters, but to write it in a different language. I took it as an excuse to hone my Rust skills, as well as to learn what goes on behind the scenes of a compiler. I don't think I would have learned nearly as much if I hadn't had to read a paragraph, stop, and then really thoroughly examine the ideas/assumptions behind it to ensure an accurate translation.
Another strategy I used was to look at the chapter title, then work ahead as much as possible to implement it myself. It really helped my learning process to explore the problem space naturally on my own, then read through and see how my naive attempts compared to a more seasoned implementation. But as the parent said, ymmv!
Could anybody clarify for me the purpose of the NOP opcode that the article refers to? I would think that something like a "do nothing" instruction would want to be optimized away as much as possible, but maybe there's some hidden facet of the instruction protocol I'm not familiar with that necessitates it?
I don't know how much it's really used these days, but I've run into it doing asm work with tight timing requirements. Say you're bit-banging an IO pin for some arcane serial protocol, a la the WS281X LED series, which doesn't use a hardware-supported serial protocol like SPI (where you could just DMA whatever you want to send to the controller). In that case you have to implement the protocol manually, in code, switching the IO pin on and off according to the protocol's timing requirements and the data you want to send.
For purposes like this, the nop instruction is great: it delays the processor by exactly one instruction at a time. Since you know the clock speed, and therefore the instructions per second, you can:
* Set the IO high
* Use nops (or other instructions) to waste time
* Set the IO low
This is very useful in situations where the timing is too tight for interrupts, whose latency is somewhat unpredictable. Obviously it means an entire CPU is held up running this code.
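If it helps, here's a rough sketch of the idea in C for a hypothetical 16 MHz AVR (one cycle = 62.5 ns, so the WS281X "0-bit" high time of ~400 ns is about 6 cycles). The port/pin names and nop counts are placeholders, not tuned values; a real driver would account for the cycles the I/O writes and branches themselves take, and would run with interrupts disabled:

    /* Sketch only: bit-bang one WS281X-style bit on a 16 MHz AVR.
       PORTB/PB0 and the nop counts are illustrative, not verified. */
    #include <avr/io.h>
    #include <stdint.h>

    #define NOP() __asm__ volatile ("nop")

    static inline void send_bit(uint8_t bit)
    {
        PORTB |= _BV(PB0);          /* set the IO high */
        if (bit) {
            /* "1" bit: hold high ~800 ns (~13 cycles at 16 MHz) */
            NOP(); NOP(); NOP(); NOP(); NOP(); NOP();
            NOP(); NOP(); NOP(); NOP(); NOP();
            PORTB &= ~_BV(PB0);     /* set the IO low, ~450 ns */
            NOP(); NOP(); NOP();
        } else {
            /* "0" bit: hold high ~400 ns (~6 cycles at 16 MHz) */
            NOP(); NOP(); NOP(); NOP();
            PORTB &= ~_BV(PB0);     /* set the IO low, ~850 ns */
            NOP(); NOP(); NOP(); NOP(); NOP(); NOP();
            NOP(); NOP(); NOP(); NOP(); NOP();
        }
    }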
Another timing use is generating the signal to display a picture on a CRT from a microcontroller. There are plenty of others as well.
Personally, though, I would never bother. It's always better to get a dedicated chip/controller for something like driving those stupid LEDs (or just get an SPI LED) or to generate a TV signal.
Reading up on it further, apparently it's also used as a way to reserve space in code memory, I imagine for self-modifying code (i.e. fill a region with nops that your code can later replace with actual instructions, depending on execution). But I've never actually done this myself.
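As a toy illustration of that last idea (x86-64, GCC/Clang inline asm, names hypothetical), a function can carve out a nop-filled region that a runtime patcher could later overwrite, e.g. with a 5-byte call. Actually rewriting it at runtime would additionally require making the code page writable (mprotect on Linux), which I'll wave my hands over here:

    /* Sketch only: reserve 8 bytes of code memory with single-byte
       nops (0x90 on x86) as a patchable site. */
    void patch_site(void)
    {
        __asm__ volatile (
            "nop; nop; nop; nop;"
            "nop; nop; nop; nop"
        );
    }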