I was at Semicon Japan last year in December and learned about the coolest semiconductor company (non-profit research organization) ever: MinimalFab [1]. There isn't much information on their website, but this video [2] explains what MinimalFab is about. Essentially, it is a cleanroom-free, modularized fab where each process step is handled by a machine about the size of an ATM. The miniaturization of complex fab processes is mind-blowing, and everything is contained inside the machine, including a Class 100 environment. You load a tiny, quarter-sized wafer in a cassette to process it, and move material from one machine to another. This kind of fab setup would be incredibly useful for R&D fabs in universities and for small-scale fabrication for military, space, and defense applications, and perhaps even hobby use.
Or something physical, like Jacob Rus figuring out how a chunk of metal from a machinist’s workshop can scribe out a perfect sine curve: https://observablehq.com/@jrus/sinebar
- 1st order thinkers primarily see causes and *direct* effects.
- 2nd order thinkers frequently see causes and their *indirect* effects.
I guess that seems like a reasonable idea.
For what it's worth, the truly exceptional people I've met in life had a different quality.
* When most people are presented with a difficult/challenging problem, they soon give up.
* The most exceptional people that I've met just kept hammering away after the rest of us had stopped. Most of the time they failed, but if you have some aptitude and you keep hammering, you have a better chance of making the breakthroughs that the rest of us don't.
Just as an example, I worked for a company that used X-ray crystallography as a tool for drug development. I would be in meetings with crystallographers where we discussed the technical problems they were having in trying to grow crystals. The crystallographers were all smart and talented, but when we had group meetings, there was only one guy who would float suggestion after suggestion after suggestion, long after everyone else had run out of ideas. I don't think he was any "smarter" than anyone else in the room, but he just could not shut himself off. He was relentless. He went on to make some important contributions to the field.
So, there are three main papers on linear scan: Poletto and Sarkar; Traub, Holloway, and Smith; and, nowadays, Wimmer and Franz.
Poletto and Sarkar invented it for JITs. They were comparing it against literal Chaitin-Briggs-style graph coloring (this was IBM, and IBM obviously did it that way, since IBM invented it).
The implementation they compared against was not particularly efficient (this happened at a time when I was at IBM Research, so I've seen the code).
That said, linear scan made sense for a JIT at that time.
Compared to standard graph coloring algorithms of the time, you got somewhere between 10% and 30% crappier code, but the algorithm was a lot faster.
(There are papers that seem to cherry-pick linear scan result data to show better results. However, the consensus among implementers was that it generated significantly worse code, and that was the tradeoff for speed.)
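To make the tradeoff concrete, here's a minimal Python sketch of the classic Poletto/Sarkar algorithm (my illustration, not any particular production implementation), assuming live intervals have already been computed by liveness analysis and using the paper's furthest-end spill heuristic:

```python
# Minimal sketch of Poletto/Sarkar-style linear scan.
# Intervals are (start, end, name) tuples, assumed precomputed.

def linear_scan(intervals, num_regs):
    location = {}              # name -> register index or "spill"
    active = []                # (end, name) for register-resident live intervals
    free = list(range(num_regs))

    for start, end, name in sorted(intervals):
        # Expire intervals whose live range ended before this one starts.
        while active and active[0][0] < start:
            _, expired = active.pop(0)
            free.append(location[expired])

        if free:
            location[name] = free.pop()
            active.append((end, name))
        else:
            # Spill heuristic from the paper: evict whichever ends last.
            last_end, victim = active[-1]
            if last_end > end:
                location[name] = location[victim]
                location[victim] = "spill"
                active[-1] = (end, name)
            else:
                location[name] = "spill"
        active.sort()          # keep active ordered by end point
    return location
```

The single sorted pass over intervals, with only a small active set to maintain, is where the compile-time win over graph coloring comes from; never splitting or coalescing anything is where the code-quality gap comes from.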
Around this time, everyone started redoing everything in SSA, and noticed some things. Among them, that register allocation on SSA seemed easier.
It took until ~2005, but Sebastian Hack and Bouchez et al. then independently proved that SSA form produces chordal interference graphs, which means you can calculate the number of colors required (and color the graph) in polynomial time.
As an aside, interestingly, outside of special-case graphs, you can't even estimate the chromatic number (the minimum number of colors needed) of a general graph sanely. The chromatic number is greater than or equal to the clique number, and even that can't be approximated sanely.
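Here's why chordality is such a big deal computationally; a minimal Python sketch (my illustration, not Hack's code) of optimal coloring via maximum cardinality search, which works precisely because the graph is chordal:

```python
# Optimally color a chordal graph (e.g., an SSA interference graph).
# Maximum cardinality search (MCS) visits, at each step, the vertex
# with the most already-visited neighbors; on a chordal graph, greedy
# coloring in MCS order (the reverse of a perfect elimination
# ordering) uses exactly chromatic-number-many colors.

def mcs_order(graph):              # graph: {vertex: set(neighbors)}
    weight = {v: 0 for v in graph}
    order = []
    while weight:
        v = max(weight, key=weight.get)
        order.append(v)
        del weight[v]
        for u in graph[v]:
            if u in weight:
                weight[u] += 1
    return order

def color_chordal(graph):
    color = {}
    for v in mcs_order(graph):
        used = {color[u] for u in graph[v] if u in color}
        color[v] = next(c for c in range(len(graph)) if c not in used)
    return color
```

`max(color.values()) + 1` is then the exact register demand, modulo spilling and machine constraints.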
Quoting Hack et al.'s results: "For SPEC CPU2000, the compilation time of our implementation is as fast as that of the extended version of linear scan used by LLVM. Our implementation produces x86 code that is of similar quality to the code produced by the slower, state-of-the-art iterated register coalescing of George and Appel with the extensions proposed by Smith, Ramsey, and Holloway in 2004."
As you can see, at that point, there is no point in linear scan :)
As an aside, he also proved that optimal live-range splitting generates elementary graphs, which give you nice properties, like being able to color using register pairs, and under various alignment constraints, in linear time.
(this would still be NP-complete on chordal graphs)
>Twitter’s statuses lookup API endpoint allows for a total of 1,200 API calls every 15 minutes. Each call allows the user to pass 100 ids for a total of 120,000 id requests every 15 minutes using both APP auth [...]
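That works out to 1,200 × 100 = 120,000 ids per 15-minute window. For anyone wiring this up, a hedged Python sketch of the usual batching loop; `fetch_batch` here is a placeholder for whatever client function actually hits the statuses lookup endpoint, not a real library call:

```python
import time

CALLS_PER_WINDOW = 1200      # statuses lookup limit per 15 minutes
IDS_PER_CALL = 100           # max ids accepted per call
WINDOW_SECONDS = 15 * 60

def lookup_all(ids, fetch_batch):
    """Resolve `ids` in chunks of 100, sleeping out the remainder of
    each 15-minute window once 1,200 calls have been made."""
    results = []
    calls, window_start = 0, time.time()
    for i in range(0, len(ids), IDS_PER_CALL):
        if calls >= CALLS_PER_WINDOW:
            remaining = WINDOW_SECONDS - (time.time() - window_start)
            if remaining > 0:
                time.sleep(remaining)
            calls, window_start = 0, time.time()
        results.extend(fetch_batch(ids[i:i + IDS_PER_CALL]))
        calls += 1
    return results
```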
Gunter Stein's inaugural Bode Prize lecture from 1989, titled "Respect the Unstable" [0]. In this talk, he uses a minimum of mathematics to clearly demonstrate the fundamental (and inevitable!) trade-offs in control systems design. He effortlessly links his (in)ability to balance inverted rods of various lengths on his palm (shorter rods being harder to balance) to why the X-29 aircraft was almost impossible to control and why Chernobyl blew up.
The fundamental message is extremely important and the derivation is so crystal clear that it is simply marvelous to watch him present it. I like it so much that I re-watch it about once a year.
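The rod demo has a tidy back-of-the-envelope version (my gloss, not necessarily how Stein derives it): linearizing an inverted pendulum of length L about vertical gives an unstable pole at sqrt(g/L), so shorter rods diverge faster than human reaction time can correct:

```latex
\ddot{\theta} \approx \frac{g}{L}\,\theta
\quad\Longrightarrow\quad
p = +\sqrt{g/L};
\qquad
L = 1\,\mathrm{m}:\ \sqrt{9.8/1.0} \approx 3.1\ \mathrm{s}^{-1},
\qquad
L = 0.1\,\mathrm{m}:\ \sqrt{9.8/0.1} \approx 9.9\ \mathrm{s}^{-1}.
```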
This is one of my favorite user interface design articles that I recommend every chance I get, and it should be required reading in every design class.
Apple's long romance with skeuomorphism peaked with Bob Bishop's APPLE-VISION, and went downhill from there.
>[... detailed step by step instructions to demonstrate a terrible usability failure that I wrote up in a bug report that was brushed off and ignored ...]
>This single facet of VLC's terrible UI deserves to be front and center in the User Interface Hall of Shame [2] -- it's even worse than Apple's infamous schizophrenically skeuomorphic QuickTime 4.0 player [3], from 1999! The latest version of VLC in 2017 is still much worse than the shameful QuickTime player was 18 years ago!
At least QuickTime 4.0 serves as a useful ruler by which we can measure the terribleness of other video players.
The nostalgic web designer / graphic designer from those years in me is really fond of, and misses, the old conventions of web interface and application design.
I don't know if anyone remembers the phong Photoshop tutorials (oh my god, they are still there! http://archive.phong.com/tutorials/ ), but that type of interface design was where my heart was back then.
It was all about creating almost sci-fi-like installation design: wires, vents, metal surfaces, rust, plastic, gel, glass, reflections, scanlines, wireframing, grids and 3D grids, and other things like that. Think StarCraft's HUD design and the Terran installation-type maps that floated in space. ( https://media-curse.cursecdn.com/attachments/21/437/b34694f4... <~ this is actually the perfect representation, in my mind, of what I wanted to make in terms of graphical interfaces for websites in the mid-to-late 90s, and probably early 2000s, before everything became about "clean, flatter, simple, elegant".) I'm not so sure I like things that way.
In fact, I think I'm going to make it an express feature of all my future side projects that the design portion will be dedicated to creating mid-to-late-90s-style interface / graphical designs based on phong's tutorials and other things I loved back then.
>Recent technological changes have transformed an increasing number of sectors of the economy into so-called superstar sectors, in which a small number of entrepreneurs or professionals distribute their output widely to the rest of the economy. Examples include the high-tech sector, sports, the music industry, management, finance, etc. As a result, these superstars reap enormous rewards, whereas the rest of the workforce lags behind. We describe superstars as arising from digital innovations, which replace a fraction of the tasks in production with information technology that requires a fixed cost but can be reproduced at zero marginal cost. This generates a form of increasing returns to scale. To the extent that the digital innovations are excludable, it also provides the innovator with market power. Our paper studies the implications of superstar technologies for factor shares, for inequality, and for the efficiency properties of the superstar economy.
The entire information world is either a database or a cache (or communication between them), layered on top of each other over and over. Every new storage technology typically ends up being yet another layer, as either database or cache (or both). This case is pretty unique in that it can actually serve to remove a layer: RAM (typically a cache) is not necessary if non-volatile storage is viable at the same speeds. But in general, new storage tech just adds another layer, and the software world reacts by rushing in, as if to fill a void, creating new software to take advantage of it... which ends up being another database or cache, often with similar tradeoffs to the layers of cache/database surrounding it. In the limit, I see more and more layers of cache/database until they merge into some kind of continuous data/cache field with a continuous tradeoff gradient between size and latency.
Damn, glad to see this, but they're also kinda encroaching on my territory! :P
I've been working on something like this that's language agnostic and works with not only tables but also trees, graphs, lists, hashmaps, etc., and animates the visuals as the data structures are modified in real time (and you can pause, playback, step, etc.): http://symbolflux.com/projects/avd
The API I was conceiving of earlier was more complex, but if you read the copy there, you'll notice it's essentially the same now: `MObject.monitor(...)`, `MTable.monitor()`, `MStateMachine.monitor()`, etc.
Edit: I really am happy to see some other projects like this, partly because I want to use 'em now if possible, but also because I've had a surprisingly hard time communicating to other developers why it might be desirable to do something like this in the first place, so I'll be very happy to have general awareness raised.
I read the first line of their docs as rather dry humor, "Displays tabular data as a table." —like yeah, of course you would want to see tabular data as a table, and hierarchical data as a tree, etc.!
> With unspecified behavior, the compiler implementer must make a conscious decision on what the behavior will be and document the behavior it will follow.
No, what you described is implementation-defined behavior.
It may be confusing, but here's the breakdown of different kinds of behavior in the C standard:
* Well-defined: there is a set of semantics that is defined by the C abstract machine that every implementation must (appear to) execute exactly. Example: the result of a[b].
* Implementation-defined: the compiler has a choice of what it may implement for semantics, and it must document the choice it makes. Example: the size (in bits and chars) of 'int', the signedness of 'char'.
* Unspecified: the compiler has a choice of what it may implement for semantics, but the compiler is not required to document the choice, nor is it required to make the same choice in all circumstances. Example: the order of evaluation of a + b.
* Undefined: the compiler is not required to maintain any observable semantics of a program that executes undefined behavior (key point: undefined behavior is a dynamic property related to an execution trace, not a static property of the source code). Example: dereferencing a null pointer.
> If you're like me and immediately said ohhhhhhhhhhhh, so THAT'S why there's twelve hours in a day! and immediately followed it up with: wait, why the fuck are they using twelve instead of ten? ...
> It's most likely because you have twelve joints in your hands: three in each of the four fingers, excluding the thumb. I thought that was pretty nifty to discover. Like, I had never looked at my hands before, really. Hands are really wild, when you think about it.
Probably not. This explanation of duodecimal is really far-fetched, and I'm pretty sure it arose from someone staring at their hands, desperately trying to come up with a way to count to 12 on their fingers to explain this. (Ten fingers and two feet seems just as likely.)
Clocks are (probably) divided into (two sets of) 12 hours for the same reason feet are divided into 12 inches: because it's convenient for everyday use. Subdivisions of 1/12 suck for complex math (if your number system is decimal, anyway) but are great for lay use. Twelve subunits capture halves, thirds, and quarters, all of which are very intuitive and very common in everyday use.
12ths are handy enough that we have a special word for “12 of something” (dozen) even though our numbering system is decimal.
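The divisor counts make the convenience argument concrete; a throwaway check in Python:

```python
# List every whole-number way to subdivide a unit of n subunits.
divisors = lambda n: [d for d in range(1, n + 1) if n % d == 0]
print(divisors(12))   # [1, 2, 3, 4, 6, 12]: halves, thirds, quarters, sixths
print(divisors(10))   # [1, 2, 5, 10]: halves and fifths only
```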
Reminds me of the MIT SICP lecture videos from the 80s. The concepts of black-box abstraction and the simplicity of using Lisp like Lego building blocks blew my mind and made me switch from being a UX designer dabbling in Rails to a full-blown programmer.
It was still entirely relevant today, even though it was a few decades old, as the fundamentals of computer science are still fundamental.
Hearing that intro music still brings a smile to my face.
I just happen to be relearning math right now as I dive deeper into data science and this is perfect timing. Going to watch this series once I get through my math proofs book ("Book of Proof" by Richard Hammack which I recommend to people getting into math https://www.amazon.com/Book-Proof-Richard-Hammack/dp/0989472...).
Bob Widlar, who started at Fairchild, was my childhood hero. Just last week I built a power supply with a µA723, a Fairchild part originating in 1967 (although mine was an SGS-Thomson clone), because it's still pretty damn well one of the best parts out there, even after 50 years. Good times. I wish I'd been born a lot earlier than I was, if I'm honest.
> It seems to me that some design patterns are essentially ad-hoc, informally specified, specialised instances of basic category theory concepts.
There is a flaw in this type of thinking which I think is not addressed here, but should be. (I took the quote from the summary, but I think it's fair.) The usual issue with ad-hoc, informally specified things is that the ad-hocness and the informality leave something out, so that if you understand the informal thing but not the formal one, you end up understanding only part of what you could have understood if you had tried.
So in the context of design patterns, that means there should exist situations where the extra abstraction, the non-ad-hoc non-specialized understanding, allows you to write code that is better than otherwise, and (more importantly) that you would not have otherwise written. This would show explicitly the gap between the ad-hoc ideas and the formal ideas instead of leaving you to guess.
Whereas the way this post tries to explain it, as I understand it, is by showing you things (design patterns) that you would have written anyway, and showing how they can be talked about in more general terms. But the things that you wouldn't have written are missing.
E.g., in some monad tutorials, one of the exercises at some point is the nondeterministic-choice monad, which is totally something you might not have written yourself, and that makes it a good way of showing why monads are a useful concept. If the only monad you had was IO, it'd be a much less useful idea (it would merely be a different, and not even necessarily better, way of writing things that you already knew how to write).
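To make that concrete, here's a small Python rendering of the nondeterministic-choice (list) monad; Python is standing in for whatever language a given tutorial uses:

```python
# Nondeterministic choice as a monad: a "value" is the list of all
# values it might be; unit wraps one value, bind maps a function over
# every possibility and flattens the results.

def unit(x):
    return [x]

def bind(xs, f):
    return [y for x in xs for y in f(x)]

# Find all pairs (a, b) with a < b and a + b == 8, by "choosing" a and
# b nondeterministically and pruning failed branches with [].
pairs = bind(range(1, 8), lambda a:
        bind(range(a + 1, 8), lambda b:
        unit((a, b)) if a + b == 8 else []))

print(pairs)   # [(1, 7), (2, 6), (3, 5)]
```

The point is that `bind` quietly threads the "try every branch" bookkeeping for you; the same two-function interface, with a different `unit`/`bind`, gives you IO, state, failure, and so on.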
In an effort to get people to look
into each other’s eyes more,
and also to appease the mutes,
the government has decided
to allot each person exactly one hundred
and sixty-seven words, per day.
When the phone rings, I put it to my ear
without saying hello. In the restaurant
I point at chicken noodle soup.
I am adjusting well to the new way.
Late at night, I call my long distance lover,
proudly say I only used fifty-nine today.
I saved the rest for you.
When she doesn’t respond,
I know she’s used up all her words,
so I slowly whisper I love you
thirty-two and a third times.
After that, we just sit on the line
and listen to each other breathe.
"The Quiet World" by Jeffrey McDaniel
Here's some more influence, and then some weird connections:
- The Japanese word for bread is "パン" (pronounced 'pan')
- The Korean word is "빵" (pronounced 'ppang')
- Both are loaned from the Portuguese 'pão' (bread).
- In Chinese, many kinds of bread are called <something>包 (<something> 'bao') or 包<something> ('bao' <something>). Examples: 面包 'mianbao' (bread) or 包子 'baozi' (steamed bun). Note that 包 'bao' sounds a bit like the Portuguese 'pão'. However, it turns out this is completely unrelated: 包 means something more like "package" (or so all the translators and dictionaries I could find claim), so most people believe this is a false cognate and not a loanword like in Korean and Japanese.
- Despite this, the Chinese 包 'bao' has ended up in Korean as a kind of loanword as well. Because it seems to mean the same thing as 빵 'ppang', is used in Chinese in ways similar to how 빵 'ppang' might be used in Korean, and for various pronunciation reasons in Korean, Chinese-origin breads are also called 빵 'ppang'.
- The proper translation of 包 into Korean is more likely 꾸러미 'kkuleomi' (package).
- So... Chinese-Korean breads are often called <something>빵, such as 찐빵 'jjinppang' (steamed bun), even though the correct Chinese name for the Chinese-Korean 찐빵 in China is 馒头 'mantou'.
- However, in Korean, a 만두 'mandu' is a different but related food, a dumpling, and in Japanese it's a 饅頭 'manjū'.
- Going west instead of east from China: the 'mandu' as a food goes back a thousand years and spread all over the Silk Road, so local variants, found all over Turkey, Persia, Afghanistan, Mongolia, and so on, are called the same thing, and may have been the basis for the famous Russian 'pelmeni', the Polish 'pierogi', and various other dumplings known around the world.
And thus the great bread-dumpling belt is enjoyed around the world to this day and was established by explorers, wanderers, conquerors and traders.
My personal recommendations, mostly stuff I've got:
Soldering iron: I like the Hakko FX-888D, $90-110 or so. They have better models if you can afford them, but that one's very good. The FX-951 is the next step up; it can take micro-soldering handpieces and has quick-change tips. It's about $240.
Get a chisel tip, e.g. the Hakko T18-S3, a bevel tip (T18-S6), and a bent-conical tip (T18-BR02). The conical tip is perfect for lots of general-purpose work; you can use the fine point or the sides of the bend. The back of the bend can be used for drag soldering, and the inside of the bend makes soldering wires together easy. The chisel tip is good for soldering things with more thermal mass (PCB-mount heatsinks), and the bevel tip is pretty much necessary for drag soldering on QFP and similar surface-mount packages.
Hot air station: Probably something cheap from China; there aren't any particularly affordable name-brand ones that I know of. Weller has the WHA900 for around $600.
Magnification: Get at least one of the magnifying headsets ($8-10 on Amazon) and a desk magnifier with LED ring light. Better option is an AmScope stereo microscope, such as the SM-4NTP and a ring light for it like the LED-144W-ZK. $480 total.
PCB vise: I have an Aven 17010, it works pretty well. MUCH better than trying to hold a board in the helping hands.
Flux: Get liquid flux with a syringe. Amtech is the best, but there is a lot of counterfeit stuff out there, and Amtech doesn't sell it directly (bulk orders only). https://mailin.repair/amtech-nc-559-v2-30-cc-16160.html sells the real flux.
Tweezers: Any ESD-safe set.
Fume extractor: VERY important for health. You do NOT want to be breathing in flux fumes. A high-volume HEPA air purifier on the desk works ($150 or so), or a dedicated device like the Hakko FA430 is even better ($625).
Oscilloscope: Rigol DS1054-Z. 50MHz, hackable to 100MHz bandwidth easily. $400. There's no better cheap scope at the moment (IMO).
Function generator: Siglent SDG805. $270. Needed to give you analog signal inputs. Part of the big-3 of 'scope, power supply, and function gen.
Power supply: Get a linear supply. The Tekpower TP3005D-3 is $200, and is an actual linear power supply. The knobs are coarse adjust only (it's analog); I replaced the control potentiometers with 10-turn versions, which substantially improved the accuracy of the output. There's also the Siglent SPD3303X-E ($340) if you want a digital-panel version. You definitely need arbitrary ± voltages for lots of very basic circuits; PC power supplies are very limiting, and too noisy if you do any sensitive analog design.
Multimeter: Get a safe one (HRC fuses, proper transient voltage suppression, etc.) Can't go wrong with Fluke, of course, but Extech, Brymen, and some others have cheap and capable handheld meters. $100-300, depending on brand. Be sure it has a micro-amp range! The really cheap ones don't, and you WILL need it if designing embedded stuff.
Logic Analyzer: Get a LogicPort. pctestinstruments.com. They're $390, for a 34-channel 500MHz device, very nice for the money. Needed if doing much digital work. (Keysight's 34-channel standalone analyzer is $12165 base price. 5GHz, but still, twelve grand...)
Spectrum Analyzer: If you're doing RF work (radio design), you'll need one. Otherwise skip it. The Siglent SSA3021X with tracking generator add-on is $1764 (pretty cheap) and quite capable (9kHz to 2GHz). It's also hackable / software upgradeable into the 3.5GHz model. The Rigol DSA815-TG is $1550, but significantly worse (smaller display, worse resolution bandwidth, max 1.5GHz, etc).
Be sure to get a GFCI outlet and a GFCI adapter or two. The oscilloscope, function gen, spectrum analyzer, etc, all are mains earth referenced, and should each have their own GFCI plug. If you accidentally connect the ground lead of any of them to something other than ground the GFCI will trip and prevent the ground traces from being blown up inside the device. They're about $20 each, well worth it IMO.
You might want an anti-static mat and wrist-strap.
Get a bunch of small drawers, e.g. https://www.amazon.com/gp/product/B000LDH3JC. Print labels for them, and use them to store resistors, capacitors, and other component types. You can fit two component values in each drawer (though they don't come with enough dividers :/). You want at least 96 drawers for resistors and 32 for capacitors, assuming you're buying 1% or 5% resistors and 10-20% capacitors (pretty normal). I bought a kit of resistors (https://www.amazon.com/gp/product/B017L9GKGK) and a kit of capacitors (https://www.amazon.com/Joe-Knows-Electronics-Value-Capacitor...); Joe Knows Electronics kits are good for stocking up, since they have more of the most common components.
Get some desoldering wick and a solder sucker too. Also some tip tinner and/or a sal ammoniac block. Make sure you have a roll of Kapton tape to hold parts down while you solder them (it survives high temperatures). If you'll be doing a lot of surface mount, you'll want a reflow oven and solder paste.
EDIT: One tip I forgot, very important: When you buy parts (on DigiKey/Mouser or similar) make sure you buy extras. At least the number needed for the first volume discount or 10, whichever you can afford. 3x the number needed for the project at the minimum. You WILL drop parts, burn them out, and otherwise damage them. It's much easier if you already have spares, don't have to wait for shipping, and don't have to pay for shipping. This will also help you develop a parts library, as you do more projects you'll be likely to re-use common parts and already have many left over from past work.
If, like me, you were hoping to find code that is poetry, a few recommendations:
whytheluckystiff, or _why, was a very colorful persona in the early English-speaking Ruby community who wrote some very poetic code (some of it production-grade), inspired a lot of people (me included), and eventually disappeared one day when he was outed. Maybe start with Why's (Poignant) Guide to Ruby, a literary work that is also a Ruby tutorial, available at http://poignant.guide/, and continue by reading archives of his blog (where he posted hand-drawn and sometimes animated poetry in Ruby) and the libraries he wrote. (I strongly recommend Camping.rb, a 4K minimal MVC framework that was at once very poetic and production-ready.)
Other than that, if you can read x86 asm, I recommend this book of koans/poems/whatever you call them, written by someone I know; it's extremely beautiful: https://www.xorpd.net/pages/xchg_rax/snip_00.html
I felt obliged to comment because I feel I know what you are talking about and I also worry that much of the advice posted so far is wrong at best, dangerous at worst.
I am a 42-year-old, very successful programmer who has been through a lot of situations in my career so far, many of them highly demotivating. And the best advice I have for you is to get out of what you are doing. Really. Even though you state that you are not in a position to do that, you really are. It is okay. You are free. Okay, you are helping your boyfriend's startup, but what is the appropriate cost for this? Would he have you do it if he knew it was crushing your soul?
I don't use the phrase "crushing your soul" lightly. When it happens slowly, as it does in these cases, it is hard to see the scale of what is happening. But this is a very serious situation and if left unchecked it may damage the potential for you to do good work for the rest of your life. Reasons:
* The commenters who are warning about burnout are right. Burnout is a very serious situation. If you burn yourself out hard, it will be difficult to be effective at any future job you go to, even if it is ostensibly a wonderful job. Treat burnout like a physical injury. I burned myself out once and it took at least 12 years to regain full productivity. Don't do it.
* More broadly, the best and most creative work comes from a root of joy and excitement. If you lose your ability to feel joy and excitement about programming-related things, you'll be unable to do the best work. Note that this issue is separate from, and parallel to, burnout! If you are burned out, you might still be able to feel the joy and excitement briefly at the start of a project/idea, but they will fade quickly as the reality of day-to-day work sets in. Alternatively, if you are not burned out but also do not have a sense of wonder, it is likely you will never get yourself started on the good work.
* The earlier in your career it is now, the more important this time is for your development. Programmers learn by doing. If you put yourself into an environment where you are constantly challenged and are working at the top threshold of your ability, then after a few years have gone by, your skills will have increased tremendously. It is like going to intensively learn kung fu for a few years, or going into Navy SEAL training or something. But this isn't just a one-time constant increase. The faster you get things done, and the more thorough and error-free they are, the more ideas you can execute on, which means you will learn faster in the future too. Over the long term, programming skill is like compound interest. More now means a LOT more later. Less now means a LOT less later.
So if you are putting yourself into a position that is not really challenging, that is a bummer day in and day out, and you get things done slowly, you aren't just having a slow time now. You are bringing down that compound interest curve for the rest of your career. It is a serious problem.
If I could go back to my early career I would mercilessly cut out all the shitty jobs I did (and there were many of them).
One more thing, about personal identity. Early on as a programmer, I was often in situations like you describe. I didn't like what I was doing, I thought the management was dumb, I just didn't think my work was very important. I would be very depressed on projects, make slow progress, at times get into a mode where I was much of the time pretending progress simply because I could not bring myself to do the work. I just didn't have the spirit to do it. (I know many people here know what I am talking about.) Over time I got depressed about this: Do I have a terrible work ethic? Am I really just a bad programmer? A bad person? But these questions were not so verbalized or intellectualized, they were just more like an ambient malaise and a disappointment in where life was going.
What I learned, later on, is that I do not at all have a bad work ethic and I am not a bad person. In fact I am quite fierce and get huge amounts of good work done, when I believe that what I am doing is important. It turns out that, for me, to capture this feeling of importance, I had to work on my own projects (and even then it took a long time to find the ideas that really moved me). But once I found this, it basically turned me into a different person. If this is how it works for you, the difference between these two modes of life is HUGE.
Okay, this has been long and rambling. I'll cut it off here. Good luck.
Mathias Rust is another, more famous, unauthorized border-crosser. Thirty years ago, in May 1987, the then-18-year-old West German took off from Helsinki in his rented Cessna and, due to various happenstances, was allowed not only to enter Soviet airspace unchallenged, but to fly all the way to Moscow and land next to Red Square.
Much less known, and much more tragic, is the story of two Finnish teenage boys who, in 1946, set sail from Helsinki in their small boat. Their intention was to voyage to Stockholm to meet some relatives, but as a result of extremely bad luck and post-war Soviet paranoia, they ended up in a forced-labor camp in Siberia.
[1] https://www.minimalfab.com/en/
[2] https://www.youtube.com/watch?v=WsOVbmfYxoM