I appreciate the nm discussion. So often people bash larger process nodes, but wider gates often have lower static power (leakage), and mature lithographies mean more research has gone into optimization. It's not just smaller = better. My understanding is that smaller nodes reduce dynamic power due to lower gate capacitance (because the FET is simply smaller), but there's a lot more to the story, like architecture as mentioned.
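To make the dynamic-power point concrete, here's a sketch of the standard first-order CMOS switching-power model, P_dyn = α·C·V²·f. The specific numbers (activity factor, capacitances, voltages) are made-up illustrative values, not real process data.

```python
def dynamic_power(activity, capacitance_f, vdd, freq_hz):
    """First-order CMOS dynamic power: alpha * C * V^2 * f."""
    return activity * capacitance_f * vdd**2 * freq_hz

# A node shrink mainly cuts C (smaller gates) and allows a lower Vdd:
old = dynamic_power(activity=0.1, capacitance_f=2e-15, vdd=1.2, freq_hz=1e9)
new = dynamic_power(activity=0.1, capacitance_f=1e-15, vdd=0.9, freq_hz=1e9)
print(new / old)  # ~0.28: halving C and dropping Vdd 1.2 -> 0.9 cuts dynamic power ~72%
```

Note the quadratic V² term: the voltage reduction alone accounts for a bigger chunk of the savings than the capacitance halving, which is why node shrinks historically came paired with Vdd scaling.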
I've even had one coworker, more senior than me, get repeatedly praised for how much they multitask. But I've seen them distracted during meetings countless times, and then some time later they get irate: "I wasn't consulted," "nobody told me," "nobody asked my permission," "I don't remember that meeting." If I'm in a meeting and say "my plan is to kick the computer until it works," don't come to me 2 months later to say it was a stupid plan, and worse, get upset that you weren't asked. The point of the meeting is to have a forum for everybody to weigh in if needed, not just to charge the program for an hour while you play Candy Crush or listen in on another meeting.
This feels like a bad solution. At least the STM32H7 line has SPI slave functionality in hardware, no need for PIO. The H7A3 can do full-duplex at 45 MHz per the datasheet, and I've been able to overclock the SPI master, so 45 is likely conservative. That's 3x faster reads. Writes go up to 100 MHz, almost 5x faster, and no PIO programming. I guess you'd have to do a bit of software work to get DMA going during the dummy cycles, so maybe the read command wouldn't work, but that seems like a worthwhile tradeoff for such a big performance improvement. It also looks like this library fully utilizes a core to meet latency needs? That's a bit much.
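For a rough sense of what those clock figures mean in bytes, here's a back-of-the-envelope throughput calculation, assuming plain single-bit SPI (1 bit per clock, ignoring command/dummy-cycle overhead); the 45 and 100 MHz figures are the datasheet claims discussed above.

```python
def spi_throughput_MBps(clock_mhz, bits_per_clock=1):
    """Raw SPI payload throughput in MB/s, ignoring protocol overhead."""
    return clock_mhz * bits_per_clock / 8

h7_read = spi_throughput_MBps(45)    # full-duplex read clock
h7_write = spi_throughput_MBps(100)  # write clock
print(h7_read, h7_write)  # 5.625 MB/s reads, 12.5 MB/s writes
```

Real numbers would be somewhat lower once command bytes, addresses, and dummy cycles are amortized in, but the ratios between the parts hold.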
The cheapest STM32H7 costs about 4x more than RP2040 and runs at 5x the clock, so the fact that it would be 5x faster doesn’t imply a defect in this library.
I don't understand how it takes 3 years to get off the cloud. I'm not a cloud developer, though; the most I've done is run code on free hosts or compute instances. Presumably there's something about the microservices and lambdas and distributed compute that makes this hard. I'm thinking if this were a monolith (which even AWS admits is cheaper), they could just run it locally? What a giant waste of money. I'm very glad to see XaaS starting to die out. At the end of the day it just looks like more middlemen, instead of what I'd always assumed it was intended to be: economies of scale.
However, missing from this article and discussion so far is their revenue. If they pay $4/day and make $2 in revenue, that's bad. They pay $300k/day but make ~$2,250k/day in revenue. I don't know what the ratio is supposed to be, but at first blush that doesn't actually seem too bad. I'll let the more qualified take over; I'm struggling to find out how big a percentage of their total expenses this is.
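Spelling out the ratio with the figures quoted above ($300k/day cloud spend, ~$2,250k/day revenue, the latter itself an estimate):

```python
# Rough cost-to-revenue ratio; both figures are the estimates quoted above.
cloud_cost_per_day = 300_000
revenue_per_day = 2_250_000
ratio = cloud_cost_per_day / revenue_per_day
print(f"cloud spend is {ratio:.1%} of revenue")  # ~13.3%
```

Whether ~13% of revenue on infrastructure is reasonable depends entirely on margins and on what share of total operating expenses it represents, which is the missing piece.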
A mistake I see commonly whenever someone says to "just move off of the cloud" is that they see the cloud as just a VM provider. If it was, then yeah, moving to another provider wouldn't be such a big deal.
In reality, the cloud creeps into your systems in all sorts of ways. Your permissions use cloud identities, your firewalls are based on security group referencing, your cross-region connectivity relies on cloud networking products, you're managing secrets and secret rotation using cloud secrets management, your observability is based on cloud metrics, your partners have whitelisted static ip ranges that belong to the cloud provider, your database upgrades are automated by the cloud provider, your VM images are built specifically for your cloud provider, your auditing is based on the cloud provider's logs, half the items in your security compliance audit reference things that are solved by your cloud provider, your applications are running on a container scheduler managed by your cloud provider, your serverless systems are strongly coupled distributed monoliths dependent on events on cloud specific event buses, your disaster recovery plans depend on your cloud provider's backup or region failover capabilities, etc. Not to mention that when you have several hundred systems, you're not going to be moving them all at the same time. They still need to be able to communicate during the transition period (extra fun when your service-to-service authentication is dependent on your cloud) without any downtime.
It's not just a matter of dropping a server binary onto a VM from a different provider. If I think about how long it would take my org to move fully off of _a_ cloud (just to a different cloud with somewhat similar capabilities), 3 years doesn't sound unrealistic.
> A mistake I see commonly whenever someone says to "just move off of the cloud" is that they see the cloud as just a VM provider. If it was, then yeah, moving to another provider wouldn't be such a big deal.
I think it can still be a big deal depending on the overall system architecture, where all the data stores live, how many services you run, and what constraints you're dealing with.
For example, when you are between two cloud providers, more often than not you will have to replace internal calls with external calls, at least during the migration stage. That has important impacts on reliability and performance. In some scenarios the performance impact is not acceptable and might require re-architecting services.
This is why I have a hard rule of just doing everything in the VM for stuff I build. I'm able to move between cloud providers and even self host often with near zero effort because of this.
People dig themselves in hard, with reliances on all kinds of proprietary services, and complex relationships.
My experience helping people do cloud migrations is that companies also often quickly lose oversight of which cloud services they are even still running. Sometimes systems that should have been shut down years ago are still hanging around, or parts of them anyway, like S3 buckets. Most companies that use cloud systems underprovision their devops because they think a cloud system doesn't need much (in fact, they typically need more to do it well).
For their revenue, $300k/day is very much crazy high.
During good times, many companies simply don't care about which services or how much. I've worked at several startups where they got a large funding round and the word was "we don't care about cloud costs, just get it done fast / make it scale." Unfortunately, one bad mistake (like storing all data in a proprietary service like DynamoDB) can make this difficult to unwind when things get bad...
To me it's the same thing. You can pay somebody to care about that, but they might be underutilized the majority of the time, so it's not worth it. If you offer it as a service, instead of your security expert being used, say, 1/x of full time, they can be used y/x, where y is the number of contracts. For me and my team, we are just way too small to have somebody dedicated full-time. So that's how I think about it.
It is a reasonable point, but I think it is not exactly that. Having your organisation focus on maintenance is a certain type of opportunity cost. It is pretty often one of your most knowledgeable engineers that does this, and it also interrupts the flow of many of your other engineers.
I'd love to see some real figures on that. My gut feeling is most companies spend as much on AWS experts as they previously did on people running in house facilities but I really don't know.
I have never had a client that got away with less maintenance because they used cloud.
In fact, those of my clients who insist on relying on the cloud tend to spend far more with me for systems of similar complexity. I love taking their money, but I'd frankly rather help them save it, because longer term it's better.
The services are the "easy" part; moving data out of a cloud provider is slow and expensive. For a _really_ big dataset it can take months, sometimes years, just to complete the data transfer.
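To see why the data transfer alone can dominate the timeline, here's the arithmetic for a large dataset over a sustained link. The 10 PB size and 10 Gbit/s effective rate are illustrative assumptions, not figures from the article.

```python
# Transfer time for a bulk egress, assuming an ideal sustained link.
dataset_bytes = 10e15   # 10 PB (assumed)
link_bps = 10e9         # 10 Gbit/s sustained (assumed)

seconds = dataset_bytes * 8 / link_bps
days = seconds / 86400
print(f"{days:.0f} days")  # ~93 days at full line rate
```

And that's before retries, egress rate limits, verification passes, and the fact that the source data keeps changing while you copy it, which is what stretches months into years.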
I'll admit there have been times I've been listening to those hour-long music videos (but not "music videos", you know) on YouTube, and only thought to check whether they were AI later. I was very disappointed to find out they were. I really wish there were attribution to the training material. The good news is that this doesn't look like AI, so we may just be returning to a time where the artist's physical appearance doesn't matter versus their talent.
In the same way "embedded" is relative, I appreciate the author's recognition that "edge" is relative. For some, AI at the edge means on-prem server farms. For some it means a mini-pc. For others, maybe an SBC. Here it's a microcontroller. Further still is AI within the sensors a microcontroller would talk to. That's probably just another microcontroller but still.
There's "micro" and "micro". The microcontroller operating a simple coffee machine or a simple washing machine is probably 8- or 16-bit. This is what I would call "bare metal", as they don't run an OS, only off-the-shelf frameworks at best.
For "bigger" devices, it's usually a Cortex inside a system-on-chip or system-on-module: 32-bit single core with a few MB of RAM on the low end (enough to run a regular Linux distro instead of uClinux, for instance), 64-bit multicore for high-end devices that deal with audio/video. That kind of business is often resource-hungry in every way.
I work with that kind of stuff, and to me these "microcontrollers" are monsters I hesitate to call "micro" when some of my coworkers work on much smaller chips with only a few KB of RAM available.
I do wish sometimes they used the bigger micro, though. We have some power supplies that technically have an Ethernet interface, but when using it, even for SCPI over TCP (forget about the virtual front panel, which takes a minute to update), it lags so badly the output-enable button needs a few tries to toggle. I should practice yanking the positive wire for an emergency.
Just to make sure I'm understanding: when they say somebody expected to get x euros, they really won (x/100)/100 euros? Divided by 100 once to get it back to what it was before the math, and a second time to do the math correctly?
This could get dicey. I know there are a lot of users out there and I understand the various causes for use. However, for those of us who didn't because of concerns like this, it is a bit vindicating. Addiction is a disease, full sympathy, but please get help if you need it.
I think that a lot of drug users (at least in America) use drugs to escape their circumstances. IMHO, what's needed is to help people improve their circumstances so that they have better alternatives. Unfortunately, that is sorely lacking in America.
Speaking as an addict, yes, most regular users are. It's not the same kind of addiction (emotional, not physical), but it's still addiction. If you smoke weed every day for a year and then stop, for the first year after quitting your suicide risk is doubled.
Isn't it possible that people smoking weed were treating conditions like anxiety or depression and were more likely to commit suicide in the first place? Weed has had a real deficit of actual study, and although some people can develop a dependency on it, I have a hard time qualifying those people as addicts when stopping weed is so much less dangerous and difficult than with other substances. It's a complicated topic that requires quite a lot of care when you talk about it, because labelling someone an addict is a tall order, and weed has been a moral panic for so long.
People don’t take the withdrawal potential seriously.
I’ve had a complex relationship with cannabis over the years, and one of the things that I didn’t understand before getting stuck in a daily habit of heavy use for a period of time is how hard it can be on your body to just stop.
I had major upset stomach and food aversion for a few weeks to the point that I lost 10 pounds, major sleep issues, a major spike in depressive symptoms (not just baseline depression, because I eventually got back to baseline), etc.
No, upset stomach and food aversion were not common for me before. I've gone through the quitting process several times over the years, and the experience was consistent each time. I've also run experiments while quitting like: what happens to the food aversion and stomach issues if I use some cannabis? When in the middle of withdrawal, it was like flipping a switch that allowed me to eat again.
These symptoms are well known among the cannabis community, and there is increasing awareness in the medical community [0] especially for people who experience the most extreme forms like CHS [1], which has become increasingly common with more people using extremely high % THC products.
If you're curious about this subject, I'd highly recommend reading the thousands of 1st hand accounts of the quitting process on a subreddit like /r/petioles or /r/leaves.
It's extremely common for people to be skeptical of cannabis addiction/withdrawal [2], and I'm pretty certain this is an overcorrection after the decades of demonization and straight-up lies about cannabis. This is understandable, but the pendulum is gradually swinging back as more people experience difficulties quitting.
I'm still in favor of legal cannabis, but the addiction potential is very real and worth highlighting. Part of the issue currently is that research is decades behind at this point due to the federal scheduling of the drug, and much of the research we do have was conducted with old low-THC strains. Hopefully we'll get better studies that highlight the things many heavy users already know 1st hand.
"Carolla co-hosted the syndicated radio call-in program Loveline with Drew Pinsky from 1995 to 2005 as well as the show's television incarnation on MTV from 1996 to 2000."
Pinsky is seemingly a medical doctor specializing in addiction treatment. Just saying he has a financial incentive to present hyperbole, not to mention a dramatic one (anything for more content for his rehab TV shows).
That effect could be simply due to pre-existing pathology such as anxiety, depression or even PTSD, not necessarily due to cannabis. Do you have a source for those statistics?
Some of us have been smoking for the past 20, 30, 40 years without cardiovascular issues tho. And I have met many old guys and ladies smoking weed in their 60s and 70s doing fine, if not better than their peers.
There are many old guys who smoke a pack of cigarettes a day their whole lives. There are many more guys who aren't as old because they died from smoking a pack a day. (I'm not saying weed causes these issues, but I am saying anecdotes are less than useful.)
I have met many smokers with diseases directly related to the habit of smoking cigarettes. I have never met a cannabis smoker in the same situation, or one who developed it over time (over a 20-year span).
Obviously, cannabis smokers also die from heart disease, but if smoking cannabis were related to heart attacks, we would know by now; research wouldn't be needed, or the link would be quite obvious and accepted.
Cannabis was illegal for many years in most countries (it often still is, but the laws are ignored). There have not been many studies yet; given the legalization timelines, I'd expect the first long-term studies to start coming out about now. And of course the first studies will not be conclusive.
The Windows support is interesting because Python dropped 32-bit x86 for Linux yet kept it for Windows (and maybe ARM). That means, unless they're doing stupid things, the code can't be assuming 64-bit, which meant that once I modded the auto tooling to remove the lock-out, it worked just fine. So I guess I understand both sides: it's not hard to just compile it yourself if you have the expected configuration, but it's also usually not much work to compile a package twice in a release flow once you have the pipeline set up.
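For context on "the code can't be assuming 64-bit": portable Python code checks the word size at runtime rather than baking it in, which is why the same source can serve a 32-bit Windows build and a 64-bit Linux build. A minimal sketch of the two standard checks:

```python
import struct
import sys

# Pointer size in bits: 32 on a 32-bit interpreter, 64 on a 64-bit one.
bits = struct.calcsize("P") * 8

# The other common idiom: sys.maxsize exceeds 2**32 only on 64-bit builds.
is_64bit = sys.maxsize > 2**32

print(bits, is_64bit)  # e.g. "64 True" on a 64-bit interpreter
```

Code written against these checks (instead of hard-coded assumptions) is exactly the kind that keeps working when a distro re-enables a 32-bit target.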
The Python example continues to confuse me. In a comment in the proposal discussion, CPython was used as an example of a case that would cause problems if 32-bit support were dropped, and now you say that Python already dropped 32-bit for Linux, seemingly without consequences for Fedora. That dropping 32-bit support would not have consequences is what I would have expected anyway, because Fedora stopped offering a kernel and installer with 32-bit support some 5 years ago.
https://github.com/python/cpython/blob/847d1c2cb4014f122df64.... i686 is Windows-only. I see there's a warning about this later if you're unsupported (and therefore a 0 case), but either that becomes an error later on, or they softened the impact since I did this. Or I'm looking at the wrong check.