Why is this getting downvoted? Indoor cats have longer lifespans than outdoor cats. Keeping cats indoors is the right thing to do, as long as you provide plenty of indoor play & stimulation to keep them active.
It’s not as “simple” as GP suggests, nor is it “the right thing”. We really don’t know what psychological and quality-of-life differences there are between the two… we’ve never been able to ask the cats. There are trade-offs, life expectancy being only one dimension. And in both cases there are mitigations to the downsides (like the play you mention).
Source: I’ve had both indoor and outdoor cats, including cats that transitioned to outdoor life after being raised indoors.
Lost one outdoor cat tragically in the middle of his lifespan. But his equally-outdoor brother lived to 19, and all the outdoor cats hands down seemed happier on average (less lethargic, less neurotic, less obsessive, less prone to overeating).
It's getting downvoted because this varies hugely by location. In the UK, the life expectancy difference is not that big and there are no large predators, so the consensus among cat owners is that cats live richer lives if let out. If you live in the US, your cat could be eaten by a mountain lion or coyote.
This is pretty much the same attitude which leads to kids being wrapped in cotton wool, ferried everywhere in giant SUVs and never let outside to play. I find it a bit sad.
Dude, it isn't giant-SUV drivers who are advocating for indoor cats. They're the same people who hold onto the whimsy that "cats are natural and belong outdoors! :)"
So what you're saying is you'd need to be an uneducated imbecile to prefer politicians speaking to live space footage.
I think you're selling uneducated imbeciles short; surely even they prefer the space footage. Only the politicians doing the speaking prefer themselves.
I had the complete opposite experience a few weeks ago. It took me about an hour max to install Android Studio on Manjaro, install the SDKs, spin up an Android VM, pull an open-source Android keyboard app and add my own customised layout, then test it in the VM and install it to my phone. There were no errors at all. Just clicking buttons and waiting.
Why does something like this require downtime? Couldn't both databases be used at the same time, backfilling older entries in parallel, and once all the data has migrated, flip a feature switch to stop writing to the old DB? Reads can be switched with a flip too, or with some smart logic that checks the old database when data is not found in the new one. This approach is harder to implement and the overall migration would take longer, but considering how many companies depend on GitLab SC, I think this should be the preferred approach. A rough sketch of what I mean is below.
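Something like this dual-write pattern, as a minimal sketch (everything here, the names, the flag, the dict-backed "databases", is hypothetical, not GitLab's actual code):

    # Minimal sketch of the dual-write migration pattern described above.
    class DualWriteStore:
        def __init__(self, old_db, new_db):
            self.old_db = old_db             # legacy store, being retired
            self.new_db = new_db             # target store, being backfilled
            self.migration_complete = False  # the "feature switch"

        def write(self, key, value):
            self.new_db[key] = value
            if not self.migration_complete:
                self.old_db[key] = value  # keep legacy store consistent until cutover

        def read(self, key):
            if key in self.new_db:
                return self.new_db[key]
            if not self.migration_complete:
                return self.old_db.get(key)  # row not backfilled yet, fall back
            return None

    # Usage: plain dicts stand in for the two databases.
    store = DualWriteStore(old_db={"a": 1}, new_db={})
    print(store.read("a"))  # 1, served from the legacy store via fallback
    store.write("b", 2)     # lands in both stores
    store.migration_complete = True
    print(store.read("a"))  # None -- which is why the backfill must finish before the flip

The fiddly part is keeping the two stores consistent while the backfill races with live writes, which I admit is a big chunk of why teams accept a short downtime window instead.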
Swallowing an hour or two of pre-scheduled downtime can be worth it if you can significantly reduce the complexity and risks associated with a migration, and get it over and done with sooner. Particularly if you're already hurting from whatever it is that you're fixing, and it's about to cause you unscheduled downtime.
The comment [0] provides more insight into the planning and downtime requirements. The epic itself may be helpful too; it is linked from the blog post.
I don't think you can say that. They might know what happened, but it could still be hard to recover from.
Catastrophes happen. If you deploy something that has a destructive migration you can't easily roll back without reverting to a backup, and there's a problem you didn't see in QA, then you're in for a bad time. This is compounded if you also discover your backup process hasn't worked properly for a while. If that happens you're facing some serious downtime, and the dilemma of either trying to fix the problem or trying to roll back to the last working backup.
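To make "destructive" concrete, here's a toy example (hypothetical schema, nothing to do with the actual incident) of why redeploying old code doesn't undo such a migration:

    # A destructive migration: once the column is dropped, the data is gone.
    # Requires SQLite >= 3.35 for DROP COLUMN support.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, legacy_token TEXT)")
    db.execute("INSERT INTO users VALUES (1, 'a@example.com', 'secret-token')")

    db.execute("ALTER TABLE users DROP COLUMN legacy_token")  # the destructive step

    # Rolling back the code re-adds the column, but not its contents.
    db.execute("ALTER TABLE users ADD COLUMN legacy_token TEXT")
    print(db.execute("SELECT legacy_token FROM users").fetchone())  # (None,)

Only a backup gets that data back, hence the dread.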
There's a good reason why grumpy old devs like me insist on writing docs, having playbooks, testing everything including non-code stuff, and we still fear major deploys. I have scars from exactly those sorts of disasters.
Hopefully the devs at Circle get past this with as little stress as possible, and they learn from what went wrong.
You are totally right, catastrophes happen, and I also hope they get past this with as little stress as possible. The whole reason for my assumption was the lack of description in their updates for an incident that has been going on for 6 hours. Maybe a little more detail would have given me a hint that everything is under control, but I didn't feel that when I read their updates.
When facing such large-scale issues, communicating properly is very hard: several teams might be investigating several possible root causes in parallel, and you might change your mind over time about what the most probable root cause is.
So you might end up communicating something ("we think it comes from X, we're fixing it that way"), just to find yourself changing your mind a few minutes later.
Changing your message is usually not well received, even though that's actually normal during an investigation.
I would not like to be in charge of the communication. Finding the balance between saying too much or too little is tricky.
It’s probably just because they had the choice of either focusing all their energy on fixing the problem asap or setting aside some of it to write a more detailed description that’s also fit for public consumption. Given the severity, they probably chose the former since whatever descriptive, reassuring description they put out there isn’t going to be actionable anyway.
Hi folks. As the CircleCI CTO, I appreciate your patience here and all the feedback. It's true that we are focused on getting customers moving again over sharing more detailed information, but we will aim to do better at providing a bit more in our updates. status.circleci.com provides real-time updates on how we're tackling outages as well as more detailed incident reports. We will post more information there about this incident once we are on the other side and have comprehensive detail.
If the question asked for just returning the sum, then yeah, that would be acceptable. However, the question requires comparing the sums. To decide which one is actually larger, 5-6 digits is not enough, even "practically".
It's not really different. If you were given the sums to 5-6 digits in the first place and that was an acceptable error margin, then the comparison is within that acceptable error margin too; it's just that this time the error produced a wrong binary answer rather than a wrong decimal digit.
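With made-up numbers (not the ones from the actual question), the point looks like this:

    # Both sums round to the same 6 significant digits, so a 6-digit
    # answer was already within the error margin -- it just can't
    # decide the comparison.
    sum_a = 123.4561
    sum_b = 123.4564

    print(f"{sum_a:.6g}")  # 123.456
    print(f"{sum_b:.6g}")  # 123.456
    print(sum_a < sum_b)   # True -- the difference lives past the 6th digit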
How long till Linux or Gnome supports these types of laptops? To me this is an incredible concept, since I could use it both as another monitor and as a portable device for the few days a year I'm on holiday. But only once it has full Linux support.
Does it work out of the box, or do you need to do something specific? I have a custom-built desktop with a 4K and a 1080p monitor. I can only get one of them to look good. :( I'm on Mint.
Works out of the box for me since moving to Fedora running Wayland and Gnome.
I just set one monitor to 100% and the other (4k) to 200%. In fact I think that choice was originally made for me but you can change it.
When I first started using it a couple of years ago you'd get odd artifacts when moving a window from one monitor to another, e.g. the mouse pointer could be the wrong size.
Over the years, like many things on Linux, it's gotten more polished.
Linux supports this, but applications on Linux may not (or not gracefully). I've found Chrome on Ubuntu (GNOME with Wayland) just doesn't adjust its scale when being dragged across displays with differing UI scaling settings. I don't have this problem on Windows or Mac OS.
- Gnome actually does the scaling in the compositor, not in the X server itself.
- Most desktop environments should be doing all that for you. At the time I wrote that (over 2 years ago), in Gnome you had to go into an advanced settings menu and enable "experimental fractional scaling"; I'm not sure if that's still true today.
- Some things pick up the DPI from the XSETTINGS protocol, not from the XRDB. Specifically, parts of Java's AWT/Swing do this. But other parts use the XRDB. And they conflict with each other. When I last looked at Swing (April-ish 2020) it was impossible to get it to do the right thing on X11 (I was working on a patch, but then I had some life disruptions and never came back to it). A quick way to inspect the XRDB side is sketched below.
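If you want to check what your XRDB actually advertises, something like this works (assuming xrdb is installed and DISPLAY is set; the XSETTINGS value lives in a selection owned by the XSETTINGS manager and needs a real X client to read, so it's left out here):

    # Print the Xft.dpi resource from the X resource database, if set.
    import subprocess

    out = subprocess.run(["xrdb", "-query"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.startswith("Xft.dpi"):
            print(line)  # e.g. "Xft.dpi: 192" on a HiDPI setup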
It is. Right now your mileage will vary depending on graphics driver, compositor (KDE and Gnome each have their own), etc. - but for me it worked pretty well. Wayland's not _quite_ polished yet.
Wayland itself functions quite well in my experience, but if you run anything Nvidia or anything through the X11 compatibility layer, the occasional rare but annoying bug will slip through.
Linux desktops support color management to approximately the same extent Windows does, which is to say "annoying for professional users with specific applications, non-existent otherwise".
Uh, you can load ICC profiles on both Linux and Windows. On my Thinkpad X201 it was a hard requirement if you didn't want your eyes to bleed; without the profile it was so blue it was like staring into a bug zapper.
This is a super common misunderstanding. When you load an ICC profile into "the system", sometimes all the colors everywhere change, because many ICC profiles contain (non-standard) RGB gamma ramps. Those are 1D per-channel LUTs - the same ballpark as changing the "R G B" values in a monitor's OSD; they can't do color-space conversions. This is done essentially to reduce numerical artifacts when a color-managed application uses the profile. [1] It's global because gamma ramps are part of the scanout system (display interface) in a graphics adapter.
This effect makes it look like the entire system is color-managed when in fact the opposite is true.
[1] Any profile that has been created with gamma ramps has to be used system wide and specified in color-managed applications, because the standard color transforms in the profile are calculated to be correct when the gamma ramps are in place. If you skip either step, you'll get wrong results.
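A tiny numeric sketch of the distinction (illustrative numbers, not a real profile):

    # A gamma ramp is three independent 1-D curves; a colorspace
    # conversion mixes channels through a 3x3 matrix. No per-channel
    # ramp can reproduce the mixing.
    import numpy as np

    rgb = np.array([0.2, 0.5, 0.8])

    # Per-channel ramp: each output channel depends only on its own input.
    ramped = rgb ** (1 / 2.2)

    # Conversion matrix (made up for illustration): output red depends
    # on input green too, which a 1-D LUT cannot express.
    m = np.array([[0.82, 0.18, 0.00],
                  [0.03, 0.97, 0.00],
                  [0.02, 0.05, 0.93]])
    converted = m @ rgb

    print(ramped)     # channel-wise curve only
    print(converted)  # channels mixed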
Wouldn't it be possible to do a vector product with the wrong RGB gamma ramp to get a resulting color profile that would work system-wide? (as it may be easier to change the color profile than the RGB gamma ramp)
Color temperature is not what's being talked about here; that's just a small subset of color management. HDR is also not about bits, but color space, which requires profile support.
Even 10-bit support is another issue. A few compositors have added alpha-quality support recently, but it isn't something you can expect to use.
From what I read, the Windows support for the X1 Fold is still pretty bad, and for specialty consumer devices Linux support usually lags behind quite a bit.
I was considering buying an X1 Fold, but reviews saying Windows support was bad (probably since Windows 10X never launched) and no Linux support even on the horizon made me hold off.
The MacBook-iPad integration for external display use is really quite nice. I don't use it much right now because I've got a nice extra-wide monitor next to my laptop stand for when I'm at home, and these days I'm always at home. But I used to use it back when I'd work at the library in the olden days, and it was nice to have an extra screen from time to time.
Yeah, there are many people just using their laptops as a monitor on a stand next to their existing monitors, and then using whatever separate keyboard they like. So this would essentially remove the bottom half of the laptop for extra screen real estate.
I can imagine this would work fine in a full-display mode too (if that's the hardware default), but then when you try to switch to laptop mode, somehow half of the screen needs to turn off and the other half needs to rotate 90 degrees. It sounds like a lot of work.