I like them too, but they run out quickly. Only a fraction of the cartridge is ink; the majority is plug gel and pressurised air. Apparently that's required for it to function properly.
The erasing part is what frightens me; putting your document near a heat source erases your writing (the heat produced by the friction of the eraser is what removes the ink when you erase manually).
This is useless. You can't stop Collective Shout (their campaign almost surely falls under First Amendment rights), and even if you could, 30 minutes later a new group pops up. Plus your message would fall completely on deaf ears for anyone who agrees with Collective Shout.
Bring attention to the fact that payment processors are acting as active censors of legal content rather than as neutral infrastructure. Emphasize that if they can censor legal content, anything could be next, including but not limited to political donations to a specific party.
Collective Shout is a foreign organization attacking American companies. The First Amendment does not mean you get to speak and advocate in secret, and it only applies to American residents.
> Wow, that's smart. I was wondering whether there is a way for the bots to generate "unpredictable" domains such that security researchers could not predict them efficiently (even with source code), but the botnet controller can.
There is a fairly simple method which achieves the same advantage for a botnet controller.
1. Use a hash of the current day to derive, for that day, an infinite stream of domain names. This could be something as simple as `to_human_readable_domain(sha256(daily_hash + i))`.
2. A botnet slave attempts to access servers in a diagonal order over (days, domains), starting at the first domain for today and working backwards in days and forwards in domains. An image best describes what I mean by this: https://i.imgur.com/lcEbHwz.png
3. So long as one of those domains is controlled by the botnet operator (which can be verified using a signed response from the server), they can control the botnet.
This means that the botnet operator only needs to purchase one domain every couple of days to keep controlling their botnet, while someone trying to stop them will have to buy thousands and thousands every day.
And when you successfully purchase a domain you can publish the new domain to any connected slaves, so this scheme is only necessary for recruitment into the network, not continued control.
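A minimal sketch of steps 1 and 2 in Python; the SHA-256-of-the-date seed, the 16-character labels, the `.com` suffix, and the exact diagonal order are my assumptions, since the comment (and the linked image) leave those details open:

```python
import hashlib
from datetime import date, timedelta

def daily_hash(day: date) -> bytes:
    # Step 1: a per-day seed; deriving it from the ISO date string is my assumption.
    return hashlib.sha256(day.isoformat().encode()).digest()

def domain(day: date, i: int) -> str:
    # to_human_readable_domain(sha256(daily_hash + i)), approximated here by
    # taking 16 hex characters and tacking on ".com" (both placeholders).
    digest = hashlib.sha256(daily_hash(day) + i.to_bytes(4, "big")).hexdigest()
    return digest[:16] + ".com"

def diagonal_candidates(today: date, diagonals: int):
    # Step 2: walk the (days, domains) grid diagonally, so each pass reaches
    # further back in days and further forward in domain index.
    for d in range(diagonals):
        for back in range(d + 1):
            yield domain(today - timedelta(days=back), d - back)

for candidate in diagonal_candidates(date.today(), 3):
    print(candidate)  # the bot would try each and check for a signed response
```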
Imgur has been inaccessible for me for months; they're one of those organizations that considers it proper to block whole countries to counter bot abuse.
I've definitely heard of C&C setups using multiple domains for this reason. The bots have a list of domains they reach out to, searching for one that is valid.
I believe one issue with this strategy is that many corporate VPNs block fresh domains. I guess if the software were pinned to use encrypted DNS instead of whatever resolver the OS provides, then the DNS blocking could be avoided...
My employer uses Zscaler. I don't know exactly how they implement this, but my educated guess is the corporate DNS server doesn't resolve domains that were created recently.
In technical terms, the device asks the private corporate DNS server for the IP address of the hostname. The private DNS server checks the requested domain against a threat intelligence feed that tracks domain registration dates (and security risks). If the domain is deemed a threat, the server either returns an IP address pointing at a server that shows a warning message (for HTTP traffic) or returns an invalid IP (0.0.0.0).
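A rough sketch of that resolver policy in Python; the feed contents, the warning-server IP, and the 30-day threshold are made-up placeholders, since I don't know how Zscaler actually implements it:

```python
from datetime import date, timedelta

# Stand-in for a threat intelligence feed: domain -> registration date.
REGISTRATION_DATES = {
    "example-fresh-domain.com": date.today() - timedelta(days=3),
    "example.com": date(1999, 1, 1),   # long-established (placeholder date)
}

WARNING_SERVER_IP = "10.0.0.53"        # assumed internal host serving the warning page
MIN_DOMAIN_AGE = timedelta(days=30)    # assumed "fresh domain" threshold

def resolve(domain: str, real_ip: str, is_http: bool) -> str:
    """Return the IP the corporate resolver hands back for this domain."""
    registered = REGISTRATION_DATES.get(domain)
    too_new = registered is not None and date.today() - registered < MIN_DOMAIN_AGE
    if too_new:
        # Freshly registered domain: warning page for HTTP, blackhole otherwise.
        return WARNING_SERVER_IP if is_http else "0.0.0.0"
    return real_ip

print(resolve("example-fresh-domain.com", "203.0.113.7", is_http=True))   # 10.0.0.53
print(resolve("example.com", "203.0.113.8", is_http=False))               # 203.0.113.8
```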
Ooh, I remember this, but the practice is actually older than that.
First, nVidia and ATI used executable names to detect games; then they started adding heuristics.
If you think they've stopped the practice, you're very mistaken. Every AMD and nVidia driver has game- and app-specific fixes and optimizations.
nVidia cheated in 3DMark that way, so the benchmark's developers patched/changed it to prevent this. nVidia also patched their drivers so that some of the more expensive but visually invisible calls in a particular game, such as scene flushes, are batched (e.g. all 50 flushes are performed at the 50th call) to keep the game from becoming a slide show even on expensive hardware.
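To illustrate the batching trick being described (a toy model, not the actual driver code): a shim that counts the game's flush calls and only issues one real flush per batch.

```python
class BatchingFlusher:
    """Toy driver shim: collapse a game's redundant flush calls into one real flush per batch."""

    def __init__(self, real_flush, batch_size=50):
        self.real_flush = real_flush
        self.batch_size = batch_size
        self.pending = 0

    def flush(self):
        self.pending += 1
        if self.pending == self.batch_size:
            self.real_flush()   # one expensive flush stands in for the whole batch
            self.pending = 0

hardware_flushes = []
flusher = BatchingFlusher(real_flush=lambda: hardware_flushes.append("flush"))
for _ in range(100):
    flusher.flush()              # the game "flushes" 100 times...
print(len(hardware_flushes))     # ...but only 2 flushes reach the hardware
```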
This is also why AMD's and Intel's open source drivers under Linux are a success: they are vanilla drivers written from scratch per spec, and if your code calls OpenGL/Vulkan to spec, then you're golden.
Some companies even cross-compile AMD's Linux drivers for Windows on embedded systems, since those drivers are free of game-specific optimizations that are useless for them.
Interestingly, most benchmark controversies from back in the day are now expected behaviour, i.e. game-specific optimizations with no visible image degradation (well, in this age of upscalers and other lossy optimization techniques, probably even somewhat visible degradation is accepted). A gaming-focused driver with no game-specific improvements in its changelog would be considered strange, and those improvements very much work via executable detection.
Back in the day, there was still the argument that drivers should not optimize for benchmarks even when visually identical, because it wouldn't show the hardware's real world potential. Kinda cute from today's perspective. :)
But of course there were the obvious cases...
The Quack3 case of lowering filtering quality, as shown above, of course (at least that one was later exposed in the driver as a toggleable setting).
But the cheekiest one has to be nVidia's 3DMark03 "optimizations", where they blatantly put static clip planes into the scenes so that everything outside the benchmark sequence's predefined camera path would simply be culled from the scene early (which, e.g., fully broke the freelook patched into 3DMark and would generally break any interactive application).
I think that was the first case (to go public), but I remember reading about this in game magazines a couple of times afterwards, for both ATI and nVidia.
Even in the middle of that turmoil, we managed to compile some code with Intel's ICC and make it run faster on AMD Opterons, beating Intel's own numbers.
When my colleague said they had managed to go faster than Intel with ICC and some hand-tuned parameters, I remember answering "youdidwat?".
At runtime of the compiled code. The ostensible intent is that new processors can use new features like SIMD extensions, while offering a fallback for older ones. In practice, they're detecting an Intel processor, not just the specific feature.
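A toy illustration of the difference, assuming a Linux machine where `/proc/cpuinfo` is readable; this is not ICC's dispatch code, just the two policies side by side:

```python
def read_cpuinfo():
    """Read the CPU vendor string and feature flags from /proc/cpuinfo (Linux-only sketch)."""
    vendor, flags = "", set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("vendor_id"):
                vendor = line.split(":", 1)[1].strip()
            elif line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
    return vendor, flags

vendor, flags = read_cpuinfo()

# Feature-based dispatch: what the fallback mechanism is ostensibly for.
feature_path = "avx2 kernel" if "avx2" in flags else "generic kernel"

# Vendor-based dispatch: the behaviour being criticised -- an AMD chip that
# supports AVX2 still lands on the slow path because vendor_id != "GenuineIntel".
vendor_path = "avx2 kernel" if vendor == "GenuineIntel" and "avx2" in flags else "generic kernel"

print(feature_path, "|", vendor_path)
```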
ok i took a look: i think i just did "companies by market cap" not "wikipedia companies by market cap"... should have refined rather than assume the wiki doesn't exist.
> The most novel aspect of OoT bitsets is that the first 4 bits in the 16-bit coordinate IDs index which bit is set. For example, 0xA5 shows that the 5th bit is set in the 10th word of the array of 16-bit integers. This only works in the 16-bit representation! 32 bit words would need 5 bits to index the bit, which wouldn't map cleanly to a nibble for debugging.
There is nothing novel about this really. It's neat that it works with hexadecimal printing to directly read off the sub-limb index but honestly who cares about that.
Outside of that observation there's no advantage to 16-bit limbs and this is just a bog-standard bitset where the first k bits indicate the position within the 2^k bit limb, and the remaining bits give you the limb index.
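For concreteness, here's that standard layout with 16-bit limbs, using the quoted 0xA5 example (low nibble selects the bit, the remaining bits select the limb); the helper names are mine:

```python
def split(flag_id: int):
    # Low 4 bits: bit position within a 16-bit limb; remaining bits: limb index.
    return flag_id >> 4, flag_id & 0xF

def set_flag(limbs: list, flag_id: int) -> None:
    limb, bit = split(flag_id)
    limbs[limb] |= 1 << bit

def get_flag(limbs: list, flag_id: int) -> bool:
    limb, bit = split(flag_id)
    return bool((limbs[limb] >> bit) & 1)

limbs = [0] * 16
set_flag(limbs, 0xA5)          # "bit 5 of word 0xA" reads straight off the hex ID
print(get_flag(limbs, 0xA5))   # True
print(hex(limbs[0xA]))         # 0x20
```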
My comment was worded a bit too harshly, sorry about that; I certainly wouldn't have worded it that way in a personal message.
As I mentioned, the hexadecimal-printing coincidence is a neat fact; I was just excited, when clicking the link, to find a novel bitset idea. In my disappointment at finding the standard bitset (albeit with 16-bit limbs) I reacted a bit too harshly. And as per https://xkcd.com/1053/, just because something isn't new to me doesn't mean it isn't new to someone.
The VN (von Neumann) extractor is a specific case of a more general idea: when you draw M times independently (a hard assumption of the VN extractor) from N possibilities, you can extract entropy from the permutation of the draws.
Assign some scheme for converting permutations to an index.
Then, to get uniform bits out, maintain two variables: one is the running product of the numbers of permutations, the other gets multiplied by the number of permutations with the index added. Whenever the number of possibilities is divisible by two, output the LSB of the index accumulator, then halve both the index accumulator and the number of possibilities.
Scale up your groups and accumulators and you can get arbitrarily high extraction rates.
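A minimal sketch of that for the biased-coin case; the "scheme for converting permutations to an index" isn't specified above, so I'm assuming the combinatorial number system (ranking arrangements with the same number of heads):

```python
from math import comb
import random

def arrangement_index(flips):
    """Rank this arrangement among all arrangements with the same number of heads,
    via the combinatorial number system. Conditioned on the head count, the rank
    is uniform on [0, C(n, k)) when the flips are independent."""
    positions = [i for i, heads in enumerate(flips) if heads]
    rank = sum(comb(p, j + 1) for j, p in enumerate(positions))
    return rank, comb(len(flips), len(positions))

def extract_bits(groups):
    """Accumulate (rank, count) pairs arithmetic-coding style and emit a bit
    whenever the accumulated range is even."""
    bits, value, possibilities = [], 0, 1
    for flips in groups:
        rank, count = arrangement_index(flips)
        value = value * count + rank          # value stays uniform on [0, possibilities)
        possibilities *= count
        while possibilities % 2 == 0:
            bits.append(value & 1)            # LSB of a uniform value on an even range is fair
            value >>= 1
            possibilities //= 2
    return bits

random.seed(0)
biased_coin = lambda: random.random() < 0.8   # heads 80% of the time
groups = [[biased_coin() for _ in range(8)] for _ in range(1000)]
bits = extract_bits(groups)
print(len(bits), sum(bits) / len(bits))       # ratio of ones should be close to 0.5
```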
Doing it efficiently and in constant time (e.g. without divisions) is the more exciting trick. A colleague and I managed an extractor for the binary case that takes 10+3N multiplies and N CTZs to pack N bits (giving an exact invertible encoding whenever (bits choose ones) < 2^64).
But also, you might have to flip the coin an arbitrarily large number of times before you get a "heads tails" or "tails heads" roll (if I can arbitrarily pick how biased the coin is).
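To quantify that with the standard von Neumann analysis (not from the comment above): a disjoint pair of flips yields an output bit only when it comes up HT or TH, so

$$\Pr[\text{pair yields a bit}] = 2p(1-p), \qquad \mathbb{E}[\text{pairs per output bit}] = \frac{1}{2p(1-p)},$$

which grows without bound as the bias $p$ approaches 0 or 1, and even for a fair coin the worst case is unbounded.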
Whether or not DuckDB is faster than Polars depends on the query and the data size. I've spent a large portion of the last two years building a new execution engine for Polars and optimizing it, and it shows: https://pola.rs/posts/benchmarks/.