> This chapter describes the current draft proposal for the RISC-V standard compressed instruction
> set extension, named “C”, which reduces static and dynamic code size by adding short 16-bit
> instruction encodings for common operations.
The RISC-V compressed instructions are more a shorthand for some commonly used 32-bit instructions than a 16-bit ISA that stands on its own: the compressed instructions lack a number of essential operations, which makes it impractical to use only compressed instructions.
This is in contrast to, say, the ARM Thumb instruction set, where you can have entire libraries that use 16-bit instructions without ever needing a 32-bit one.
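To make the "shorthand" nature concrete, here's a small C sketch (the helper name insn_length_bytes is mine, not from the spec) of how a decoder tells the two encodings apart: the two lowest bits of every RISC-V instruction encode its length, and each 16-bit compressed instruction expands to exactly one 32-bit base instruction.

    #include <stdint.h>
    #include <stdio.h>

    /* Per the RISC-V spec, the two lowest bits of an instruction encode its
     * length: anything other than 0b11 marks a 16-bit compressed ("C")
     * instruction, and each compressed instruction expands to exactly one
     * 32-bit base instruction (e.g. c.li a0, 0 expands to addi a0, x0, 0). */
    static int insn_length_bytes(uint16_t first_halfword)
    {
        return ((first_halfword & 0x3) == 0x3) ? 4 : 2;
    }

    int main(void)
    {
        printf("%d\n", insn_length_bytes(0x4501)); /* c.li a0, 0                 -> 2 */
        printf("%d\n", insn_length_bytes(0x0513)); /* low half of addi a0, x0, 0 -> 4 */
        return 0;
    }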
But it's odd that they do not cite Tokio. I know this isn't an academic paper, but come on, have some professional courtesy and discuss the contributions made in prior art.
Apologies if I'm misunderstanding things here, I'm just now getting back into Rust after a couple of years of not using it. Did Tokio really inspire this library that much?
Absolutely. The whole std::future interface has been born out of years of careful attempts to actually make these abstractions work in real life. async-std didn’t come from a vacuum. It’s an incremental improvement on Tokio that benefits from being able to greenfield on top of the newly changed and standardized Future trait.
Carl Lerche and the rest of the Tokio contributors deserve a citation.
I've got a LimeSDR-mini (which is more economical). If you're only interested in RX, you should definitely consider the cheaper RTL-SDR. I'm just starting to learn about SDR, and am far from sending anything. I wish I had gone with an entry-level device first.
I’ve got at least a dozen RTL-SDRs strewn around the place lol. They are easy to use, but once you hit some of their limitations you wind up wanting something more.
It's not so much the fork but the memory cost. Each of those subprocesses has at least one call stack = 2 megabytes of memory. 2 megabytes per connection is many, many orders of magnitude more than you would use in an asynchronous server.
1) that's virtual size, and most likely (depending on OS/cfg) COW (assuming no call to execve).
2) that's a default - most systems allow tuning
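To illustrate point 2, here's a small sketch (assuming a POSIX system; defaults and sensible values depend entirely on your OS and workload) of lowering the stack soft limit before forking workers; children inherit the smaller limit:

    #include <stdio.h>
    #include <sys/resource.h>

    /* Sketch: shrink the stack soft limit before spawning workers.  The
     * limit is inherited by children created with fork(), so each worker
     * reserves far less address space than the usual multi-megabyte default.
     * (Whether this is appropriate depends on your OS/config and workload.) */
    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) == 0)
            printf("default stack soft limit: %llu KiB\n",
                   (unsigned long long)(rl.rlim_cur / 1024));

        rl.rlim_cur = 256 * 1024;   /* 256 KiB: plenty for many small workers */
        if (setrlimit(RLIMIT_STACK, &rl) != 0)
            perror("setrlimit");

        /* fork() worker processes here; they inherit the smaller limit */
        return 0;
    }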
You can get pretty decent performance with forking models if you 1) have an upper bound on the number of concurrent processes, 2) have an input queue, and 3) cache results and serve from cache, even for very small time windows. Not execve'ing is also a major benefit, if your system can do that (e.g. no mixing of threads with forks). In forking models, execve + runtime init is the largest overhead.
It will not beat other models, but forking processes offer other benefits such as memory protection, rlimits, namespace separation, capsicum/seccomp-bpf based sandboxing, ...
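For flavor, here's a toy sketch of that shape of design (purely illustrative, error handling omitted, port and worker count made up): a fixed pool of workers forked once, no execve, all accepting from the same listening socket, with the kernel's accept backlog acting as the input queue.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NUM_WORKERS 8          /* the upper bound on concurrent processes */

    /* Toy pre-forking server: fork a fixed pool of workers up front - no
     * execve - and let them all accept() on the same listening socket. */
    static void worker(int listen_fd)
    {
        const char resp[] = "HTTP/1.0 200 OK\r\nContent-Length: 3\r\n\r\nok\n";
        for (;;) {
            int c = accept(listen_fd, NULL, NULL);
            if (c < 0)
                continue;
            if (write(c, resp, sizeof resp - 1) < 0)
                perror("write");
            close(c);
        }
    }

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);      /* port picked arbitrarily */
        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, 128);

        for (int i = 0; i < NUM_WORKERS; i++)
            if (fork() == 0) {            /* child: never calls execve */
                worker(fd);
                _exit(0);
            }
        while (wait(NULL) > 0)            /* parent just reaps */
            ;
        return 0;
    }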
I think you guys are both right. Back in the days when I measured UNIX performance, it was fork that was expensive due to memory allocation - but not the memory itself. It takes time to allocate all the page tables associated with the memory when you are setting up for the context switch. But I should admit that it was a long time ago that I traced that code path.
Not for science history. It's very hard to grasp what the hell the LHC is about. First tell me how they figured out water wasn't an element but a combination of two.
It can kind of work for that. Looking at Wikipedia, the LHC article links to "Composite particles", then to "History of atomic theory", then to:
"Leucippus (/luːˈsɪpəs/; Greek: Λεύκιππος, Leúkippos; fl. 5th cent. BCE) is reported in some ancient sources to have been a philosopher who was the earliest Greek to develop the theory of atomism—the idea that everything is composed entirely of various imperishable, indivisible elements called atoms."
The LHC is about looking for more "indivisible elements" by smashing stuff harder.
Just specify a different --line-length argument (e.g. black --line-length 120)? Easily solved.
I have mine set to 120. I write my Python code in PyCharm, on a high-resolution screen. Having a column limit of 88 only makes sense if you are inside of Vim or something.
Almost everyone agrees that there should be a column limit, but there is general disagreement about what that limit should be, exactly. There are plenty of sound arguments for 80 and as many for 100 or 120.
It was a standard from a time when we had green-screen terminals only capable of displaying an 80x24 grid of characters. (https://en.wikipedia.org/wiki/VT100)
It's worth asking - does the standard make sense still, given how we edit today?
It's not just about whether your hardware is capable of displaying wider lines. If it were only about hardware capability, why wouldn't we be writing code that's 200 or 300 characters wide?
* Studies of readability generally show that it declines once lines of text are longer than 60-70 characters, not counting whitespace or punctuation. At that point, humans have difficulty finding the beginning of the next line, which slows them down. You can compensate for this by increasing line spacing but you lose a bunch of space that way. The vast majority of professionally typeset natural language material is limited to about this length, or even shorter.
* People read code in terminals. Most terminals default to 80 columns wide. Consider people that develop on multiple computers and multiple OSs, and have to reconfigure them all. Or if you use a new computer or loaner computer the defaults will be back to 80. So if you change to 120 columns, you have to do it over and over again. Same with text editors, but less so.
* Side-by-side diffs can get cumbersome if the text is more than 80 columns wide, and consider that font sizes vary, and some people like their monitors vertical for reading diffs so they can see more context. On my 24" 1920x1200 monitor, I can easily read a side-by-side 80 column diff, very nearly 100, but definitely not 120.
* As a heuristic, an abundance of wide lines often indicates problems with the code itself - too much nesting or something like that. This depends on the language and indentation used; it's generally accepted that Java code will be something like 25% wider.
I'm not saying that 80 columns is the right choice, only that there are reasons to support that choice. Just like there are reasons to choose 100 or 120.
I exclusively program on a small laptop and don't have great eyesight. Formatting code such that it only looks good on huge displays makes my life more difficult.
The other respondents to this message more or less have it right.
The way this stuff works is that when GEICO signed the deal to get access to this, they pinky-swore in a contract to only use the data certain ways.
Often, the representatives on both sides of such transactions even have a wink-wink nod-nod deal going which is different from what the contract materially represents.
Importantly, these contracts virtually always avoid talking about mechanisms for tracking such usage, auditing such usage, and even any remedies for violations (beyond discontinuing the service access - and then only if it's egregious).
You'd be amazed how much in the telecom world is handshake and contractual, with no technological enforcement, and often neither side of these agreements is incentivized to enforce the terms laid out.
The parts of these agreements that are solid are how transactions, events, etc. are measured, what they cost, who pays, and how. Shocking, that.
They don't need oral approval or any approval. GEICO is only asking so that their customers won't freak out when GEICO magically knows where they are. The customer service rep probably had the data up on their screen already when they asked.
I wonder if they use this data to price insurance -- they would easily know when their drivers are going over the speed limit (or, if such data is not so precise, if their average speed over 10 minutes exceeded the speed limit).
More likely they're approximating the number of miles driven and price discriminating based on that. More miles driven = more risk of an auto accident. Basically pay-per-mile car insurance, but hidden.
They don't need to know you are driving to do price discrimination. They could just as well take the zip codes where you live and work and assume you're driving, and make a profit giving discounts to folks with a shorter commute regardless of whether or not they actually drive it.
You need approval from the customer if you're using a data provider that is pinging the E911 location of the phone. Carriers require it. E911 location isn't that precise; it's not like GPS and can be a mile or so off. It's good for detecting travel (banks) and roadside service.