Would this be far enough out to use the sun's gravitational lensing to image distant planets?
It seems like the idea was to send a bunch of instruments way out and take pictures during the brief time they were at a useful distance, but if there's a planet out there we can orbit, and so park the instruments at that distance, it seems like we could make a permanent super telescope.
Orbiting a planet in that case is no different from orbiting the sun in the same orbit as that planet. It would probably be even more cumbersome, with all that jiggling around. Or are you talking about using a gravity assist to make the probe's orbit less eccentric?
I'd like to remind you that there are still millions of people around the world using Windows 7 daily. The fact that some software is no longer supported by its developer doesn't mean it stops working somehow, or becomes radioactive.
You can't really exploit something when its attack surface is nearly nonexistent, which is the case for most people who use an outdated OS on their personal device, for example.
Even if there's an exploitable vulnerability, the exploit has to be delivered to the target system somehow. You don't have much of an opportunity to do that with a device that doesn't have a public IP address. Most likely the user themselves will have to do something that would compromise their system, like visiting a website that would serve them an exploit for their particular combination of browser and OS.
For example, when solar plus direct air capture can remove a ton of CO2 for less than it would cost a container ship not to emit that ton, you've reduced cost for the same CO2 outcome, even though more total energy is being used.
Regardless of whether it actually makes sense to capture carbon, you'll see a lot of sky-is-falling fanatics and vested interests dismissing it because it caps the price of carbon credits and limits economic damage estimates. You can't price CO2 at $500/ton to necessitate change when it only costs $200/ton to capture it - without quickly going bankrupt that is.
This is why the IPCC not even attempting to evaluate mechanical capture shows they aren't serious about solving the problem. They seemingly exist to push a fear narrative, and having an upper bound on the impact of CO2 limits their ability to do so.
The 1..125 loop stores 8000 bytes of string and they need to clear 8000 bytes.
There may be a fast path for adding one character, but in any case program bytes are a valuable resource with only 64K of RAM, so having a second loop from the nearest power of two to 8000 would be a waste of bytes.
If this had been in shells and cmdline tools since the beginning it would have saved so much work, and the security problems could have been dealt with by an eval that only set variables, by adding a prefix/scope to variables, and so on.
Unfortunately it's too late for this, and today you'll be using a pipeline to make the JSON output shell-friendly, or some substring hacks that probably work most of the time.
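To sketch what I mean (the tool name, flag, and variable prefix here are all made up): a tool could emit one NAME='value' assignment per line, and the shell side could refuse to eval anything that isn't a plain, prefixed assignment:

    # Hypothetical `mytool --shell` prints lines like MYTOOL_STATUS='ok'.
    # The grep is a stand-in for an eval that only sets variables:
    # anything that isn't a quoted, prefixed assignment gets dropped.
    vars=$(mytool --shell | grep -E "^MYTOOL_[A-Z_]+='[^']*'\$")
    eval "$vars"
    echo "status: $MYTOOL_STATUS"

That's roughly the safety story: scope by prefix, and never let arbitrary command text reach eval.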
That's great for key=value data, but more complex data structures don't work so well in that format; JSON handles them. "Why would you need to represent data as a complex data structure?" Sometimes attributes are owned by a specific entity, and that entity might own multiple attributes. It might even own other sub-entities. JSON represents that. Key=value does not.
JSON is literally key=value, just nested. Which you can do with shell variables.
The question was "What's not to like [about JSON output from cmdline tools]?" and the answer is that it's cumbersome to read in a shell and all but requires another pipeline stage.
I didn't even recommend shell-variable output, and I made it clear this isn't a reasonable solution today, so I'm not sure where this hostility in the replies comes from, but I assume it's from recognizing that it's a more practical solution for reading data within a shell but not wanting that to be so.
The nature of being nested, and also containing structures like lists, maps, etc. All of which makes it more complicated than key=value.
> The question was "What's not to like [about JSON output from cmdline tools]?" and the answer is that it's cumbersome to read in a shell and all but requires another pipeline stage.
It depends on the intended use of your CLI program. If you intend the CLI tool to be used in CI pipelines (e.g., its output is being read by an automated process) and the data it outputs is more complicated than simple key=value, JSON is great for that. Your CI program can pipe to jq. You as a human can pipe to jq too, though I agree that's somewhat less convenient. That said, just piping to jq without any arguments pretty-prints it, which also makes it fairly readable for humans.
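For example (the tool and field names here are invented, but the jq calls are the standard ones):

    # `mytool list --json` is a made-up command standing in for any JSON-emitting tool.
    mytool list --json | jq .                      # pretty-print for a human
    mytool list --json | jq -r '.items[].name'     # pull one field out in a script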
> so I'm not sure where this hostility in the replies comes from
You're reading into hostility where there isn't any.
> The nature of being nested, and also containing structures like lists, maps, etc. All of which makes it more complicated than key=value.
These are JavaScript objects, which are key-value. A list/array is just keyed by a number instead of a string. They're functionally exactly the same as name=value, except JSON is parsed depth-first whereas shell variables are parsed breadth-first (which is way better for shells).
Do you have an example of a CLI tool - intended for human use - that has output so complicated it can't be easily mapped to name=value? I don't think there is one, and it's certainly not common.
> You're reading into hostility where there isn't any.
I think "it seems you're determined not to use jq" is pretty hostile since I made no intimation of that at all.
> I think "it seems you're determined not to use jq" is pretty hostile since I made no intimation of that at all.
Well, I didn't say that, so I don't know what that other person's feelings or intentions are, to be fair. I personally have no feeling of hostility towards you just because we (apparently) disagree on the usefulness of JSON to represent complex data types, or at least disagree on how often human-usable CLI tools output complex data. But to answer:
> Do you have an example of a CLI tool - intended for human use - that has output so complicated it can't be easily mapped to name=value? I don't think there is one, and it's certainly not common.
kubectl, which to be fair defaults to a table-like output format, though it gets all the data in that table from JSON for you.
smartctl is another one, which also defaults to table format.
To be honest, I could go on and on if the only qualifier is a CLI tool that emits complex data, not suited for just key=value.
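For example, with kubectl's JSON output (standard kubectl and jq usage; smartctl's JSON output needs smartmontools 7.0+):

    # Every pod name, pulled out of the nested structure kubectl emits:
    kubectl get pods -o json | jq -r '.items[].metadata.name'

    # smartctl can emit JSON too; pretty-print it to see what's in there:
    smartctl -j -a /dev/sda | jq .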
> These are JavaScript objects, which are key-value. A list/array is just keyed by a number instead of a string. They're functionally exactly the same as name=value, except JSON is parsed depth-first whereas shell variables are parsed breadth-first (which is way better for shells).
As mentioned before, just because you can compare JSON to key=value does not mean it's as simple as key=value. It's a data serialization language that builds well on top of simple key=value formats. You're welcome to enjoy other data serialization languages, like YAML, HCL, or Pkl. But none of those are simple key=value formats either. They built the ability to represent more complex structures on top of that.
A data serialization language allows the end-user to specify how they would like to use that data, while allowing them to use standard parsing tools like jq. Cramming complex data into a value string in a key=value format gives end users the same allowance to use that data however they want, while also giving them a chore to handle parsing it in custom ways tailored to just your CLI application, likely in ways that would seem far more brittle than parsing a defined language with well defined constraints. That doesn't sound like great UX to me. But to be fair to you, you're not saying that you wish to use key=value to represent complex data. Rather, you're saying there's a general lack of complex data to be found, which I also disagree with.
> But none of those are simple key=value formats either.
What is the difference between:
{ object: { name: value }}
{ object: "{ name: value }"}
object="name=value"
There's zero difference between any of them except how you parse and process the data.
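To make the parsing part concrete (quoting added so these actually run in a shell; the names are from the toy example above):

    # Nested JSON: one standard parser handles every level.
    echo '{ "object": { "name": "value" } }' | jq -r '.object.name'

    # Inner structure crammed into a string: a second pass is needed.
    echo '{ "object": "name=value" }' | jq -r '.object' | cut -d= -f2

    # Plain shell assignment: the shell itself is the parser.
    object="name=value"; echo "${object#*=}"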
> kubectl. Which to be fair defaults to output to a table-like format.
With line-based shell-variable output you have a line of variables and you have blocks of lines separated by an empty line (like an HTTP 1 header).
This can easily map to any table, two dimensions, or two levels of data structure, without even quoting sub-variables as in the example above. So, no, kubectl is not an example, at least not as you've described it.
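As a made-up example of what a kubectl-style listing could look like in that scheme (pod names and fields invented for illustration):

    NAME='nginx-7d4b'
    STATUS='Running'
    RESTARTS='0'

    NAME='redis-5f9c'
    STATUS='Pending'
    RESTARTS='2'

and a shell can walk the blocks with nothing but read and eval (assuming the listing ends with a trailing blank line):

    # pods.txt holds the made-up listing above.
    while read -r line; do
        if [ -z "$line" ]; then echo "pod $NAME is $STATUS"; continue; fi
        eval "$line"
    done < pods.txt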
> What is the difference between .. There's zero difference between any of them except how you parse and process the data.
Answered in the previous message... "A data serialization language allows the end-user to specify how they would like to use that data, while allowing them to use standard parsing tools like jq. Cramming complex data into a value string in a key=value format gives end users the same allowance to use that data however they want, while also giving them a chore to handle parsing it in custom ways tailored to just your CLI application, likely in ways that would seem far more brittle than parsing a defined language with well defined constraints."
> With line-based shell-variable output you have a line of variables and you have blocks of lines separated by an empty line (like an HTTP 1 header)...
I would not choose to write application logic that forgoes defined data serialization languages in favor of parsing barely structured strings the way you seem to prefer. But you go about it the way you prefer, I guess. This whole discussion leaves a lot of room for personal opinions. I think we each find the other's preferred route the more annoying one to deal with. But that's the way life is sometimes.
That's not your original request though, to use line-based data. It seems you're determined not to use jq but if anything, json output | jq is more the unix way than piping everything through shell vars.
> That's not your original request though, to use line-based data.
It wasn't my request and OP (not me) said "line-based data" is best. The comment I replied to said "Newline-delimited JSON ... a line-based format".
If the only objection you have is "but that's line-based!" then you're in a completely different conversation.
> if anything, json output | jq is more the unix way than piping everything through shell vars.
The unix way is line-based. The comment I replied to is talking about line-based output. Line-based output is the only structure for data universal to unix cmdline tools - even tab/space isn't universal; sending structured non-line-delimited data to a program to unpack it is the least unix-like way to do it.
Also there's no pipe in the shell-variable output scheme I described, whereas "json | jq" is a shell pipeline.
And, the author isn’t suggesting only having JSON output, but adding it as an option for those of us that would make use of it. The plain text should remain as well (and has to, or many, many things would break).
On a separate point, I find the JSON much easier to reason about. The wall of text output doesn’t work for my brain - I just can’t see it all. Structuring/nesting with clear delineations makes it far easier for me to grok.
SPDY's header compression allowed cookies to be easily leaked. This vulnerability was well known at the time, so had they even asked an intern at Google Project Zero to look at it, they would have been immediately schooled.
In their performance tests against HTTP/1.1, the team simulated loading many top websites, but presumably by accident used a single TCP connection for SPDY across the entire test suite (this was visible in their screenshots of Chrome's network panel: no connection time for SPDY).
They also never tested SPDY against pipelining - but Microsoft did, and found pipelining performed the same. SPDY's benefit was merely being a cleaner equivalent of pipelining.
So I think it's fair to say these developers were not the best Google had to offer.
Another explanation: they did test it in other scenarios, but the results went against their hopes, so they 'accidentally' omitted those tests from the 'official' test suite. A very common tactic: you massage your data until you get what you want.
> there’s no way to measure time directly. It clearly exists, yet all you can measure is change of things besides time.
If it can't be measured then it can't be said to clearly exist.
Imagine a cellular automaton where particles have lots of "slots" that could be used for moving or interacting. As the particle speeds up and more slots are used for moving, there are fewer slots for the kind of interaction change that we use to measure time. At the highest speed, with all possible slots used for motion, the particle would experience no change, which is indistinguishable from no time passing.
Does that sound familiar to anything? It's certainly possible that light being a speed limit, time dilation, relativity, and so on are in some way actually describing change rather than time.
> if running without swap and there exists any ram which is accessed less commonly than the next-most-commonly-accessed area of disk currently not in cache, the memory utilization is suboptimal.
Swapping memory out is a small write operation, which is generally much more resource- and wear-intensive than a read; a program memory page and a disk cache page are not equivalent.
Additionally, the swapped-out program memory may be needed again and cause an unpredictable delay in program operation; when a user has to wait for a menu to open while its memory is swapped back in, that is a suboptimal use of memory.
A modern operating system should have compressed memory rather than swap: take the pages that would otherwise be swapped out for being rarely accessed and, if they compress well, free the page and store the compressed copy in an area set aside for compressed pages. This gets most of the expanded-cache benefit of swap without the delays, the wear, or the possibility of the system grinding to a halt.
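A very rough sketch of the "does it compress well?" test, done with gzip on a page-sized file just to illustrate the idea (this is not how a kernel would implement it, and the 2:1 threshold is an arbitrary number for the sketch):

    # page.bin stands in for a rarely-touched 4 KiB page of program memory.
    orig=$(wc -c < page.bin)
    comp=$(gzip -c page.bin | wc -c)
    if [ "$comp" -lt $((orig / 2)) ]; then
        echo "keep the compressed copy: $comp of $orig bytes"
    else
        echo "poor ratio, leave the page alone"
    fi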
> because of how competition over mining rewards works, [bitcoin] has the characteristic of consuming more and more energy the more it succeeds.
New bitcoin from mining is halved every ~4 years, so every four years miners can afford to spend only half as much electricity to mine from that revenue.
As revenue from new bitcoins tapers off, the work expended will self-limit to the fees on transactions. If you're charged 1% to include your transaction, the value of the energy used to mine a block will eventually not exceed the fees for including those transactions.
So it doesn't have the characteristic of consuming more and more energy and is self-limiting in how much energy is used. The bitcoin energy problem will take care of itself in time.
> New bitcoin from mining is halved every ~4 years, so every four years miners can afford to spend only half as much electricity to mine from that revenue.
Not exactly. Miners are paid out of the sum of block reward and fees multiplied by the market price of BTC. Every ~4 years the contribution of block reward goes down, but that doesn't mean that the price goes down, or that the contribution of fees stays the same.
If it was block reward alone and the "energy problem solved itself" then the blockchain would be completely vulnerable to a 51% attack and it would instantly become worthless.
The expectation is that the contribution of fees will go up as the block reward goes down, although it remains to be seen how much direct fee the market will bear. Currently the actual cost of a BTC transaction is hundreds of dollars - but most of it is socialized via inflation. It is unclear if the market will bear paying hundreds of dollars in transaction costs instead -- and if not, there's no reason the 21M coin limit can't be raised to continue doing exactly what has been happening so far.
> Currently the actual cost of a BTC transaction is hundreds of dollars
I thought... huh, this can't be right. So I did some back-of-the-napkin math. We have about 2,500 transactions in a block, and the block reward is 6.25 BTC. That comes out to about 0.0025 BTC per transaction, or about 112 USD. That's without considering the extra tip from transaction fees. So, not really hundreds, but damn close.
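Spelling it out (the ~45,000 USD/BTC price is the assumption baked into that 112 figure):

    # 6.25 BTC reward / ~2500 tx per block, at an assumed ~45,000 USD/BTC
    echo '6.25 / 2500 * 45000' | bc -l    # ~112.5 USD of block reward per transaction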
It makes me think increasing the block size really wasn't a bad idea.
> there's no reason the 21M coin limit can't be raised to continue doing exactly what has been happening so far.
Sure, but you'd need a hard fork. Not impossible, but it would be hard to reach consensus.
I don't buy this argument (just as the point you are responding to is mostly wrong too). Yes, it won't consume more and more, but it won't consume less and less either. Miners aren't going to mine at a loss; spending will always track SUM(rewards + fees). The network has to be secured or bitcoin loses all perceived value. Difficulty will have to creep upwards as technology improves. The question is whether there will be a way to mine more difficult problems using less power. That is the only way bitcoin's power usage stops growing (slowly).
> The question is whether there will be a way to mine more difficult problems using less power. That is the only way bitcoin's power usage stops growing (slowly).
That's not how it works. The whole point is to spend power (work).
If there was some breakthrough that allowed finding hashes with 10x less electricity, then the network wouldn't burn 10x less electricity. It would instead find 10x more hashes.
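A toy calculation (all numbers invented) of why the spend is pinned by revenue rather than by efficiency:

    # Hashes miners can afford per block at an assumed ~300,000 USD of block revenue:
    echo '300000 / 0.000001' | bc -l     # at 0.000001 USD per hash
    echo '300000 / 0.0000001' | bc -l    # 10x cheaper hashing -> 10x more hashes, same 300k spent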