It's more of a capability thing. If you're running, say, Piston cloud, you're using Ceph over ethernet to back your disks, so you can easily decouple disk usage and ram usage. If you're stuck using local disks (e.g. rackspace/joyent/linode/amazon to a point/etc.), then it's a lot harder to provide that sort of product.
That being said, there are providers out there that sell it, and have been for years.
This makes sense when you keep in mind how the older clouds work (essentially VPS providers 101).
You have a server with some disks, some ram and some cpus. You aggregate the disks together, then split them to form the individual disks for the virtual machines. You then use kvm/xen to provide isolation as well as to split the ram/cpu between the virtual machines.
So to answer your question: storage/ram/cpu are sold in lock step because otherwise there would be resources left stranded on servers, unable to be sold. Bandwidth isn't constrained like that because bandwidth isn't tied to a particular machine.
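To put rough numbers on the lock-step point (the host and plan sizes below are completely made up, just to illustrate the stranding problem):

    # Hypothetical host; numbers invented purely to illustrate the
    # "stranded resources" argument, not any real provider's hardware.
    HOST_RAM_GB = 128
    HOST_DISK_GB = 3200

    def sellable(ram_per_plan, disk_per_plan):
        plans = min(HOST_RAM_GB // ram_per_plan, HOST_DISK_GB // disk_per_plan)
        return (plans,
                HOST_RAM_GB - plans * ram_per_plan,    # ram left stranded
                HOST_DISK_GB - plans * disk_per_plan)  # disk left stranded

    # Plan matching the host's ram:disk ratio (1 GB : 25 GB) -> nothing stranded.
    print(sellable(4, 100))  # (32, 0, 0)

    # Disk-heavy plan -> the box runs out of disk with 96 GB of ram unsellable.
    print(sellable(4, 400))  # (8, 96, 0)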
There are some providers out there that don't lock ram/disk together. This is mostly because they use a distributed storage pool rather than local disks. This is significantly more complex and is a 'fairly' new addition to the scene (~2010?).
This is also why certain providers still charge you for ram even when your machine is turned off, and why backups/migrations/plan upgrades can be a bit of a pain in the neck at times.
Key value stores are useful, and they are especially useful in this form factor. On the other hand, you now have a very large black box that you have to somehow navigate in order to create a workable system. Given that this is likely an arm core running linux on the inside, I would have considered a slightly more open approach to be 'Here's a working KV store using backing db X and here's how to reflash it if it doesn't quite work for you'.
Details will vary between different displays, but my point is that because the pixels are on a grid, sequential access is going to be faster. This is not unlike memory.
"The TFT-LCD panel of the AMLCD is
scanned sequentially line-by-line from
top to bottom. Each line is selected by
applying a pulse of +20V to gate line Gn,
which turns on the TFTs in that specific
row. Rows are deselected by applying
–5V to G
n-1
and G
n+1, which turns off
all the TFTs in the deselected rows and
then the data signal is applied from the
source driver to the pixel electrode.
The voltage applied from the
source driver, called ‘gray-scale voltage,’ decides the luminance of the pixel. The storage capacitor (CS) maintains
the luminance of the pixel until the
next frame signal voltage is applied.
In this way, the next line is selected
to turn on all the TFTs, then the data
signal is fed from the source driver and
hence scanning is done."
Selecting rows like that is very similar to how DRAM works, just with the word size being the number of subpixels in a row, and with no read port. Bit-level random access is inefficient, but you don't have to write the rows in sequential order, and you don't have to update all the other rows before issuing another update for the first row. The sequential scan is purely a limitation of the current driving circuitry, and a replacement like G-SYNC doesn't have to be bound by sequential rasterization any more than it has to stick to a fixed refresh rate.
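For a toy model of that row-at-a-time addressing (purely illustrative, nothing like a real gate/source driver implementation):

    # Toy model of line-by-line panel scanning: select one gate row at a
    # time and latch a full row of gray-scale values from the source driver.
    ROWS, COLS = 4, 6
    panel = [[0] * COLS for _ in range(ROWS)]  # storage caps hold these values

    def write_row(r, row_data):
        """Select one row (gate line), drive all of its columns at once."""
        panel[r] = list(row_data)

    def refresh(frame):
        """Sequential scan: rows written top to bottom, one per line time."""
        for r in range(ROWS):
            write_row(r, frame[r])

    refresh([[10 * r + c for c in range(COLS)] for r in range(ROWS)])
    write_row(2, [255] * COLS)  # nothing forces you to rescan the other rows first

The per-row granularity is the real constraint: updating a single pixel still means rewriting that pixel's entire row.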
But it most likely works that way because (analog) displays have always worked like that, and your display technology needed to be as compatible as possible with CRTs and TFTs.
From my point of view you haven't explained why it's not technologically feasible. You're merely describing that current display tech doesn't work that way... of course it doesn't.
It's hard to speak about feasibility in absolute terms here. I would disagree that the current tech is the way it is simply because of the history of CRTs (though there's definitely some influence). Displays have evolved to their current technology by optimizing for things like manufacturability, price and performance. Those obviously would come ahead of CRT compatibility.
The motivation for the gridded layout is clear, I think? You have this grid of transistors and you need to address them individually. Being able to select an entire line and then drive the columns is a good and relatively cheap solution. Now you can drive all the pixels in one line concurrently if you need to, and the performance of a single pixel becomes less of a bottleneck.
So the row/col grid structure isn't a result of needing to be compatible with CRTs... Also, accessing in sequence naturally allows you to simply send the data and clock down the line. Random access would require either multiplexing the coordinates or widening your bus.
I would imagine it's possible to design a random-access LCD. You would need better-performing individual pixels, you would almost certainly need more layers and more conductors, and you would complicate your interfaces and protocols. So you end up with a more complex and expensive system for little practical benefit. In many applications (games, videos) all pixels change every frame.
Sub-scanning a rectangular portion of the display is maybe a more reasonable target.
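For a rough sense of the wiring economics behind that trade-off, here's a back-of-the-envelope count of drive lines (panel size chosen just as an example):

    # Drive lines for a 1920x1080 RGB panel: row/column addressing vs.
    # one wire per subpixel. Only meant to show the scale of the gap.
    rows, cols, subpixels = 1080, 1920, 3

    gate_lines = rows                 # one select line per row
    source_lines = cols * subpixels   # one data line per subpixel column
    print(gate_lines + source_lines)  # 6840 lines

    print(rows * cols * subpixels)    # 6220800 lines for true per-subpixel access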
Using external kernels isn't in any way fundamental to the way Xen works. Most standard setups (even PV) have the kernel inside the VM, which allows for standard upgrades, etc.
Why are you mounting customer images on your host fleet? Put another way, what do you supply for "kernel=" in xen.conf? A file from your customer's filesystem?
And it happens to use bloom filters under the hood to do it.
From the docs:
"Two GiST index operator classes are provided: gist__int_ops (used by default) is suitable for small- to medium-size data sets, while gist__intbig_ops uses a larger signature and is more suitable for indexing large data sets (i.e., columns containing a large number of distinct array values). The implementation uses an RD-tree data structure with built-in lossy compression."
I saw that, but it doesn't seem to implement any of the indexes you'd need in order to run a bloom query efficiently.
That being said, if you stored your bitvector as an array of powers of two, then it would work. But that would be horribly inefficient in terms of space usage.
Which is why PostgreSQL is scriptable: the various contrib modules are often better looked at as examples of how to build your own indexes using GIN/GiST than "this is what we provide".
In your case, though, a strict immutable function mapping the bitvector to an int array as part of a functional index should be sufficient to use the existing contrib module: you don't need to store the things you index in the table.
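To make that mapping concrete, here's roughly what such a function would compute, sketched in Python rather than SQL (the names and the AND-mask predicate below are just illustrative assumptions):

    # Sketch of the bitvector -> int-array mapping a functional index
    # would be built over; Python stands in for the SQL function here.
    def set_bits(bitvector):
        """Decompose a bitmask into its set bits as powers of two."""
        return [1 << i for i in range(bitvector.bit_length())
                if bitvector >> i & 1]

    col = 0b101101
    val = 0b111101

    # A predicate like "col & val = col" (every bit set in col is
    # also set in val) ...
    bitwise = (col & val) == col

    # ... asks the same question as array containment over the set bits,
    # which is the sort of operator (<@ / @>) that intarray's GiST
    # opclasses are built to answer from the index.
    containment = set(set_bits(col)) <= set(set_bits(val))

    assert bitwise == containment
    print(set_bits(col))  # [1, 4, 8, 32]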
I don't see how you could theoretically index this in a way that supports the bloom filter query. Indexes rely on data being ordered and on b-tree (similar to binary tree) lookups. A where clause like "where col & val = col" can never be supported by a b-tree style index... right? Am I missing something?