You can DRS (https://www.dtcc.com/asset-services/securities-processing/di...) your shares so that no one can lend them out from under you.
Some brokers have a setting (opt-in or opt-out) that disallows lending your shares (or that compensates you if they do).
I don't get the "it's hard to measure throughput" line. I'm using RDS at work. At some point we had 20 TB of data, with daily 500 GB (batch) writes into indexed tables. Same order of magnitude cost, sure. But the combination of the RDS instance monitor, Performance Insights, and the pgAdmin dashboard means you have: a visual query plan with optional profiling (pgAdmin); live tracking of SQL invocations with invocations per second, average rows per invocation, and sampling-based bottleneck analysis (disk reads, locks, CPU, throttling, network reads, sending data to the client, etc.); plus per-disk read/write throughput (MB/s), IOPS in use, network throughput, etc. Most of the time what I felt was lacking was the ability to understand why PG was using so much CPU/disk throughput (e.g. inserts into indexed tables), but the disk throughput the instance was under was always very visible.
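To make that concrete, here's a minimal sketch (not the dashboards themselves) of pulling per-query I/O counters straight out of pg_stat_statements, which is roughly the raw data behind those views. The DSN is hypothetical, it assumes the pg_stat_statements extension is enabled and the default 8 KB block size, and total_exec_time assumes PostgreSQL 13+ (older versions call it total_time):

    # Sketch: per-query I/O stats from pg_stat_statements (assumptions above).
    import psycopg2

    conn = psycopg2.connect("host=my-rds-host dbname=mydb user=me")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT query,
                   calls,
                   rows / NULLIF(calls, 0)            AS avg_rows_per_call,
                   shared_blks_read * 8 / 1024.0      AS mb_read_from_disk,
                   shared_blks_written * 8 / 1024.0   AS mb_written,
                   total_exec_time / NULLIF(calls, 0) AS avg_ms_per_call
            FROM pg_stat_statements
            ORDER BY shared_blks_read DESC
            LIMIT 10;
        """)
        for row in cur.fetchall():
            print(row)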
The article also doesn't mention anything about using provisioned IOPS, nor which architectures have the highest PIOPS ceiling.
IOPS times blocksize is bandwidth in my experience (on modern storage).
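A quick back-of-the-envelope check of that rule of thumb, with made-up numbers (16,000 IOPS at a 64 KiB average block size; these aren't any particular volume type's limits):

    # bandwidth = IOPS * block size; numbers are illustrative only.
    iops = 16_000
    block_size_kib = 64
    bandwidth_mib_s = iops * block_size_kib / 1024  # KiB/s -> MiB/s
    print(f"{bandwidth_mib_s:.0f} MiB/s")  # 1000 MiB/s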
I've built block devices using the highest IOPS (fulfilling all the necessary requirements) as well as extremely large block devices (64 TB) using EBS. When maxed out and tuned to the gills, it's fast and big.
This exactly. Whatever the motive, whoever did this fully realizes that this address is watched with a very fine-tooth comb and that going unnoticed simply isn't possible.
Why the expenditure? Someone above basically said marketing -- marketing Bitcoin so headlines hit the masses and maybe get some sort of ball rolling again. I think it's safe to assume there is a larger plan in the works and whoever is responsible has played their hand and now hopes the cards fall as they projected, for whatever ends.
> "it is difficult to write a program that can play a legal action in every situation"
This is trivial (just forfeit).
The hard part is figuring out the best action to select. MTG is particularly hard in this respect because:
* Some actions are only allowed under certain conditions (e.g. only in response to something, only during a specific phase, etc.)
* Actions vary significantly not only in their effects but also in their inputs (some require targeting a creature, a player, an opponent, a card in hand, a card in exile, or the name of a card that could exist). Some have a varying list of inputs (target many creatures). This variance makes it hard to encode the action space (see the sketch after this list).
* The state space is huge. It is not only determined by the cards in play, but is also affected by the meta-game (to play optimally, players have to play in a specific way to avoid getting countered by a card the opponent could have, because that card is legal to play and is commonly used by decks that look like what the opponent is playing).
* Technically the state space is also infinite because you can create infinite loops that keep creating more and more triggers/creatures/etc.
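Here's a rough sketch of the encoding problem from the second bullet: targets are heterogeneous and variable-length, so actions don't map cleanly onto a fixed-size action vector. All names are hypothetical and not taken from any real MTG engine:

    # Hypothetical action representation illustrating the variable,
    # heterogeneous inputs described above.
    from __future__ import annotations
    from dataclasses import dataclass, field
    from enum import Enum, auto
    from typing import List, Tuple, Union

    class Phase(Enum):
        MAIN = auto()
        COMBAT = auto()
        END = auto()

    @dataclass
    class Player:
        name: str

    @dataclass
    class Card:
        name: str
        zone: str  # "hand", "battlefield", "exile", ...

    # A target may be a card/permanent, a player, or just a card *name*
    # (for "name a card" effects).
    Target = Union[Card, Player, str]

    @dataclass
    class Action:
        card: Card                                            # card being played/activated
        targets: List[Target] = field(default_factory=list)  # 0..N heterogeneous targets
        legal_phases: Tuple[Phase, ...] = (Phase.MAIN,)
        only_as_response: bool = False                        # e.g. counterspells

    # Two legal actions with completely different "shapes":
    bolt = Action(Card("Lightning Bolt", "hand"), targets=[Player("opponent")])
    wrath = Action(Card("Day of Judgment", "hand"))  # no targets at all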
While I was in SF this wealth-disparity feeling was present, but it was an order of magnitude less than what I saw in Mumbai. There you have a 30+ story building acting as the personal residence of a multi-billionaire (complete with 2 helipads), and across the street a family of 3 living in a "tent", cooking food on a makeshift fire made from trash, with a baby drinking milk out of a transparent plastic bag - all of this under the nauseating smell of human feces. This wasn't a one-off thing; it's all over the place.
> I hate to say it, because that's not how science should work
I have the opposite view. Science should be incremental, and authors should be incentivized to share their (interesting) findings early and often. This makes the community as a whole move faster, because you get more visibility, funding, and man-hours dedicated to things that are on the leading edge of research. Consider a scenario where a researcher is required to explain exactly why some phenomenon happens. Maybe it took 1 year to find the original phenomenon and then it takes 10 years to explain it to a reasonable level. Everyone only gets the benefit of this research 11 years later. Now consider the opposite scenario. After 1 year the author publishes and gets the attention of fellow colleagues. Some of them will collaborate, adding more man-hours per year and reducing the total time. Some of them might have already discovered something similar and thus avoid repeating work. Some of them might be better positioned to solve the explanation piece based on their field of expertise, personal interests, or availability. All of this makes the innovation happen faster.
What you often see (or should see in high-quality papers) is a hypothesis for why something happens. This in itself is valuable. Many hypotheses remain unproven to this day. If you assume these hypotheses are true you'll often find better results, or find them faster; and if you don't, you have discovered something interesting to report on.
I'm of the opposite view - publish only if you can prove it. If you reward half-results, the field will be deluged with half-truths. If someone publishes a hypothesis you're working on with only half a proof (and stands to gain the future credit), then there is little incentive to continue doing the work to prove it.
This was actually a major issue one of my PhD advisors had, since it led to poor foundations for the field with little incentive to ensure their validity.
We need something in between. The paper's author may not know why something is happening but has shown that it is statistically significant. Maybe he just does not have the context or background, but someone else along the way might. Of course the assumption is that people would do replication studies, but no one is incentivized to do them, so it's better to be on the safer side.
I don't think we disagree. All I am saying is that if your goal is to filter papers by their likelihood of having a useful result, the best signal is the author and their reputation, not necessarily the content (unless there is perhaps some obviously amazing demo). We don't disagree that for the community as a whole the system of publishing early and often is much better than the alternative of imposing restrictions.
That works well only if people have no motivation to fabricate or embellish results, and/or if they fully share the code and experimental setup. But in general I agree.
I've found that it works well to add the prediction horizon as a numerical feature (e.g. number of days), and then replicate each row for many such horizons, while ensuring that all such rows go to the same training fold.
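A minimal sketch of that setup, assuming a pandas DataFrame keyed by a hypothetical entity_id column and using scikit-learn's GroupKFold so all replicas of the same original row stay in the same fold:

    # Replicate each base row once per prediction horizon, add the horizon as a
    # feature, and group folds by the original row so replicas never straddle
    # train/validation. Column names are hypothetical.
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import GroupKFold

    base = pd.DataFrame({
        "entity_id": [1, 2, 3, 4],
        "feature_a": [0.3, 1.2, -0.5, 0.9],
    })
    horizons = [1, 7, 30]  # prediction horizons in days

    chunks = []
    for h in horizons:
        chunk = base.copy()
        chunk["horizon_days"] = h
        chunk["target"] = np.nan  # in practice: the value observed h days ahead
        chunks.append(chunk)
    train = pd.concat(chunks, ignore_index=True)

    # All rows derived from the same original row land in the same fold.
    gkf = GroupKFold(n_splits=2)
    for train_idx, val_idx in gkf.split(train, groups=train["entity_id"]):
        print(len(train_idx), len(val_idx))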
Can anyone explain why the lawsuit is against RealPage and not the landlords specifically? They are the ones hypothetically doing the price fixing.
Consider the following scenarios:
* A landlord/tenant publishes their rent online: not price fixing.
* A group of landlords/tenants publish their rents online: not price fixing.
* A group of landlords share their rents privately: maybe price fixing?
* A group of landlords share their rents to a 3rd party, which publicly shares aggregated data: doesn't look like price fixing to me.
* A group of landlords share their rents to a 3rd party, which privately shares aggregated data: maybe price fixing?
* A group of landlords share their rents to a 3rd party, which uses ML/AI to predict occupancy rates at a given price and uses that to maximize expected profits for each individual landlord: doesn't look like price fixing to me; maybe it is if we consider that it is using non-public data.
* A group of landlords share their rents to a 3rd party, which uses reinforcement learning to dictate the best price to set, considering that the same policy will be shared with other landlords: price fixing.
Considering the difference between the last two scenarios, is the lawmaker going to evaluate how sophisticated the algorithm behind the scenes is?
The AG isn't a lawmaker. Page 52 of the AG's complaint cites the False Claims Act, the Consumer Protection Procedures Act, and the restraints-of-trade and trusts-in-restraint-of-trade provisions.
I highly doubt the algorithm is any more complicated than a couple of rules. The YieldStar guy already got busted by the DoJ for price fixing when he was at Alaska Airlines in the early 90s.