Depending on exactly what you need, you can often fake this with a functional index on mod(queue_value_id, 5000). You then query for mod(queue_value_id, 5000) between m and n. You can then dynamically adjust the gap between m and n based on how many partitions you want.
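A minimal sketch of what that can look like in Postgres (the queue table name, index name, and the 0-99 range are placeholders, not from the original comment):

-- expression index so the mod() filter can be served by an index scan
create index queue_mod_idx on queue (mod(queue_value_id, 5000));

-- a worker that "owns" buckets 0..99 of the 5000
select *
from queue
where mod(queue_value_id, 5000) between 0 and 99;

-- widen or narrow the between range to change how many logical
-- partitions each worker picks up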
> How have fat people gotten thinner without those meds up until now, then?
Mostly, they haven't. You and I are outliers.
The population-level data tells us that overweight people are mostly unable to control their weight in the face of modern food. That being the case, it doesn't seem unreasonable to look for alternative solutions to the failed option of just telling people to eat less.
edit: regarding strength of addiction - I mean, of course, isn't it profoundly obvious that different people will have different strengths of addiction? I can drink without the slightest inclination to excess, while others are broken alcoholics. My grandfather didn't have the slightest interest in food beyond the calories needed to survive, while I have to fight every day to eat well.
Exactly, regarding strengths of addiction. I don't feel morally superior about not being an alcoholic... it's pretty clear that my experience of alcohol is just wildly different from some of my friends. I enjoy alcohol fine, but I never feel like I'm exercising willpower when I choose to stop after 1-2 drinks.
It's profoundly obvious you're missing the point, and conflating somehow having a low degree of addiction to something with not being addicted at all to it. Your example about alcohol clumsily compares people addicted to it with people who obviously don't have a problem with it. We were talking, instead, about people, like myself, who had some degree of addiction to food, and still found it in themselves to overcome that shit. So it's two groups of people: addicts who beat their addiction, and addicts that didn't; not addicts and non-addicts, like you explained. Your examples, as you can see, are totally irrelevant and miss the point completely.
You also seem to imply that the degree to which you're addicted to something is the sole factor determining whether you will overcome your addiction or not, leaving your own will out of the equation. It should be logically self-evident that the fact that somebody beat their addiction says close to nothing about its "strength". One could have many physiological and psychological predispositions to food addiction and still beat it, while somebody with just a fraction of such problems could live a miserable life and never do away with it.
Me> different people will have different strengths of addiction
You> It's profoundly obvious you're missing the point, and conflating somehow having a low degree of addiction to something with not being addicted at all to it
Suggest applying some of that willpower towards paying attention to what you're reading.
> You also seem to imply that the degree to which you're addicted to something is the sole factor determining whether you will overcome your addiction or not
I don't imply anything of the sort. Willpower is one variable, level of addiction is another. What I do imply is that without deeper observation of a person's life, and the other areas in which they might demonstrate willpower, you can't make strong conclusions about their lacking willpower based simply on their weight.
Based on all I know about you (or you about me), we could each be people of tremendous willpower who overcame titanic odds to beat our food addiction, or we could simply be people who really quite like food who tried hard and overcame our mild predisposition.
Respectfully, have you ever had anything in your life that you have struggled desperately with, and needed help? Anything at all that might give you a little empathy on the topic?
I was obese twenty years ago, and lost the weight via diet and exercise. Keeping that weight off is the single hardest thing I have ever done, and a battle I still have to consciously fight every single day. Doing so causes me a great deal of pain and frustration, and I know that I'm someone who is right on the edge of not being able to control my weight. Why should it be that difficult? So that I can pass some kind of purity test?
The fact is that the food we eat has evolved over time, and is too hard to resist overconsuming for a large fraction of our population. If we can create more addictive food, why not create antidotes? If we could easily treat alcohol addiction with a pill, would we tell alcoholics to just apply willpower instead? Why would we want people to suffer like that?
I used to (and I didn't find much relief from eye drops). For the headaches, turned out they were migraines, which I was getting from screwing up my face because my eyes were uncomfortable.
That link isn't particularly convincing. As far as I can see, the only Postgres test run on the same hardware as the top MariaDB entries used a positively ancient Postgres version (9.2.1).
* In that link, V11 is not the version of Postgres, it's the version of the test. Scroll down to DB Version.
* Lots of versions are tested, but 9.2.1 is the only one I see run on the same hardware as the top MariaDB versions. The others are on much weaker hardware.
* Postgres 9.2.1 is 12 years old.
Why shouldn't they want the easy way out? I was obese twenty years ago, and lost the weight via diet and exercise. Keeping that weight off is the single hardest thing I have ever done, and a battle I still have to consciously fight every single day. Why should it be that difficult? So that I can pass some kind of purity test?
The fact is that the food we eat has evolved over time, and is too hard to resist overconsuming for a large fraction of our population. If we can create more addictive food, why not create antidotes? If we could easily treat alcohol addiction with a pill, would we tell alcoholics to just apply willpower instead?
At $work we did use this approach to upgrade a large, high throughput PG database, but to mitigate the risk we did a full checksum of the tables. This worked something like:
* Set up a logical replica, via the 'Instacart' approach
* Attach physical replicas to the primary instance and the logical replica, wait for catchup
* (Very) briefly pause writes on the primary, and confirm catchup on the physical replicas
* Pause log replay on the physical replicas
* Resume writes on the primary
* Checksum the data in each physical replica, and compare
This approach required <1s write downtime on the primary for a very comprehensive data validation.
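A rough sketch of the replica-side validation (pg_wal_replay_pause() is a real Postgres function; the per-table checksum query and the table/column names are illustrative, not the exact tooling we used):

-- on each physical replica, once it has caught up:
select pg_wal_replay_pause();        -- stop applying WAL at a known point
select pg_is_wal_replay_paused();    -- confirm replay is paused

-- per-table checksum, run on both frozen replicas and compared afterwards
select md5(string_agg(t::text, '' order by t.id))
from some_table t;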
This is a pretty tough definition of correct, though. Without foreign key constraints you'll have a really tough time dealing with concurrency artifacts without raising your isolation levels, which generally brings larger performance concerns.
My experience is that if you have a moderate number of foreign keys, a lot of DBMSs (not Postgres) will refuse the `ON DELETE CASCADE` (in the diamond case), and you have to do it "manually" anyway (from your query builder).
I think that because a significant fraction of my career has been spent fixing db-concurrency-related mistakes for people once they hit scale :-).
I’m not talking about using cascade - this applies perfectly well to the use of `ON DELETE RESTRICT`. FKs are more or less the only standard way to reliably keep relationships between tables correct without raising the isolation level (at least, in most dbs) or doing explicit locking schemes that would be slower than the implicit locking that foreign keys perform.
If I want to restrict deletion of a user to only be possible after all the resources are deleted, I'm forced into using higher-than-default isolation levels in most DBs. This has significant performance implications. It's also much easier to make a mistake - for example, if when creating a resource I check that the user exists prior to starting the transaction, then start the transaction, then do the work, it will allow insertion of data referencing a nonexistent user.
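To make that concrete, a sketch of the broken pattern being described (hypothetical users/resources tables with no FK between them):

-- session A: application checks the user exists before opening the transaction
select 1 from users where user_id = 42;   -- returns a row, so we proceed

-- session B, concurrently: removes the user and commits
begin;
delete from users where user_id = 42;
commit;

-- session A: does its work inside a transaction, unaware the user is gone
begin;
insert into resources (resource_id, user_id) values (7, 42);  -- orphaned row
commit;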
Can you give an example? I’m not aware of a mechanism like that that will protect you from concurrency artifacts reliably - certainly not a general one.
Do that in most relational dbs in the default isolation level (read committed), and concurrently executing transactions will still be able to delete users underneath you after the select.
If we take postgres as an example, performing the select takes exactly zero row level locks, and makes no guarantees at all about selected data remaining the same after you’ve read it.
edit: my mistake - I missed that the select is for update. Yes, this will take explicit locks and thus protect you from the deletion, but is slower/worse than just using foreign keys, so it won't fundamentally help you.
further edit: let's take an example even in a higher isolation level (repeatable read):
-- setup
postgres=# create table user_table(user_id int);
CREATE TABLE
postgres=# create table resources_table(resource_id int, user_id int);
CREATE TABLE
postgres=# insert into user_table values(1);
INSERT 0 1
Tran 1:
postgres=# BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN
postgres=# select * from user_table where user_id = 1;
user_id
---------
1
(1 row)
Tran 2:
postgres=# BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN
postgres=# select * from resources_table where user_id = 1;
resource_id | user_id
-------------+---------
(0 rows)
postgres=# delete from user_table where user_id = 1;
DELETE 1
postgres=# commit;
COMMIT
Tran 1:
postgres=# insert into resources_table values (1,1);
INSERT 0 1
postgres=# commit;
COMMIT
Data at the end:
postgres=# select * from resources_table;
resource_id | user_id
-------------+---------
1 | 1
(1 row)
postgres=# select * from user_table;
user_id
---------
(0 rows)
You can fix this by using SERIALIZABLE, which will error out in this case.
This stuff is harder than people think, and correctly indexed foreign keys really aren't a performance issue for the vast majority of applications. I strongly recommend just using them until you have a good reason not to.
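For completeness, here's roughly what the example schema looks like with the relationship actually declared (user_id has to be unique or a primary key for this, which the tables above didn't bother with). With the constraint in place, the interleaving above can no longer commit an orphaned row:

create table user_table(user_id int primary key);
create table resources_table(
    resource_id int,
    user_id int references user_table(user_id) on delete restrict
);
insert into user_table values(1);

-- replaying the interleaving above: Tran 2's delete still succeeds (no
-- referencing rows exist yet), but Tran 1's
--   insert into resources_table values (1,1);
-- now errors out instead of quietly creating a row that points at a
-- deleted user.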