There's a probability model called the Pólya urn where you imagine an urn containing numbered balls (colored balls in the typical example, but to draw the comparison with dice we can say they're numbered 1-6), and every time you draw a ball of a certain number, you put back more balls according to some rule. A few probability distributions can be expressed in terms of a Pólya urn; see https://en.wikipedia.org/wiki/P%C3%B3lya_urn_model.
A fair 6-sided die would be an equal number of balls numbered 1-6 and a rule that you simply return the ball you drew. You can get a gambler's fallacy distribution by, say, adding one ball of every number that you didn't draw. I read the code as a Pólya urn starting with 1 ball of each number 1-N, doing that on each draw, plus reducing the number of balls of the drawn number back to 1.
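A minimal sketch of that urn reading (my interpretation of the code, with illustrative names; not the original code itself):

function makeUrn(sides) {
  return new Array(sides).fill(1); // counts[i] = balls for face i+1
}

function draw(counts) {
  const total = counts.reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  let face = 0;
  while (r >= counts[face]) {
    r -= counts[face];
    face++;
  }
  // Urn update: every face you didn't draw gains a ball, the drawn face resets to 1.
  for (let i = 0; i < counts.length; i++) {
    counts[i] = (i === face) ? 1 : counts[i] + 1;
  }
  return face + 1; // report faces as 1..N
}

// Example: roll the "gambler's fallacy d6" a few times.
const urn = makeUrn(6);
console.log(Array.from({ length: 10 }, () => draw(urn)));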
Also related, in 2d space, is the idea of randomly covering the plane in points but getting a spread-out distribution, since uniformity will result in clusters. (If you're moving a small window in any direction and you haven't seen a point in a while, you're "due" to see another one, and vice versa if you just saw a point.) Mike Bostock did a very nice visualization of that here: https://bost.ocks.org/mike/algorithms/
In the 1960s, the biostatistician Marvin Zelen proposed using something very much like the Pólya urn for clinical trials, calling it the "play the winner" rule [1]. This has had a major effect in causing a rethinking of the traditional randomized controlled trial, and these ideas are still making their way through the medical community today [2].
Interesting - just perusing those links, it sounds like a multi-armed bandit problem, in which you reason that if something has worked out before, you should tilt your bets more in that direction. In the context of the urn model, you'd return more balls of the same color for every successful draw. In the context of medicine, you can balance between proving or disproving a treatment effect and actually supplying that treatment to the test subjects who need it.
Relatedly, there's a Bayesian interpretation to overweighting successful past draws. A model where you return one extra ball of the same color to the urn gets you a Dirichlet-multinomial distribution, which is a die-roll distribution where the weights to each face are not known for sure, but are given a probability distribution and revised with observed evidence. In other words: here's an n-sided die, I don't know its weightings, but as I observe outcomes I'll update my beliefs that the sides that come up are more favorably weighted. The number of balls in the urn you start with correspond to your priors; only 1 ball of each color means a very weak belief that it's a fair die, 1000 balls of each color means a strong belief, unequal numbers mean that you start off believing it's weighted.
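A small sketch of that reinforcing urn, with the starting ball counts playing the role of the prior (names are mine, purely illustrative):

function dirichletMultinomialUrn(priorCounts) {
  const counts = priorCounts.slice(); // priors: starting balls per face
  return function drawAndUpdate() {
    const total = counts.reduce((a, b) => a + b, 0);
    let r = Math.random() * total;
    let face = 0;
    while (r >= counts[face]) {
      r -= counts[face];
      face++;
    }
    counts[face] += 1; // return one extra ball of the face we just saw
    return face + 1;
  };
}

// Weak prior that the die is fair (1 ball per face) vs. a strong prior (1000 per face):
const weakBelief = dirichletMultinomialUrn([1, 1, 1, 1, 1, 1]);
const strongBelief = dirichletMultinomialUrn([1000, 1000, 1000, 1000, 1000, 1000]);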
Covering the plane randomly but without clusters is actually quite useful in simulations. The "random" numbers that do that are often called "low-discrepancy sequences".
Also called "quasirandom" numbers, as I learned it from wikipedia years ago. ("Quasirandom" and "quasirandom numbers" today redirect to "low-discrepancy sequence".)
There are many names for it. The problem with "quasirandom" is that it sounds a lot like "pseudorandom". For the special case of points in a plane, searching for "Poisson disk sampling" also turns up many great resources.
Settlers of Catan switched to this at some point--from 2 dice to a set of numbered tiles with the same distribution that you shuffle and draw from. It definitely pared some "oh man I'm so unlucky" scenarios from the game but I'm not convinced it made it more fun.
A great application for this is in randomizing playlists. My friends, who are also CS grads and should know better, have often complained that their MP3 players, CD carousels, etc. play the same music too often, claiming that the randomness is broken, when a song repeating in a short period of time, or other songs never playing, is exactly what you would expect from a truly random selection. Using this algorithm, you'd be sure to hear all of your songs. I'm guessing most music services already do something like this.
A simpler way that's guaranteed to not repeat a song till the entire set has been played is: Assuming you have N songs, pick a prime P such that N is not divisible by P. Then pick a random index I (0 <= I < N) to start from. The index of the next song is I' = MOD(I + P, N). This method has the advantage of being O(1) regardless of the size of the playlist.
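A minimal sketch of that stepping scheme (variable names are mine); any step that is co-prime with N visits every index exactly once before repeating:

function makeStepper(n, step, start) {
  let i = start;
  return function nextIndex() {
    i = (i + step) % n; // constant-time jump to the next song
    return i;
  };
}

// e.g. 10 songs, step 7 (co-prime with 10), random starting point:
const next = makeStepper(10, 7, Math.floor(Math.random() * 10));
console.log(Array.from({ length: 10 }, next)); // each index appears exactly once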
Another method is to pick a random key K and sort the entries of the playlist based on HMAC(SONG, K) (where SONG is the name or other identifier of the song). This has a number of interesting properties. For starters the playlist will be randomized (that's good). It will also keep its overall order if a new song is added. The new song will get inserted based upon its HMAC. You can switch things up a bit by having K be based on the current date. That way new songs will maintain their order and jumping to a track on a given day will continue the chain as before, but day to day you won't always hear the same tracks back to back.
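A sketch of that keyed-sort idea, assuming Node's crypto module and string song identifiers (all names here are mine):

const crypto = require('crypto');

function keyedShuffle(songIds, key) {
  // Rank each song by HMAC(songId, key); sorting by rank gives a stable "shuffle"
  // that only changes when the key changes, and new songs slot in without
  // disturbing the order of the rest.
  const rank = (id) =>
    crypto.createHmac('sha256', String(key)).update(String(id)).digest('hex');
  return songIds
    .map((id) => ({ id, rank: rank(id) }))
    .sort((a, b) => a.rank.localeCompare(b.rank))
    .map((x) => x.id);
}

// e.g. derive the key from the date so the order is stable within a day
// but different from day to day:
const dailyKey = new Date().toISOString().slice(0, 10);
console.log(keyedShuffle(['songA', 'songB', 'songC'], dailyKey));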
I think you're solving the wrong problem here. The goal is not to take a list and shuffle it once. That's trivial. The goal is: given a set of N songs, sample an arbitrarily long sequence of songs with replacement from this set such that the sequence "feels" random. Things that are random but don't "feel random" include: playing the same song too often, not playing a song often enough, and repeating the same sub-sequence of songs too often. The "shuffle the list once and repeat forever" approach fails hard on the last criterion. Generating a new shuffle every time through the list solves that issue but fails on the first two criteria, since a song could appear at the end of one shuffle and at the beginning of the next, resulting in repeating the same song twice in a row (or vice versa, resulting in the song not playing for 2N-2 songs).
- don't play too many songs from the same artist in a row (and too many is probably 2)
- don't play too many slow/fast/sad/angry/... songs in a row
- ... instruments
- ... genres
- ... "feel"
- ...
Deep AI is definitely needed to create playlists that "feel random". :)
I don't think what people want is random when they say that... they want intelligently mixed playlists that meet a few criteria: no back-to-back songs from the same artist unless the pool of songs doesn't have sufficient variety of artists, not-too-frequent repeats of the same song, and if you have a live version of a song, its probability of playing should be lowered sufficiently when another cover/live version has just played. Basically, people want to remove repetition.
I think in the general case you could imagine a NxN matrix of values between 0 and 1 representing similarity between N songs. Songs on the same album would have high similarity, especially consecutive tracks on the album, and different versions of the same song (e.g. live vs. studio) would also have high similarity. Then when selecting the next song, you would weight each available song's selection probability inversely to its similarity to the last few songs.
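A rough sketch of how that weighting could look (everything here is illustrative; `similarity` is the NxN matrix described above, with values in [0, 1] and 1 meaning identical):

function pickNext(similarity, recentlyPlayed, numSongs) {
  const weights = [];
  for (let i = 0; i < numSongs; i++) {
    // Penalize each candidate by its strongest similarity to anything played recently.
    const maxSim = recentlyPlayed.length
      ? Math.max(...recentlyPlayed.map((j) => similarity[i][j]))
      : 0;
    weights.push(1 - maxSim); // similarity of 1 => weight 0 (never repeat immediately)
  }
  // Draw the next song in proportion to the penalized weights.
  const total = weights.reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (let i = 0; i < numSongs; i++) {
    if (r < weights[i]) return i;
    r -= weights[i];
  }
  return numSongs - 1; // numerical fallback
}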
That's not true, at least for the even number apart example.
The prime method will sample songs P apart from each other, and it's easy to find examples of N and P where P is even, for example N=5, P=2. More generally, this method works for any step size that is co-prime with N, so N=5 with a step of 4 works as well.
There are lots of shufflings where subsequent songs are not separated by a constant amount, so your main point is true. There are many combinations of songs that you would never hear no matter which prime you picked.
To quantify just how many takes a little bit of work.
There are N! total shufflings possible. We know that there are N-1 or fewer numbers that are co-prime with N (you get N-1 when N is prime, fewer otherwise). For each number that is co-prime, we have N possible shufflings, each starting from a different point. This gives at most N(N-1) shufflings from the prime method (really the co-prime method).
As a percentage of total shufflings, we know the upper bound from the co-prime method is N(N-1) / N! == 1/(N-2)!. This very quickly goes to 0. For the first few N:
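A quick sketch to compute that fraction for small N (just evaluating the bound derived above):

function factorial(n) {
  return n <= 1 ? 1 : n * factorial(n - 1);
}
for (let n = 3; n <= 8; n++) {
  // reachable fraction = N(N-1)/N! = 1/(N-2)!
  console.log(n, (n * (n - 1)) / factorial(n));
}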
Wouldn't all random methods with some reasonable number of bits of randomness (e.g. 256) be unable to create all permutations of a sufficiently long playlist? The number of permutations of the playlist would be greater than the number of possible random seeds. E.g. a playlist with 60 songs has more than 2^270 permutations.
This - and possibly also a reminder that the same words may mean different things for people with different backgrounds.
If someone with no specialized knowledge about math and probability requests their songs in a "random" order, I don't think one should assume they are insisting on an order sampled from a uniform distribution.
Shuffle and weighted random both give a better chance of variety than truly random. Shuffle is cheaper to calculate in terms of storage space.
The article is about weighting randomness based on past distribution. In a music playlist that may be enough. Some user might also want a term in the equation that weights songs they've rated highly to be favored over lower-rated or unrated songs.
"truly random" refers to how well a real distribution matches its intended distribution. So, a long cycle is not truly random, because many desired outcomes are physically possible to obtain.
This is orthogonal to the choice of distribution. Shuffle and Weighted Sampling are different choices of distribution.
Because the casual definition of "random" means "without a discernible pattern", and the same artist coming up multiple times in a row reads as a pattern. Music shuffle algorithms sometimes go to great lengths to avoid patterns and duplicate artists.
This sounds like a giant strawman to me. Where do programmers get the idea that a mode called 'shuffle' would call for sampling with replacement? The obvious software analogy to shuffling would be re-ordering a list randomly, not randomly drawing from a list.
I think this whole shuffling meme started with Spotify's engineers misunderstanding what they were asked to do, then claiming their customers don't know what they want when they obviously delivered the wrong feature.
Or a great example of how things fail when producers and consumers don't use the same language. If you don't understand the language your consumers are speaking, you are not going to produce what they want.
So next time someone tells you their computer is broken, don't reply saying it appears to be in one piece.
Many music players have done this for a long time. In fact I believe that's part of the reason why they refer to that function as "shuffle" rather than "random"
That depends entirely on the implementation. There's no one rule for how developers should implement shuffle.
When I was writing my in car entertainment system (it's not as good as it sounds!) the first metric that was shuffled was the artist, followed by weightings for albums.
Several years ago I used to use a music service that had a bunch of sliders, something like:
Favorites: Normal <----------> Very Frequently
Artist: Liked only <-----------> Wide variety
Prefer: Old <----Balanced----> New
Popularity: Hits only <--------> Fringe
It was great to be able to customize stations. I don't remember what service it was, but I think it did get purchased and folded into something else.
This was a great way of tweaking the "randomization", and really gets into understanding what people want to hear. In my case, I had different stations/playlists customized in different ways.
> My friends, who are also CS grads and should know better...
I know this is about being "technically correct" but I'd argue that even if you're pedantic and know the behavior of true randomness, the complaints are justified.
First, "pick a song from a uniform distribution" is an implementation detail, not the use case. The use case is "pick songs in a pleasant, novel order". If the implementer chose to do this using uniform randomness, that's their descision. If that implementation does not fulfill the use case, it's a bug and also their responsibility.
Second, the function is frequently called "shuffle" - so it even gives some hints about the implementation expected. No shuffling algorithm should ever result in duplicates. (except at the beginning and end of playlists)
Oh, I really like this idea. I might go with a number greater than two, though, for the aesthetic reason that with two, 25% of the time you will select a song that has been played more recently than half of the playlist. With four, you'd be selecting something from the back half of the playlist more than 90% of the time.
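If I'm reading the idea right (draw k candidates uniformly and play the one heard longest ago), a sketch might look like this, assuming lastPlayed[i] holds a timestamp, with -Infinity meaning "never played":

function pickStalest(lastPlayed, k) {
  let best = null;
  for (let n = 0; n < k; n++) {
    const i = Math.floor(Math.random() * lastPlayed.length);
    // Keep the candidate that was played longest ago.
    if (best === null || lastPlayed[i] < lastPlayed[best]) best = i;
  }
  return best;
}

// With k = 2 the pick lands in the recently-played half 25% of the time;
// with k = 4 it lands in the stale half more than 90% of the time.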
As it is apparent from the comments, most people want the playlists shuffled, but after a song is played, it is taken out of the set that's to be shuffled. So a song is never repeated.
I have a theory that Spotify doesn't do random shuffling, but weighted shuffling, so that new items are more likely to be at the top of a shuffle (or maybe even popular items). I also believe that this will satisfy users better than true random shuffling.
About your linked post: You don't test the randomness of the shuffling, you only test whether a shuffled list contains one and only one of each original item.
Spotify's shuffling is most certainly not random. Every once in a while I get a bug where it simply shuffles through various songs from 4-5 albums and will never pick songs outside that set. Restarting the app corrects the behaviour. To me that's some level of proof, because the songs are all there. So what's the algorithm really doing?
On your website you put a source mark on the statement 'This is the same reason the Brits mistakenly assumed that the Germans had an exceptionally good aim with their V-1 flying bombs during World War II'.
I'd like to read about this, do you still have the intended link?
This is called "sampling without replacement" and lines up with the physical intuition users have for taking a playlist where each song appears once, randomly reordering it, and then playing the entire playlist.
Good implementations actually don't make an independent random choice each time, but shuffle the whole list and play it through, then shuffle again and play it through again, and so on. The only possible problem here is when the last song in one playthrough is the same as the first song in the next playthrough. But even that can be overcome by re-shuffling the follow-up pass.
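A minimal sketch of that approach (a standard Fisher-Yates shuffle underneath, with a guard against the boundary repeat):

function shuffle(list) {
  const a = list.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]]; // Fisher-Yates swap
  }
  return a;
}

function nextPass(playlist, lastSongOfPreviousPass) {
  let pass = shuffle(playlist);
  // Re-shuffle if the new pass would start with the song that just ended.
  while (playlist.length > 1 && pass[0] === lastSongOfPreviousPass) {
    pass = shuffle(playlist);
  }
  return pass;
}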
I had a few issues with Spotify doing this. I even saw a post where an engineer claimed it was all psychological, but I was definitely getting repeats when I had queued up a large amount of music (e.g. over 40 hours.) I think they only shuffle a certain percentage of the playlist and forget which songs have already been played after a while.
I manually shuffle some playlists now. One nice thing about the desktop client is the clipboard integration. You can press cmd+a to highlight all the songs, cmd+c to copy, and then paste all the song URLs into a text editor. Then I used "permute lines" in Sublime Text to shuffle them, then cmd+c and cmd+v to paste the songs back into Spotify.
I probably have some details wrong, but according to Spotify's customer feedback site, there at least used to be a couple of issues with their shuffle.
One, if I remember somewhat correctly, was that if the player was stopped/shutdown, it forgot the shuffle seed. This would essentially lead to a reshuffle, which would cause duplicate songs for songs played before the unintended reseeding.
Oh ok, yeah that seems to line up with what I experienced. I think it was after closing my laptop, or switching between different devices. I might have to try it again.
back when I used Winamp as my daily player, I was getting annoyed that its "shuffle" kept playing the same songs. and it would play the same songs in a row. so I would hear the songs C,A,F play. then the next day again the sequence would be C,A,F. over time, I would start hearing the melody of the next song before it even started playing.
when I finally noticed this and realized that could only mean it's not actually random, I researched it, and found out that once you hit shuffle/random on Winamp, it shuffled the songs once and kept the sequence forever.
and then as it turned out, Winamp actually had a setting to select a truly random song each time. all I needed to cure my madness was a checkbox!
You can take it too far, though. When I listen to Venetian Snares on Spotify, they only play some songs off of 3 albums, despite the fact that they have about a dozen albums. If you're going to do that, tell people it's not actually random.
Interesting, and at first I was excited about the possibilities in something like D&D, where a series of bad rolls can have you feeling down. "I'm due for a critical hit any swing now..."
Players would love that! Make my hero feel more heroic! The inevitable comeback!
But then I thought about the inverse case -- you are doing really well, and now you are due for a failure. Or series of failures. That would feel awful.
We have a lot of emotions around dice rolling. I wonder what players really want from their dice. Would players prefer dice that are secretly unevenly weighted towards good rolls? Would they still want those dice if they knew they were weighted?
I made this specifically for my online D&D sessions. I can tell you that nobody was suspicious when I replaced the standard Math.random() with this version, and all of the "I think your dice function is broken" statements all of a sudden stopped happening.
As a D&D player I can see how this might be nice, but at the same time it would be frustrating to get a 20 for something stupid (passing a DC5 check) and know that it had a real effect on my chances for getting a 20 when I could actually use it (a critical attack roll).
There are times in an RPG when (as a GM/DM) you should rely on dice for something and times when you shouldn't. For some people driving appears to be like bumper cars, but for most of us it isn't 8) Apply that to D&D: the player's approach to things should be used to bias, or even dispense with, the dice.
In some competitive games like Dota and LoL, the random distribution for critical hits is weighted so that it ramps up over time, resetting after a successful crit.
(http://dota2.gamepedia.com/Random_distribution) This reduces both the chance of two high rolls coming up in a row and the chance of long periods of low rolls.
It doesn't have the "you are doing really well, and now you are due for a failure" since "not a critical" isn't really that negative of an outcome.
For something like D&D, the same kind of "better-than-random" distribution just comes from adding dice. 4d6 is much more consistent than 1d21+3.
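A quick way to check that claim by sampling, if you want to see the numbers (a sketch; both expressions range over 4-24 with mean 14, but the sum of four dice has far lower variance):

function sample(roller, n) {
  let sum = 0, sumSq = 0;
  for (let i = 0; i < n; i++) {
    const x = roller();
    sum += x;
    sumSq += x * x;
  }
  const mean = sum / n;
  return { mean, variance: sumSq / n - mean * mean };
}

const d = (sides) => Math.floor(Math.random() * sides) + 1;
console.log(sample(() => d(6) + d(6) + d(6) + d(6), 100000)); // variance ~11.7
console.log(sample(() => d(21) + 3, 100000));                 // variance ~36.7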
Dota was my first thought when I read the article. Good Dota players pay attention to their crits, and if they haven't had one in a few hits they adjust strategy with the expectation of higher damage, something you would never do if it was truly random.
Fascinating! I wonder if the distribution is engineered like that to avoid specious cheating claims. E.g. Player:"Look at this video of Famous_Player getting 3 crits in a row! They're probably cheating!" So much potential for fallacies; Gambler's Fallacy, Silent Evidence Fallacy..
There isn't really a possibility to cheat game mechanics in those sort of games thankfully since things like critical hits are all handled server side.
Interesting, when I used to play League of Legends I was sure that the crit chance wasn't random at all. In my experience, at 25% crit precisely every 4th strike would crit, at 50% precisely half, and at 75% every 4th strike I'd do wouldn't be a crit. It seemed fully deterministic which I thought was an interesting way to make the mechanic more skillful as you could count in your head when you'll get a damage spike; you could fire some useless shots on minions then dive in for a trade on your enemy laner knowing that you'd win because of your guaranteed crit.
I've heard Blizzard does the same thing for valuable random drops (like legendary drops in Hearthstone and Diablo 3): instead of being random, there's a proc rate so you can't get screwed by chance.
They do it in Hearthstone for pack openings, the players call it the "pity timer" - you're guaranteed to get a legendary card after opening 40 packs or something like that.
There are even websites like https://pitytracker.com/ that keep track of where you are in your chances.
In hobby time I have been exploring the storytelling possibility space of RPGs using card decks. I love playing cards and I love custom playing card designs and I wish there was more impetus for players to bring decks with interesting designs to an RPG table in the same way that dice are so varied and fun to see what people bring to a table.
I think there are interesting stories to tell in an RPG where you know the "weight" of your rolls, approximately how many "success" cards you have left, some choice in what you play from your hand.
«But then I thought about the inverse case -- you are doing really well, and now you are due for a failure. Or series of failures. That would feel awful.»
How awful it feels depends on how personally you take it. If the goal is to create an interesting story, a grand sequence of failures can be a very interesting story. Look at Fiasco, for instance, where some of the fun can be very explicitly "how do I make my character more miserable?" In those cases where you know you are about to hit a lot of failures in die rolls/card draws you can lean in to it. Try to explain why your character is having such awful luck, for instance. Maybe try to set up an elaborate Rube Goldberg design of failures that eventually cascade into the weirdest, grandest success, like a "drunken master" kung fu ballet.
Randomness is what makes a game more realistic but true randomness will not make it more likable. True randomness dictates that in half of your games, your players will have a harder time achieving anything.
It is not uniformity of results that the players are looking for, it is uniformity of successes weighted by the importance of the rolls. Failing a perception check in an empty room is less important than failing a dodge roll in the final fight on your last health point.
Many games suggest that the gamemaster cheat on some crucial rolls. Other games propose an alternate system that biases rolls based on a points system. Some even replace the dice entirely with a points system.
I really prefer those. A very simple biased-rolls system I often use (Dk2) is that the GM can emphasize the actions of dangerous NPCs by adding special D6 dice to a D20 roll (they are special in that they only give +3 or +6 on a 3 or a 6, and +0 on other results). Once used, these dice are added to a pool that the players can use for their own actions.
Anyway, my favorite system is Amber Diceless RPG. It trades randomness for secrets. Every player is a bit of a backstabbing asshole who does not show all their skills on any action. If someone has 30 in strength, they may just announce 10 if they think it is enough for the action.
The winner in a fight is the one with the bigger score. Add bonus if they use some kind of trick or ambushes (recommended). It is surprisingly interesting for such a simple system.
EVE Online has a really interesting approach to this balance as well.
In general, things are heavily weighted towards actual player skill and also character skills accrued over time for the important interactive elements.
For things like loot drops, it's strictly random from a chart per enemy type, but they monitor the overall economy and use that to set the weights for the various items. If one group of people has figured out some trick to farming an extremely valuable item, they will nerf it to keep the economy where they want it. For a number of years, the company actually kept a professional economist on full-time to keep track of these things. I think that got eliminated when CCP was having a rough financial time several years ago. And I think that game has gotten a bit worse as a result.
But in terms of stuff that counts RNGs are a fairly small part of the action, and in some cases not at all, so you can pick your poison: lower overall damage to a target but more consistent, allowing for one kind of play style. Or you can have more variable damage with higher potential if you're willing to take risks. And there are a couple of flavors along the spectrum.
It's a pretty interesting game design. And, of course, tons of people complain about it. Loot drops really are pretty damn random (with between 30-40 thousand people online at any given time, there's a lot of entropy to seed the PRNG). And you get the clusters you would expect from random data. But everyone really hates random outcomes. People want uniform outcomes. It's baked into our psychology.
But in terms of things that determine who lives and who dies in a fight, the random element is really downplayed, and it boils down to player skill more than anything.
There's a whole thing where people who have been playing for a long time will start brand new accounts with low skill levels and shitty equipment and still be able to blow people out of the water because they have a superior understanding of the game mechanics. In fact, there are entire "corporations" who really do nothing but this. Find people who have been around for long enough to make some money and buy some expensive things, and then lure them into fights and slaughter their bling-bling.
It's an interesting approach to game balance, and one I quite admire. It does keep me coming back, and paying that sweet, sweet monthly sub.
EVE is also one of the games that really capitalizes on the technology component. As much as I love Skyrim, for example, I don't think there's anything going on with the actual game interaction mechanics that couldn't be easily replicated on a table with some dice. I'm also willing to eat my words if I'm completely wrong about that.
EVE Online has the luxury of hammering out fairly complicated formulas based on the current state of things, and it does so very well. It does it for every ship in space (again, typically 30-40k), once each server tick (usually a tick is 1 second, but for really huge fights, they dilate time and slow the ticks down).
So they can make things like damage calculated for each gun or missile or drone on each ship based on every relevant ship's current speed and direction relative to every other ship, etc.
It's an impressive use of technology for an RPG, and I find it especially cool that you are your own gamemaster. The only story is the one you make for yourself.
But now I sound like a fucking fanboi, so I'll shut up.
I've never heard of Amber Diceless before. I'm curious and will check it out. Thanks for mentioning.
Maybe a hybrid mode, where too many bad rolls bias the dice towards a more "true uniform" - lol - distribution while a series of good rolls just keep it actually fair.
Better to have games without dice rolls. Unexpected events could come from other non-random sources (like the combination of a lot of factors involving other players' actions).
> I made a chatbot that rolled dice, and it was constantly criticized for being "broken" because four 3's would come up in a row.
> These accusations would come up even though they (all being computer science majors) know it's possible (although unlikely) for these events to happen. They just don't trust the black box.
> As I promised earlier, if you donate to the site and are unhappy about the rolls, let me know and I will pull a die out of the machine, melt it flat and mail it to you, as an object lesson to the other dice. Tangible revenge.
Hah! That might be the best donation "gift" I've ever heard of!
I had the privilege of studying probability from G-C. Rota. One of my favorite quotes from him was "Randomness is not what we expect", which he used to describe the phenomenon of people disbelieving that random data was actually random. Another great was "This will become intuitive to you, once you adjust your intuition to the facts."
True. But I would like to show you an interesting counterexample, which requires changing the rules slightly. If we toss a coin and keep tossing until one of the sequences below turns up, which sequence is more likely to appear first?
HHHHHH or THHHHH
Answer: THHHHH. Only 1/(2^6) of the time will it be HHHHHH. The only way HHHHHH can appear first is if the first 6 flips are all heads. Otherwise, the first run of six heads must be preceded by a tail, which means THHHHH will already have appeared before HHHHHH can complete.
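A quick simulation sketch, if you want to convince yourself of the 1/64:

function race() {
  let last6 = '';
  while (true) {
    last6 = (last6 + (Math.random() < 0.5 ? 'H' : 'T')).slice(-6);
    if (last6 === 'HHHHHH') return 'HHHHHH';
    if (last6 === 'THHHHH') return 'THHHHH';
  }
}

let wins = 0;
const trials = 100000;
for (let i = 0; i < trials; i++) {
  if (race() === 'HHHHHH') wins++;
}
console.log(wins / trials); // roughly 0.0156, i.e. 1/64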
I think the real issue that most people have understanding this stuff is that they don't have a clear grasp of dependent vs independent events.
We tend to use the fair coin or the fair die as a common way of talking about things, but I've found that this is problematic for a lot of people.
First off is the use of the word fair. People not from a background in statistics don't think of that word the way we do. They think of it as an attribute of the object itself: that the coin or die is 'fair' in the ethical sense of the word.
Second is the experiential understanding: 'This coin is fair and therefore should flip heads and tails with equal frequency. It's the same damn coin, and I flipped it a thousand times! Why are these crazy stats people telling me it comes up heads and tails equally often? It doesn't, dammit!'
What I've had some success with in helping people understand independent trials is to get them to envision flipping 6 different coins at the same time. Then you can sort of walk them back into understanding that flipping 6 different coins once is no different from flipping the same one 6 times. The events are totally unrelated, and a past event is not a predictor of the future.
Plus, you can actually map out the math for them on the spot. .5 * .5 * .5, etc.
That's just my experience trying to explain things though. Would love some criticisms if there are better ways to go about this.
There was an old flash video game I worked on a long time ago where I did exactly this. I had a boss with two main attacks, and I didn't want it to be super predictable A/B/A/B, so I had it pick between A and B randomly, then reweight the probabilities, so if it picked A, instead of 50% A, 50% B it'd now be 25% A, 75% B. If it picked A again it'd be down to like 12.5% A, 87.5% B. If B then got chosen, it'd flip flop to 75% A, 25% B, etc. The result was it mostly went back and forth between the two, but would do some attacks 2 or 3 times in a row before switching back to the other.
Hmm, but by the algorithm you describe you are MORE likely to get a "super predictable" A/B/A/B than you would with just an independently random coin flip each time. That is, strings of multiple As or Bs in a row are less likely with your algorithm, but you make it sound like that's what you wanted.
No, you misunderstand. I didn't want it to be always A,B,A,B, but I also didn't want it to be an even coin flip and risk an A,A,A,A,A,A,A or whatever (like in the play Rosencrantz and Guildenstern are Dead).
With my method, it was possible for the same move to be picked 2 or 3 times in a row, but it mostly went back and forth, so the player was definitely going to encounter both moves, but they couldn't know for absolute certain which move the next one would be (and they both needed you to kind of quickly get into different positions before getting hit).
The type of pattern you'd usually get with mine was more like A,B,B,A,A,B,A,B,A,A,A,B,B,A.
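One possible reading of that scheme in code (the exact rule is my guess from the percentages quoted above: the chance of repeating the last attack is halved after every consecutive repeat, and resets once the other attack is chosen):

function makeBossAttackPicker() {
  let last = null;
  let streak = 0; // consecutive times `last` has been chosen
  return function pick() {
    let choice;
    if (last === null) {
      choice = Math.random() < 0.5 ? 'A' : 'B'; // first pick is 50/50
    } else {
      const repeatChance = Math.pow(0.5, streak + 1); // 25%, then 12.5%, ...
      choice = Math.random() < repeatChance ? last : (last === 'A' ? 'B' : 'A');
    }
    streak = choice === last ? streak + 1 : 1;
    last = choice;
    return choice;
  };
}

// Typical output wanders back and forth with occasional short runs:
const attack = makeBossAttackPicker();
console.log(Array.from({ length: 14 }, attack).join(','));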
A similar, simpler idea is sometimes used in games: you put all choices in a "bag", then draw from the bag until it's empty, then put everything back.
Tetris is the go-to example. Tetris has seven tetrominoes, and in most modern implementations you're guaranteed to see them in sets of seven in random order.
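A minimal shuffle-bag sketch (the piece names are just for illustration):

function makeBag(items) {
  let bag = [];
  return function draw() {
    if (bag.length === 0) {
      bag = items.slice();
      // Fisher-Yates shuffle of the refilled bag.
      for (let i = bag.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [bag[i], bag[j]] = [bag[j], bag[i]];
      }
    }
    return bag.pop(); // draw without replacement until empty
  };
}

const nextPiece = makeBag(['I', 'O', 'T', 'S', 'Z', 'J', 'L']);
console.log(Array.from({ length: 14 }, nextPiece)); // two full sets of seven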
This is pretty essential to make competitive play err on the side of skill rather than randomness: pro-players can anticipate and plan for this. For fun, here's a recent Tetris head-to-head speed-run from Awesome Games Done Quick, with pretty good narration about the tactics involved:
You could just use a deck of cards with numbers on them. That's the simplest way to do "Gambler's Fallacy Dice" IRL. For example, there's a popular Catan expansion pack that replaces the dice with a deck of cards with numbers on them for just this reason, because in Catan each roll of 2D6 represents different regions of the map paying out so if a number never comes up it never pays out.
One of our first house rules for Catan was to introduce a deck. Not quite sure why they went with dice for that game; the deck eliminates a lot of boring games.
Some purists insist that they roll dice, so we just let 'em, and the resulting game feels quite like you're using gambler's dice.
The thing about rolling 2D6 is that the distribution isn't even.
I'm not sure that factors into the game (I have played, but it's been a while). Cards seem like a more controlled way to distribute the results with some randomness.
There's only one way to get a 12 (6+6), but a 7 is much more likely (6+1, 5+2, 4+3, 3+4, 2+5, 1+6).
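Counting the combinations makes the skew obvious (a quick sketch):

const ways = {};
for (let a = 1; a <= 6; a++) {
  for (let b = 1; b <= 6; b++) {
    ways[a + b] = (ways[a + b] || 0) + 1;
  }
}
console.log(ways); // { 2: 1, 3: 2, ..., 7: 6, ..., 12: 1 } out of 36 combinations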
As someone who played a bit of online backgammon back in the day, I can say we always suspected the random dice rolls behaved differently in the game than when playing in real life...
They basically made one card for each possibility of a pair of dice, so there is a 7 card for 6+1, another for 5+2, and so on...
Playing with it is MUCH better than with dice... When I played with the dice that came with my copy of Catan, the dice were clearly defective (rolls weren't just random-looking, they were WILDLY wrong; 10s came up twice as often as 7s...), and the game was super boring: it would end with everyone having the same materials and missing others, so not even trade would fix the game. We had stuff like 10 turns in a row where no one had one particular resource while the 'bank' ran out of the other items because everyone had lots of them.
In Catan can't you trade any 4 identical resources for any one other resource? It slows down resource acquisition but should eliminate the situation you describe!?
You can, but if I remember correctly, the resource no one had was brick, which is very important...
So the game pace slowed to a crawl... Also, the funky dice never rolling bricks meant people would quickly rack up random resources, sometimes like 3 of each, so they ended up with 9 cards and would then lose cards to robber rolls (7).
Then of course in one game where this happened there was an asshole cousin who thought it would be hilarious to hog resources and make the game worse; he would spend his whole turn making 'negotiations' with everyone and then cancelling them, wasting like 5 minutes every time it was his turn, and sometimes, if a 7 was taking long enough, he'd have 20 cards in his hand...
We don't play with that cousin anymore either.
But yeah, not having a way to build roads makes the game suuuuuper slow.
As I'm sure you know, understanding this distribution is critical to good backgammon play. For example, all else being equal it's better to leave your exposed blots closer to a threatening opponent checker than further away (within 6 pips of course). Also, you need to account for rolling doubles which results in the counter-intuitive average pip value of a roll being 49/6 (~8.16).
If you look at the Catan number-circles that you use to mark the tiles with their roll, they have a set of dots on them - 6 and 8 have 5 dots, while 2 and 12 have 1 dot. The dot count is the numerator over 36 of its probability.
Likewise, if you use the Traders and Barbarians dice-deck, the same thing applies - there are 5X as many 6s and 8s in the deck as there are 2s and 12s.
Catan takes the distribution into account, and is pretty up front about it (there are dots on the numbers that represent the probability). I've played a lot of games of Catan and, every once in a while, you'll have a statistically improbable game where 11 is rolled 10-15 times, but 6 is only rolled 3 times the whole game.
That's exactly what the pack of cards is designed to solve. With the cards it's guaranteed that the 6-7-8 will show up more often than everything else.
It very much does factor into Catan. Not only are there no spaces marked "7", which is instead used for a special type of result, but the probabilities of the remaining rolls are described graphically on the number tokens:
Notice that the 12 has one dot, the 9 (6+3, 5+4, 4+5, 3+6) has 4 dots, and the 3 (2+1, 1+2) has 2 dots. The gray robber piece is sitting on a red number token, which is either a 6 or an 8 and has 5 dots because it is more likely to be rolled.
In the game, you develop your pieces with the hopes of getting resources based on the adjacent numbers being rolled. It definitely creates a lot of stress in some people when "eights are being rolled all the time this game but never sixes!" or "your stupid 10 keeps getting rolled when it shouldn't, that's not fair!"
I think I'll give it a try tonight with some numbered cards. I just threw some together in Excel: 1-6 in B1-G1, 1-6 in A2-A7, and the formula
one 2
two 3S
three 4S
four 5S
five 6S
six 7S
five 8S
four 9S
three 10S
two Jacks
one Queen
Other market research says that people start counting cards to see when their numbers are going to come up again....perhaps some compromise, with perhaps two or three such decks? Real dice are an approximation of an infinite number of shuffled decks of dice, and I imagine you'd go through on the order of 100 'cards' in a given game, so with more than 3 or 4 decks you could end up with certain cards/numbers that never appeared, which seems to happen frequently and cause ire in games with real dice...interesting. Science is required! Sorry for rambling/thinking out loud.
There's a little randomness in the Catan deck of cards: the bottom 5 cards of every shuffle are not used. Not as much as dice, but also not a guarantee for certain numbers to come up a certain amount.
I've never played in real life with the deck, but I first came across this version on a software version of Catan I had (I think Xbox?). I really enjoyed it and my guess is that in real life it adds a substantial angle to negotiating placement of the robber. In particular, bribing people to put/not put the robber in a given location - if the deck is 7-heavy or your good squares are already drawn, no big deal.
I was a little disappointed when I clicked, too. I can't think of a way you could make physical dice like that, unless maybe the faces were LEDs? That might be too fragile.
Apple has a patent[0] for a fall-protection system that will rotate the phone while it's in the air. If something like that could be made small enough to fit into a die, that would work.
The biggest challenge will probably be the power supply. I'm sure that a processor, accelerometer, and the Taptic Engine (the thing that provides vibration and haptic feedback in Apple devices) are all small enough to fit in a typical die. Of course, something like this could easily be built into a larger novelty die.
Bonus points should be awarded if the die is perfectly balanced when it's not actively adjusting itself in mid-air.
That's really clever - I've been wondering how small and quiet you could make such a device. Can you imagine how cool it would be to have one of those attached to a VR controller? You'd be able to get real (albeit slight) resistance when you interacted with things.
Put 6 weights in the die, one per face. A mechanism controls each weight individually, moving it from near the face (likely to land down) to near the center (likely to land up). Combine it with something that can detect roll outcomes.
This would probably cost a lot of money to fit in a normal sized die.
Why not have an electromagnet at each face and a ball bearing in a 3-way race? You weight the device by turning on the opposite number's coil. You don't guarantee a particular roll, but the statistical weighting should be pretty strong.
I wonder, if you tuned the e-m coil for a particular frequency, whether you could make the device passive and weight it using an external frequency change?
Or use computer vision, a passive die with a metal insert, and a magnetic table; switch on the magnet when the die hits the table right (this version would be highly obvious but perhaps could be made less so).
Well, a little micro mechanical device offsetting the internal balance could work, though the balance change would have to be pretty large to bump the probability of any one face up significantly.
This was the first place my thoughts went to. But a passive device like this seems like it would oppose the gambler's fallacy: after you roll, the weight distribution of the die shifts to be as energetically favorable as possible for that number. Now the die is weighted to roll another of that same number!
Make them from a substance that leaves a tiny amount of residue on any surface it's left on for more than a small amount of time. Any time a die is rolled (and then left unperturbed for a short amount of time), that face becomes lighter.
Wouldn't this be the opposite of what you want? You'd weigh down the current bottom face, making that face more likely to be on the bottom for future rolls.
Oh, you're right. I was thinking the residue material was all contained inside the die, not that the die itself was made from it and would gradually fall apart as you used it.
Basically, the higher your chance to hit with an attack was, the sooner it would break a streak of misses and force a hit. This is definitely not random, but seems to be more satisfying to the player. Plus it no doubt cuts down on the number of complaints on the forums about "unfair" RNG.
Playerunknown's Battlegrounds currently suffers from item spawns being "too random". You can go through several houses without finding a rifle, then suddenly find 2-3 practically on top of each other.
They could do with learning a little from other games and making it a little less random.
Ultimately you can see it as a bug in human reasoning but we're the ones playing the game.
TBH I like the current uniformity of the item spawns. It varies the gameplay and forces you to adapt.
If every house had one pistol and one rifle it would get pretty dull. Raiding 3 houses and only getting a crossbow, bandages, and SMG magazines is frustrating but it's exciting. Just last night I got to #3 or #4 with a crossbow and a pistol.
For board games like "Settlers of Catan", where resources are generated based on rolls of 2d6, one could use an analog version of this with a cup containing 36 chits of the numbers 2-12 according to the usual 2d6 distribution, drawn _without_ replacement. You would still get the randomness of ordering, but over 36 draws/turns you would get a complete "average" distribution.
If that is a bug or a feature is left as an exercise for the reader.
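A sketch of that chit cup in code (names are mine): one chit per (d1, d2) pair, drawn without replacement, so every 36 draws reproduce the exact 2d6 distribution while the order stays random.

function makeChitCup() {
  let cup = [];
  const refill = () => {
    cup = [];
    for (let a = 1; a <= 6; a++) {
      for (let b = 1; b <= 6; b++) cup.push(a + b); // one chit per dice pair
    }
    for (let i = cup.length - 1; i > 0; i--) { // shuffle the cup
      const j = Math.floor(Math.random() * (i + 1));
      [cup[i], cup[j]] = [cup[j], cup[i]];
    }
  };
  return function draw() {
    if (cup.length === 0) refill();
    return cup.pop();
  };
}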
I often use google "flip a coin"[1] for stupid things and the other day I was wondering why almost every single time it came up heads. I started to wonder if there was a browser rng problem or the code was crazy etc.
And for those who are really in to this topic, Taleb's "Fooled by Randomness" is great. There's a section on how financial traders often end up ranked like this. Somebody makes an effectively random bet, but it pays off big so they're treated as a genius. It made me see all sorts of organizations differently.
The guy spent years as a trader. I spent a few years working for traders. I think his analysis of the sociology and psychology of trading was spot on. That's the main content of this book, so I'm still comfortable recommending it even if he doesn't meet your standards elsewhere.
At first I was confused, because statistical models that aren't temporally independent are very common.
But it's very clear from the comments that having dice that aren't independent between rolls is incredibly in demand :o, and having the right words to google can be tricky.
Video games, especially competitive ones, do this to limit the effect of randomness on the outcome of the game, while still keeping the sequence of random events unpredictable enough to "feel" random and preventing simple exploits.
DoTA2 uses a simple distribution based on the number of "rolls" since the last successful one - P(N) = P0 * N, where P0 is the base probability and N is the number of rolls since the last successful one[1].
It keeps both "hot" and "cold" streaks from being too much of an issue, although that doesn't stop players from cursing the RNG gods when they lose.
> I made a chatbot that rolled dice, and it was constantly criticized for being "broken" because four 3's would come up in a row.
> These accusations would come up even though they (all being computer science majors) know it's possible (although unlikely) for these events to happen. They just don't trust the black box.
This reminds me of a talk [1] given at the Game Developers Conference (GDC) about the game Civilization, in which Sid Meier -- creator of said game -- spent a bit of the time talking about the difference between fairness and perceived fairness. The talk is only an hour long and worth watching.
This reminds me of an article discussing the perceived outcome of RNG decisions in video games. In many types of games, the system will display a percentage chance of success for a given action which allows the player to make risk assessments regarding possible choices. Unfortunately, the unmodified RNG behavior creates an unpleasant experience for the user because the unweighted random outcomes feel "unfair" when failure streaks pop-up, thus, game designers almost always introduce some type of magic cushioning to the RNG so that the user never faces too many repeated failures.
Folks who play Blizzard games call this the "Pity Timer". World of Warcraft, Hearthstone, and Overwatch all have some mechanic where your odds of getting a desirable thing go up if you've had more recent disappointments.
It's also known as a mercy pull in Kingdom Hearts Union X, where after a few failed attempts at getting a high-level medal, they will guarantee you one on the next pull.
I'm probably very wrong, but I still feel there is some undiscovered science when it comes to RNG and the fallacy of the maturity of chances ( Gambler's Fallacy ).
Einstein believed the universe was deterministic. Just because it appears to us that there is no correlation between independent events (the roll of a die) does not mean that there isn't some underlying variable that we are unable to measure or perceive which is affecting the outcome of the rolls.
We can actually measure randomness though. RNG isn't usually truly random, it is usually pseudo-random. But if you'd like to get truly random numbers you need to look at events that are truly random. Something like particle decay or photon entanglement. You can also go to https://www.random.org/ to get extremely random numbers. I note that it is not necessarily truly random because there are factors that can influence those numbers (they discuss this).
Einstein, as everyone knows, was one of the most intelligent and insightful theoretical scientists ever born. His work on the photoelectric effect, which posited that light is absorbed and emitted in discrete quanta rather than continuously, is what won him the Nobel prize. His thinking influenced the great thinkers behind the formulation of quantum physics as we understand it today: Bohr, Dirac, Heisenberg, Schrödinger, Planck, etc. But nobody, not even Einstein, has been able to come up with a way to describe the physical basis from which the laws of quantum mechanics (which we know to be true based on many, many years of experiments which have failed to falsify them) can be arrived at.
Contrast this to a topic for which Einstein is better known, relativity. The mathematical framework describing the effects of special relativity had already been partially discovered by Lorentz, but we credit Einstein for its discovery. Why? Because he postulated two simple rules from which these laws can be derived: 1) The speed of light is the same in all inertial reference frames, and 2) the laws of physics should be the same in all inertial reference frames. How about general relativity? A similar idea, that a person inside of an elevator should not be able to tell the difference between sitting stationary on Earth and accelerating through space at a rate of 9.8 m/s^2.
Einstein had opinions on the physical basis for the laws of quantum mechanics. But they were not necessarily the most popular opinion at the time, and definitely not today. As tylerhou posted in another comment, tests of Bell's theorem more or less disprove the existence of local hidden variables, meaning local quantities that behave deterministically but are not measurable. It is still possible that there are nonlocal hidden variables, but this is not a popular opinion these days for a variety of reasons.
I would also like to point out that the various "interpretations" of quantum mechanics (e.g. the Copenhagen interpretation, pilot wave theory, etc.) frequently fail to address the nature of physical reality, i.e. what is the starting point, what is the axiom, what is "real". Are wave functions real? Do they actually exist, or are they merely convenient mathematical tools that can be used to describe reality? Are pilot waves real? Are density matrices real? Are Greens functions real? Are quasiparticles real? (n.b., most of these things are described as complex-valued quantities ;) )
This could maybe, possibly be true if it were possible to have complete knowledge about the (quantum) state of the universe, but that is impossible. The uncertainty principle isn't a hypothesis, it's not even really a theory, it's a direct mathematical consequence of the laws of quantum mechanics.
I want to be clear that my point is not merely a technical one. I'm not just saying "it's technically impossible to have complete knowledge," I'm saying that it is physically, mathematically impossible. The laws of physics do not allow it. One might argue "maybe it's not possible, but if we did know the complete quantum state of the universe, the universe would be deterministic and we could predict everything," but this is equivalent to saying "if the laws of physics were different, then the universe would be deterministic."
Edit: To the distinction you make between "physical" and "quantum" processes: There is no distinction. Quantum effects do diminish in magnitude with increasing size, but they never truly go away. You can predict the motion of macroscopic objects with what may seem to be an arbitrarily high degree of certainty, but in truth you can never have 100% complete knowledge of even macroscopic properties. I still disagree with your claim that the universe is deterministic in any way, even if you conveniently exclude systems for which quantum effects are substantial.
Not counting radioactive decay as a physical process because you've chosen to define physical processes as deterministic is some intriguing mental gymnastics.
> Theoretically all physical processes are predictable, but in practice there is a limit.
So you defined away quantum effects as 'not physical', which most people would not agree with. How do you feel about the theoretical limits on measurement accuracy that also negate your claim?
Seems a rather odd definition to make, then. Radioactive decay has large-scale effects, such as determining the elemental composition of the universe. Quantum effects govern the entirety of the universe, so I'm not sure why you would want such a large exception to your generalization.
I would go the other route. Nothing can be predicted, except statistically.
It's only odd if you're worrying about it outside of the context that the parent comment was talking about.
Which I wasn't. You clearly are. Yes, quantum stuff exists. At the scales the OP was talking about though they are mostly insignificant therefore I hand-waved them because they were not relevant to the discussion.
As far as I could tell, the thread started from the mention that Einstein believed that the universe is deterministic, which set the context of the thread. This argument is traditionally seen in context of quantum effects, and whether or not there exist hidden variables.
I wonder if this could be / has been applied to loot tables in video games in order to keep the player interested in playing.
I've designed a few loot tables and the Gambler's Fallacy is a criticism I often have to deal with when people don't understand why a given item hasn't dropped despite them having killed a mob enough times to statistically warrant it.
On the other side of the coin, Monster Hunter players claim that the game actively avoids giving them items that they want in order to increase the (eventual) satisfaction. They call it the "desire sensor".
The quote at the end is meant as a joke but it's interesting how often this is true. A lot of magic tricks rely on being prepared for different outcomes, while often trying for the least likely one first. This unlikely outcome happens surprisingly often and therefore makes the effect even more unbelievably amazing.
I had a friend think of a playing card and any number (1-52). She picked the 6 of spades and the number 15 which is exactly the position where the card was located. It was only the third time I had done this trick with anybody.
Obviously, card and number picking is not uniformly random, especially when you influence their choice (e.g. "pick a large number"). But the odds of someone guessing the exact combination should still be extremely low.
A lot of what you see from David Blaine on TV is exactly this. He always has a backup plan but more often than not he doesn't need it.
It's always nagged me that statistical problems are scoped so small. Surely in saying there are 6 outcomes on a die you've obscured the billions of interactions between atoms and input possibilities in doing so. Thrower A and thrower B will undoubtedly throw slightly differently, which might actually constrain the outcomes and skew the 1-in-6 percentages?
It's similar to me to condensing 30 characters to 5 via an algorithm. You can go one direction but not the other and if your model was centered around the resulting 5 it doesn't really reflect what's actually happening which may skew the probabilities quite a bit. e.g. if the algorithm was "if first letter is not q, then first letter in output is q". If you were saying each has an equal percentage of occurring it'd be flat out wrong.
* I am not a statistician and have no idea what I'm talking about
For these kinds of exercises about fair dice and fair coins, it's a lot like the kind of jiggery-pokery that happens in Physics classes and stuff. "Under ideal circumstances . . .", "In a perfect vacuum . . .", etc.
Outside of games, for which this was explicitly created, these kinds of things are learning tools to understand distributions and dependent vs. independent events, and it also makes a separate point about assumptions.
In reality, if we were wanting to predict the outcome from a real dice-throwing event, we would either sample the results from actual people throwing dice, or we would simulate the results based on parameter inputs for exactly the types of things you are talking about.
Of course, no one really cares that much about dice, other than people who play board games with dice. :) So substitute any other stochastic method for generating an outcome that does matter, and the statistical approach will generally be the same: either sample or simulate.
Obviously there are exceptions, but that's really the basic idea.
I'm not a programmer but I've thought about this a lot. It'd be interesting to know if my simple solution here has something wrong with it.
My idea is based on time - if we assign each song to a 1/1000th of a second, we play the song that matches the 1000th of a second when the next song is called.
In this case, I'm referring to the 1/1000th of a second of the current time of day. The song's position within that second at the moment I change tracks determines which song gets played.
A bit more randomness (if this is needed) could come if we use Pi - for example, we can run through a series in Pi which adds to the ID of the song. Differing track lengths then do the job of ensuring that we always wind up on a different song in the loop.
The above seems to my layman's eye to be a simpler solution, at least.
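If I'm reading the idea right, a minimal sketch might look like this (assuming the millisecond within the current second is what indexes into the playlist; it only maps cleanly for up to 1000 songs):

function pickByClock(numSongs) {
  const ms = Date.now() % 1000; // which 1/1000th of the current second we're in
  return Math.floor((ms / 1000) * numSongs); // map it onto a playlist index
}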
This shows up a lot (predictably) in actual games, e.g. Hearthstone sells you digital cards, and the randomization specifically guarantees that the time between rare cards is capped [1].
Having unusually bad luck (e.g. opening 100 packs and not getting a single legendary card, when the average would be every ~20 packs) feels bad and probably loses Blizzard a customer, so the solution is to cut off the downside tail of the distribution.
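A sketch of that kind of cut-off (the numbers are illustrative, not Blizzard's): a flat chance per pack, plus a guaranteed hit once too many misses pile up.

function makePityDraw(baseChance, pityCap) {
  let misses = 0;
  return function openPack() {
    misses++;
    if (misses >= pityCap || Math.random() < baseChance) {
      misses = 0;
      return true; // got the rare thing
    }
    return false;
  };
}

// ~5% per pack (average every ~20 packs), never worse than 40 packs in a row:
const legendary = makePityDraw(0.05, 40);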
A simpler (albeit quite deterministic) way of accomplishing this is to use an LFSR [1] or the CRC [2] of an incrementing counter. Such a sequence of values "looks random" under many measures but also has the property that you eventually get an even distribution of values (after the period of the counter).
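For instance, a 16-bit Galois LFSR visits every non-zero 16-bit value exactly once per period, so the long-run distribution is as even as it gets even though short windows look noisy. A minimal sketch - the tap mask is a standard maximal-length choice, but reducing the output to a die face is my own illustration:

    // 16-bit Galois LFSR; 0xB400 is a standard maximal-length tap mask,
    // so the state cycles through all 65535 non-zero values before repeating.
    let lfsr = 0xACE1  // any non-zero seed

    function nextLfsr() {
      const lsb = lfsr & 1
      lfsr >>= 1
      if (lsb) lfsr ^= 0xB400
      return lfsr
    }

    // Illustrative only: reduce to a die face. Over a full period every face
    // comes up almost exactly the same number of times (65535 isn't divisible by 6).
    function lfsrDie(sides = 6) {
      return (nextLfsr() % sides) + 1
    }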
How exactly does the roll() method work? Can't seem to parse the meaning of `runningSum` and `mark`.
    roll() {
      // Total weight across all faces; think of the sum as the length of a number line.
      const sum = this.state.reduce((p, c) => p + c, 0)
      // Pick a random point somewhere on that number line.
      const r = Math.random() * sum
      let runningSum = 0  // how far along the number line we have walked so far
      let result = -1     // -1 is a sentinel meaning "no face chosen yet"
      for (let i = 0; i < this.state.length; i++) {
        const mark = this.state[i]  // current weight (segment length) of face i
        runningSum += mark
        if (r < runningSum && result === -1) {
          // r falls inside face i's segment: this is the roll
          result = i
          this.state[i] = 1         // reset the rolled face's weight to 1
        } else {
          this.state[i]++           // every other face gets more likely next time
        }
      }
      // Add 1, so the die roll is between 1 -> size of die
      return (result + 1)
    }
Ok — I think I've figured it out, but let me know if I've gotten anything wrong:
At the outset, `this.state` is initialized as an array of `1`s — [1, 1, 1, ... , 1] — the number of `1`s is based on the `size` parameter passed to the `Die` constructor. So for a 4-sided die, `this.state` would initially be `[1, 1, 1, 1]`.
`sum` is the accumulated sum of `this.state`, but it helped me to think of the `sum` variable as a number line. Continuing with the 4-sided die case, we'd have a number line of length 4 at initialization (sum of `[1, 1, 1, 1]`).
In the context of a number line, though, `this.state` can be thought of as giving the geometric positions of each possible result. `this.state[i]` simply gives the length of the valid region for a certain result. At initialization, each result of `1` to `4` of our four-sided die would have a length of 1 on the number line (again, `[1, 1, 1, 1]`).
In concrete terms, the region [0, 1) on the number line would yield a roll of `1`, the region [1, 2) would yield a roll of `2`, ... , the region [3, 4) would yield a roll of `4`.
With that in mind, we can think of `r` as choosing a random point on the number line between 0 and the end of the number line. At initialization, it simply chooses a point on [0, 4), since 4 is the length of our number line.
`runningSum` is our location on the number line. The code chooses a result here using the boolean expression `(r < runningSum)`, but it helped me to think of it in flipped terms: checking if `runningSum` is greater than or equal to `r`.
In essence, we're traversing the number line starting from 0.
Back to our four-sided die example: let's say `r`, the random point on the number line we got, was 2.5. At initialization we'd start by checking the region of the number line where `1` would be our result. Since `this.state[0]` (the length of the valid region for the result `1`) is 1, we can say that if we choose a point randomly on the number line [0, 4) and it is on the interval [0, 1), our roll is a 1.
However, 2.5 is not on [0, 1), so we add the length of the region where `1` is valid to `runningSum` (our location on the number line) so we can move on to the next region - the interval where `2` is our result. The length of this interval is stored in `this.state[1]`, and is also equal to 1 (`this.state` = `[1, 1, 1, 1]`). In terms of our number line, the valid interval is thus [1, 2). At this point, our location on the number line, `runningSum`, is 2: still less than 2.5.
Next, we'd get to the region for the result `3`, `this.state[2]`, and we'd be at [2, 3) on our number line. At this point, our `runningSum` (location on the number line) is 3, which is greater than `r` (the randomly chosen point on the number line), 2.5, and we have our result. (sidenote: the variable `result` being equal to `-1` is a sentinel value and prevents `4` also being considered a valid roll by virtue of it being greater than 2.5 as well. After this iteration, `result` is set to `2` - changed to `3` in the return statement).
The "magic" of the die comes in when traversing the number line over the non-valid regions, however. While we're traversing the number line, we're also resizing the valid regions for each result and extending the line for the next roll. If we go back to the example in the previous paragraphs with the 4-sided die, we saw that since `r` (our randomly chosen point on the number line), 2.5, was not on [0, 1), the result of our roll was not `1`. To increase the likelihood of `1` in the next roll, we increase the length of the number line and the region where `1` is our roll: `this.state` becomes `[2, 1, 1, 1]`, the valid region for a roll of `1` becomes [0, 2), and the size of the number line grows to 5 (however, this expansion of the number line does not apply to the current roll).
At the culmination of our first example above, `this.state` would be `[2, 2, 1, 2]` - we'd increase the size of the valid region for each result except for `3`, our result from the previous roll. For the next roll, we'd choose a random point on the number line between 0 and (2 + 2 + 1 + 2 = 7). In terms of probabilities, another `3` would be half as likely as a `1`, `2`, or `4` individually.
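To tie the walkthrough back to code: here's a hypothetical wrapper around the roll() quoted above, just to watch `this.state` evolve the way described. Only roll() itself comes from the original; the constructor shape and the usage below are my own assumptions:

    // Hypothetical Die wrapper; the constructor just initializes the weights to 1s.
    class Die {
      constructor(size) {
        this.state = new Array(size).fill(1)  // [1, 1, 1, 1] for a 4-sided die
      }
      roll() {
        const sum = this.state.reduce((p, c) => p + c, 0)
        const r = Math.random() * sum
        let runningSum = 0
        let result = -1
        for (let i = 0; i < this.state.length; i++) {
          runningSum += this.state[i]
          if (r < runningSum && result === -1) {
            result = i
            this.state[i] = 1
          } else {
            this.state[i]++
          }
        }
        return result + 1
      }
    }

    const die = new Die(4)
    console.log(die.roll(), die.state)  // e.g. 3 [ 2, 2, 1, 2 ]
    console.log(die.roll(), die.state)  // the face just rolled is now the least likely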
On a related note, a colleague once worked at a place where they did this for on-call: everyone had an on-call score. Every week, the person on call had their score reset to zero, and everyone else's incremented by one. You could plan out the next couple of months this way, and it provided an elegant way for new hires to take their place - they started at zero and were generally familiar enough by the time their number came around.
There were some housekeeping rules to work around the organicness of human life - if someone went on holiday they kept their score, for example - but overall it seemed to work.
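A rough sketch of that rotation, with one assumption the comment doesn't spell out - that the person with the highest score takes the next shift (the names below are made up):

    // Hypothetical sketch of the on-call scoring scheme described above.
    const team = [
      { name: 'Avery', score: 3 },
      { name: 'Blake', score: 1 },
      { name: 'Casey', score: 0 },  // new hire starts at zero
    ]

    function nextWeek(team) {
      // Assumption: the highest score goes on call this week.
      const onCall = team.reduce((a, b) => (b.score > a.score ? b : a))
      // Their score resets; everyone else's goes up by one.
      for (const person of team) {
        if (person === onCall) person.score = 0
        else person.score++
      }
      return onCall.name
    }

    // "Plan out the next couple of months" by just running it forward:
    for (let week = 0; week < 8; week++) console.log(nextWeek(team))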
This "unfair RNG" issue was big in Dota 2 (a popular video game) for a while. They ultimately implemented something similar and AFAIK now all "random" effects use it.
Not quite all; there are still a handful of skills that use a discrete or flat distribution, but practically all attack-based chances use the pseudo-random distribution.
For more info:
http://dota2.gamepedia.com/Random_distribution
RNG stands for Random Number Generator. It is used colloquially, in gaming, to refer to the probability of certain random events or the underlying implementation of them.
Yeah, but now I think it applies to everything. The last one to go was Spirit Breaker's bash; prior to that, PA's crit.
Funny thing is that with PRNG you can "warm up" via farming creep camps so if you don't crit in say 5 attacks you can go look for a fight and be "guaranteed" a crit with your first swing.
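For anyone who hasn't run into it: the pseudo-random distribution linked above starts each streak with a low proc chance and raises it by a constant after every miss, resetting on a proc. A minimal sketch - the constant C below is purely illustrative; the game's real constants are tabulated so the long-run proc rate matches the advertised percentage:

    // Sketch of a Dota-style pseudo-random distribution ("PRD").
    // Each miss raises the chance by C; a proc resets the streak.
    const C = 0.03  // illustrative value, not one of the game's tabulated constants
    let attemptsSinceProc = 0

    function attack() {
      attemptsSinceProc++
      if (Math.random() < C * attemptsSinceProc) {  // chance grows with each miss
        attemptsSinceProc = 0
        return 'crit'
      }
      return 'normal hit'
    }

This is also why the "warm up on creep camps" trick mentioned above works: a long dry streak pushes the per-attack chance close to a guarantee.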
This will be great for game developers as many a player has complained that the RNG wasn't "fair" because they got so many fails in a row or never saw a critical hit or whatever, even though it was mathematically correctly random. Thanks, looking forward to using it!
Bingo cards (the real ones, not the theoretical ones) often have human interaction as one of the steps in their creation. This is so they appear random, distinct, without patterns, etc.
A good application for this would be preventing "starvation" when doing random rewards in a video game. For instance, if a special item is to be dropped after defeating a boss...
Reminds me of a very interesting demonstration from Martin Gardner. Draw a 6x6 grid and write a random digit in each cell, proceeding row by row. Now count the number of pairs of consecutive identical digits (x, x) going horizontally versus vertically: you will almost always get doubled numbers in the columns, because that's how random numbers work, but almost never in the rows, because when people try to generate "random" numbers by hand they avoid runs and other patterns.
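As a baseline, here's a quick simulation of just the counting step with genuinely random digits (my own sketch, not Gardner's): the horizontal and vertical counts come out about equal, which is exactly the symmetry hand-written "random" digits lack.

    // Fill a 6x6 grid with random digits and count adjacent equal pairs
    // horizontally vs vertically. With true randomness the two counts are
    // statistically the same; hand-generated digits tend to avoid horizontal runs.
    function countDoubles() {
      const n = 6
      const grid = Array.from({ length: n }, () =>
        Array.from({ length: n }, () => Math.floor(Math.random() * 10)))
      let horizontal = 0
      let vertical = 0
      for (let r = 0; r < n; r++) {
        for (let c = 0; c < n - 1; c++) {
          if (grid[r][c] === grid[r][c + 1]) horizontal++
          if (grid[c][r] === grid[c + 1][r]) vertical++
        }
      }
      return { horizontal, vertical }
    }

    console.log(countDoubles())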
It's actually not that bad. The graph just emphasises the subtle differences. The worst one (a 10) was rolled 696 times extra over the expected number of 100,000 rolls. That's like 0.3% deviation.
Eh, I thought this was something more mathematical, like non-transitive dice (https://en.wikipedia.org/wiki/Nontransitive_dice). Apparently it's... a weighted random number generator? in node.js?