I'm trying to find the maximum-entropy probability distribution Q(x,y,z) constrained to reproduce the pairwise marginal distributions P(x,y), P(y,z), and P(z,x) of some other distribution P(x,y,z).
It is known that Q takes the form Q=a(x,y)b(y,z)c(z,x) for some functions a,b,c to be determined by solving the system of equations:
P(x,y) = sum_z Q(x,y,z)
P(y,z) = sum_x Q(x,y,z)
P(z,x) = sum_y Q(x,y,z)
It's not clear that a general closed-form solution exists, but iterative algorithms are known. This type of problem comes up in a number of interesting contexts: testing for non-trivial multi-variable interactions in dynamical systems such as neural networks or spin networks, performing joins on probabilistic databases, constructing reduced models of probability distributions, and some cooperative game theory problems.
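The standard iterative algorithm here is iterative proportional fitting (IPF): start from the uniform distribution and cyclically rescale Q to match each pairwise marginal in turn. Each update multiplies Q by a factor depending on only two of the three variables, so the result stays in the product form a(x,y)b(y,z)c(z,x). A minimal sketch in NumPy (the function name and iteration count are my own; this assumes a strictly positive P so no division by zero occurs):

```python
import numpy as np

def ipf_three_marginals(P, iters=1000):
    """Iterative proportional fitting: approximate the max-entropy
    Q(x,y,z) whose pairwise marginals match those of P(x,y,z).
    P is a nonnegative 3D array summing to 1, assumed strictly positive."""
    # Target pairwise marginals of P.
    Pxy = P.sum(axis=2)  # P(x,y)
    Pyz = P.sum(axis=0)  # P(y,z)
    Pxz = P.sum(axis=1)  # P(x,z)
    # Uniform start = the unconstrained max-entropy distribution.
    Q = np.ones_like(P) / P.size
    for _ in range(iters):
        # Rescale to match each marginal in turn; each factor depends
        # on two variables only, preserving the a*b*c product form.
        Q *= (Pxy / Q.sum(axis=2))[:, :, None]
        Q *= (Pyz / Q.sum(axis=0))[None, :, :]
        Q *= (Pxz / Q.sum(axis=1))[:, None, :]
    return Q
```

For strictly positive P this converges geometrically to the unique maximum-entropy Q satisfying all three marginal constraints; the a, b, c factors are the accumulated per-step scaling factors.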
Oh, this is neat: it's like adding an additional dimension to the Wasserstein distance / optimal transport problem, in the sense that you are using marginals as constraints. Kantorovich won a Nobel Prize for this kind of work, so it's definitely hard.
Examples: https://www.princeton.edu/~wbialek/our_papers/schneidman+al_...
http://vldb.org/conf/1987/P071.PDF
https://doi.org/10.6028/jres.072b.019
https://www.mdpi.com/1099-4300/16/4/2161