More on Couplings
Last time I described how couplings can be used to establish 'closeness' of two random variables. In fact, the results I stated imply that couplings can always be constructed when RVs are close. That still leaves open the question of whether couplings are a practical method of analysis for important RVs. Well, they definitely are, although to judge the full extent of it you'd have to ask a probabilist.
Beyond their direct usefulness, however, I think couplings also illustrate a broader idea: that to reason effectively about a probability distribution, it often pays to find new ways of 'generating' that distribution to better highlight certain features. This is something I learned to do by example in my classes, but always with lingering doubts as to the legitimacy of such reasoning and without an explicit awareness of the method and its generality. (Have readers had similar experiences?)
Here's a very simple example--the simplest--as a warm-up. Let X, Y be 0/1 random variables, with probabilities p, q respectively of getting a 1. Say p < q and the two numbers are 'close'.
Even though it is immediate from the definition of statistical distance that X, Y are close as RVs (which does not mean they are correlated!), let's see this in a second way by 'coupling' the variables.
Let T be a uniformly chosen real number in the unit interval. Then we can build a pair of coupled RVs by defining
X' = 1 iff T > 1 - p, Y' = 1 iff T > 1 - q.
Clearly X', Y' are individually identically distributed to X, Y respectively. (We said nothing about the joint distribution of X, Y, so it is irrelevant and distracting to ask whether they are really 'the same' RVs as X', Y'. This is the kind of question that used to hang me up.) Moreover, X' = Y' unless T lies in the interval (1 - q, 1 - p] (which occurs with probability q - p). This shows X, Y are close via the coupling characterization of statistical distance.
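To make this concrete, here's a minimal simulation sketch in Python (the function names and the sample values p = 0.30, q = 0.35 are mine, purely for illustration):

import random

def coupled_pair(p, q, rng=random):
    # Draw one sample of the coupled pair (X', Y') from a single shared uniform T.
    t = rng.random()             # T is uniform on [0, 1)
    x = 1 if t > 1 - p else 0    # on its own, X' is Bernoulli(p)
    y = 1 if t > 1 - q else 0    # on its own, Y' is Bernoulli(q)
    return x, y

def disagreement_rate(p, q, trials=200_000):
    # Monte Carlo estimate of P(X' != Y'); by the argument above it is q - p.
    return sum(x != y for x, y in (coupled_pair(p, q) for _ in range(trials))) / trials

p, q = 0.30, 0.35    # arbitrary 'close' values with p < q
print(disagreement_rate(p, q))    # roughly 0.05 = q - p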
Hope that was clear; here's another simple example.
Let m > n > 0 be natural numbers, m even. Fill an urn with m balls, half red, half blue. Draw n balls at random without replacement, and record the sequence of red and blue outcomes as a length-n bitstring X_n.
Compare X_n with U_n, the uniform distribution on {0, 1}^n; fixing m, how large can n be before the RVs cease to be close?
First, note the extreme cases:
-if n = 1 the RVs are identical;
-if n = m then they have statistical distance 1 - Θ(1/sqrt(m)): X_n always has Hamming weight exactly m/2, while U_n has that weight only with probability C(m, m/2)/2^m ~ sqrt(2/(pi m)), so checking the weight gives a successful distinguisher (recall the other characterizations of statistical distance from the last post). A short computation below makes this concrete.
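Here's a quick Python check of that figure--an exact computation, with the helper name my own. A uniform string has weight m/2 with probability C(m, m/2)/2^m, and a short calculation shows the statistical distance is exactly one minus that:

from math import comb, sqrt, pi

def sd_at_n_equals_m(m):
    # X_m always has Hamming weight m/2, while a uniform string has that
    # weight with probability comb(m, m/2) / 2^m. The distinguisher
    # 'accept iff the weight is m/2' therefore has advantage
    # 1 - comb(m, m/2)/2^m, which here equals the statistical distance.
    assert m % 2 == 0
    return 1 - comb(m, m // 2) / 2 ** m

for m in (10, 100, 1000):
    # Stirling: comb(m, m/2)/2^m ~ sqrt(2/(pi*m)), hence 1 - Theta(1/sqrt(m)).
    print(m, sd_at_n_equals_m(m), 1 - sqrt(2 / (pi * m)))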
Let's use coupling to examine an intermediate case: we will show that if n << sqrt(m), the RVs remain close. First, note that U_n can be generated by sampling balls with replacement: since half the balls are red, each such draw is an independent fair bit. Let's sample with replacement n times to generate U_n, but now each time 'marking' the ball we draw before throwing it back in the urn. The marks don't affect the outcome of U_n.
To generate X_n in a coupled fashion, use the same sequence of draws, except that we ignore draws that bring up a marked ball, and we perform additional draws if needed until we record n outcomes. The recorded balls are then n distinct balls drawn in uniformly random order--exactly a without-replacement sample.
Standard 'birthday paradox' reasoning assures us that if n << sqrt(m), then with high probability--at least 1 - n(n-1)/(2m), by a union bound over the draws--no marked ball is ever drawn, hence the two coupled RVs are identical, as needed.
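Here's a simulation sketch of this coupling (Python again; the function names are my own and nothing here is canonical). It generates the pair (U_n, X_n) jointly exactly as described and estimates how often the coupling fails:

import random

def coupled_urn_sample(m, n, rng=random):
    # Balls 0..m/2-1 are red (bit 1), the rest blue (bit 0); every draw is
    # with replacement. U_n records the first n draws, while X_n skips
    # draws of already-marked balls, so the balls it records are distinct,
    # i.e. a without-replacement sample.
    assert m % 2 == 0 and 0 < n <= m
    marked = set()
    u_bits, x_bits = [], []
    while len(x_bits) < n:
        ball = rng.randrange(m)
        bit = 1 if ball < m // 2 else 0
        if len(u_bits) < n:
            u_bits.append(bit)    # U_n uses each of the first n draws
        if ball not in marked:
            x_bits.append(bit)    # X_n records only first-time draws
        marked.add(ball)
    return u_bits, x_bits

def coupling_failure_rate(m, n, trials=20_000):
    # Estimate P(U_n != X_n). The pair can differ only if a marked ball is
    # re-drawn among the first n draws, so n(n-1)/(2m) is an upper bound.
    bad = 0
    for _ in range(trials):
        u, x = coupled_urn_sample(m, n)
        bad += (u != x)
    return bad / trials

m = 10_000    # so sqrt(m) = 100
for n in (10, 30, 100):
    print(n, coupling_failure_rate(m, n), n * (n - 1) / (2 * m))

Note that when the coupling fails it does not follow that the RVs are far apart; it only means this particular coupling stops certifying closeness. That gap is part of what makes the next question interesting.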
What is the critical value of n for which X_n, U_n 'part ways' and become distinguishable? Is it O(sqrt(m)), or much larger? I could say more, but I still don't understand the issue as fully as I'd like, so I'll leave it as a (hopefully tantalizing) question for readers.
Next time: couplings applied to convergence of random walks on graphs.