I saw a nice result sketched a few weeks ago, on the “online matching” problem. Below I try to re-explain the result, using some idiomatic (and as a bonus, inoffensive) terminology which I find makes it easier to remember what’s going on.

**Online Matching**. There is a set of items, call them Lunches, which you want to get eaten. There is a set of people who arrive in sequence, call them Diners. Each Diner is willing to eat certain Lunches but not others. When a Diner shows up, you may give them any remaining Lunch they like, but each Lunch can be sold only once. The prototypical difficulty is that a Diner shows up after you have already sold all of the Lunches they like. Your task: find a (randomized) strategy to maximize the (expected) number of Lunches sold, relative to the size of the maximum Lunch-Diner matching.

This is an *online* problem: you don’t have all the information about Diner preferences before the algorithm begins, and you need to start committing to decisions before you know everyone’s preferences in full.

Consider the following strategy:

**Algorithm**: pick a random ordering/permutation of the lunches; then when a diner arrives, give her the first available lunch in the ordering that she likes.
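As a quick sketch of this strategy (the function name, argument layout, and instance representation are my own, not from the talk):

```python
import random

def ranking(lunches, diners, likes):
    """Serve diners greedily along one random ranking of the lunches.
    likes[d] is the set of lunches diner d is willing to eat."""
    order = list(lunches)
    random.shuffle(order)              # one random permutation, fixed up front
    rank = {lunch: i for i, lunch in enumerate(order)}
    remaining = set(lunches)
    matched = {}
    for d in diners:                   # diners arrive online, in sequence
        options = [l for l in likes[d] if l in remaining]
        if options:
            choice = min(options, key=rank.get)   # first available lunch she likes
            matched[d] = choice
            remaining.remove(choice)
    return matched
```

Note the permutation is chosen once, before any diner arrives; the per-diner decisions are then deterministic given the ranking.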

How does this perform? In expectation at least (1-1/e)*M* lunches are sold, where *M* is the maximum (offline) matching size in the Lunch-Diner graph. But proving so is tricky. (This ratio 1-1/e turns out to be the best possible.) Birnbaum and Mathieu recently gave a simpler proof that this ratio is achieved. Amin Saberi, whose talk sketched this, rightly pointed out that it feels like a “proof from The Book!”

With a small lemma (e.g. Lemma 2 in Birnbaum-Mathieu), we may assume that the numbers of diners and lunches are equal and that there is a perfect matching between diners and lunches. Let the number of diners and the number of lunches each be *n*, so that *M* = *n*.

Let *x*_{t} be the probability that, in a random lunch permutation, upon running **Algorithm**, the lunch in position *t* is eaten.

**Experiment 1**. Take a random lunch permutation. Score 1 if the lunch at position *t* is not eaten, 0 if it is eaten. The expected value of this experiment is 1-*x _{t}*.

**Experiment 2**. Take a random lunch permutation, and a random diner. Score 1 if that diner eats a lunch in one of the first *t* positions, 0 otherwise. The expected value of this experiment is (*x*_{1} + … + *x*_{t})/*n*.

The key is to show that the second experiment has expected value greater than or equal to that of the first; an easy calculation then finishes the job. We do this with a joint experiment, similar to “coupling” arguments.

**Joint experiment**. Let *t* be fixed. Take a random lunch permutation π_{1}, and look at the lunch L in position *t*. Let D be matched to L in the perfect matching. Obtain π_{2} from π_{1} by removing L and re-inserting L in a random position.

*Key claim*: π_{2} and D are independent, uniformly distributed random variables. This implies that we can run both experiments at the same time using this joint distribution on π_{1}, π_{2}, L, D. Moreover,

**Deterministic Lemma**. For any π_{1}, and for an uneaten lunch L at position t in π_{1}, obtain π_{2} from π_{1} by removing L and re-inserting L in any position. Let D be matched to L in the perfect matching. Then in π_{2}, D eats one of the first *t* lunches.

This implies that whenever experiment 1 scores a point, so does experiment 2. So taking expectations,

(*x*_{1} + … + *x*_{t})/*n* ≥ 1 - *x*_{t}.
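This inequality can be sanity-checked by Monte Carlo on a toy instance of my own choosing (a cycle: diner *i* likes lunches *i* and *i*+1 mod *n*, so the identity is a perfect matching); a sketch:

```python
import random

def simulate(n, trials=20000):
    """Estimate x_t (probability the lunch ranked t is eaten) by running
    the random-ranking algorithm many times on a cycle instance.
    Returns a list indexed 0..n-1, corresponding to positions t = 1..n."""
    likes = {d: {d, (d + 1) % n} for d in range(n)}
    eaten_at = [0] * n      # eaten_at[p] counts runs where position p's lunch is eaten
    for _ in range(trials):
        order = list(range(n))
        random.shuffle(order)
        rank = {l: i for i, l in enumerate(order)}
        remaining = set(range(n))
        for d in range(n):
            options = [l for l in likes[d] if l in remaining]
            if options:
                l = min(options, key=rank.get)
                remaining.remove(l)
                eaten_at[rank[l]] += 1
    return [c / trials for c in eaten_at]
```

On this instance the estimated (x_1 + … + x_t)/n stays above 1 - x_t for each t, up to sampling noise, as the lemma predicts.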

This equation is the crux. Let *x*_{≤t} = *x*_{1} + … + *x*_{t}; then we see

*x*_{≤t} ≥ (*n*/(*n*+1))(1 + *x*_{≤t-1}).

This recursion is easy to unravel and it gives a lower bound of *n*(1 - (*n*/(*n*+1))^{n}) ≥ *n*(1-1/e) on *x*_{≤n}, the expected size of the matching!
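The unravelling is just arithmetic; a small sketch (function names are mine) iterating the recursion from *x*_{≤0} = 0 and comparing with the closed form:

```python
def unravel(n):
    """Iterate x_{<=t} = (n/(n+1)) * (1 + x_{<=t-1}) starting from x_{<=0} = 0,
    i.e. take the recursive lower bound with equality for n steps."""
    r = n / (n + 1)
    x = 0.0
    for _ in range(n):
        x = r * (1 + x)
    return x

def closed_form(n):
    """The geometric-sum closed form n * (1 - (n/(n+1))**n)."""
    return n * (1 - (n / (n + 1)) ** n)
```

For instance, `unravel(1000) / 1000` is already within 10^{-3} of 1 - 1/e.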

(This is a cross-post from my blog, which has posts commonly on math and/or lunches.)


1-(n/(n+1))^{n} ≤ 1-1/e, not 1-(n/(n+1))^{n} ≥ 1-1/e: it approaches 1-1/e from below, not from above. Hence the proof gives the 1-1/e approximation only for large enough *n*.

That’s absolutely correct… upon re-inspection there is an additive constant’s worth of gap in the expected number of matches achieved. One might try to fix this by noting that there is a better lower bound of 1 on *x*_{1}, but it does not help enough.

I am not sure if a better analysis removes this gap or if it is inherent to the algorithm. This coupling argument seems pretty rigid to me. There is a longer analysis by Goel and Mehta (http://dl.acm.org/citation.cfm?id=1347189) of which Birnbaum-Mathieu is a distillate, but I could not parse a yes or no answer from there immediately.