Approximating Partition Functions

[Guest post by Alex Kelser, Alex Lin, Amil Merchant, and Suproteem Sarkar, scribing for a lecture by Andrej Risteski.]

Andrej Risteski: Approximating Partition Functions via Variational Methods and Taylor Series

For a probability distribution defined up to a constant of proportionality, we have already seen the partition function. To refresh your memory, given a probability mass function p(\vec{x}) \propto f(\vec{x}) over all \vec{x} \in \mathcal{X}, the partition function is simply the normalization constant of the distribution, i.e.
Z = \sum_{\vec{x} \in \mathcal{X}} f(\vec{x})

At first glance, the partition function may appear to be uninteresting. However, upon taking a deeper look, this single quantity holds a wide array of intriguing and complex properties. For one thing, its significance in the world of thermodynamics cannot be overstated. The partition function is at the heart of relating the microscopic quantities of a system – such as the individual energies of each probabilistic state \vec{x} \in \mathcal{X} – to macroscopic entities describing the entire system: the total energy, the energy fluctuation, the heat capacity, and the free energy. Explicit formulas can be found in standard statistical mechanics references. The partition function is also significant in machine learning, since it is intimately linked to computing marginals in a model.

Although there exists an explicit formula for the partition function, the challenge lies in computing the quantity in polynomial time. Suppose \vec{x} is a vector of dimension n where each x_i \in \{-1, 1\} for i \in \{1, \ldots, n\}. We would like a smarter way of finding Z than simply adding up f(\vec{x}) over all 2^n combinations of \vec{x}.
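To make the baseline concrete, here is a minimal brute-force sketch (our own illustration, not from the lecture; the random couplings J are hypothetical):

```python
import itertools
import math

import numpy as np

def log_partition_brute_force(J):
    """log Z = log sum_x exp(sum_{i<j} J_ij x_i x_j) over x in {-1, 1}^n.

    J is a symmetric (n, n) coupling matrix with zero diagonal.
    Runs in O(2^n * n^2) time, so it is only feasible for small n.
    """
    n = J.shape[0]
    log_weights = []
    for x in itertools.product([-1, 1], repeat=n):
        x = np.array(x)
        log_weights.append(0.5 * x @ J @ x)  # 0.5 corrects for double-counting pairs
    m = max(log_weights)  # log-sum-exp trick for numerical stability
    return m + math.log(sum(math.exp(w - m) for w in log_weights))

# example: a random Ising model on n = 10 vertices
rng = np.random.default_rng(0)
n = 10
J = rng.normal(scale=1.0 / n, size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
print(log_partition_brute_force(J))
```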

Andrej Risteski’s lecture focused on using two general techniques – variational methods and Taylor series approximations – to find provably approximate estimates of Z.

Motivation

The general setup addresses undirected graphical models, also known as Markov random fields (MRFs), where the probability mass function p : \mathcal{X} \to [0, 1] has the form
 p(\vec{x}) \propto \exp \left( \sum_{i \sim j} \phi_{ij}(x_i,x_j) \right)
where \vec{x} \in \mathcal{X} is an n-dimensional vector and \{\phi_{ij}\} is a set of potential functions mapping pairs (x_i, x_j) to \mathbb{R}. Note that the notation i \sim j denotes all pairs (i, j) where i, j \in \{1, 2, \ldots, n\} and the edge e = (i, j) exists in the graph.

The talk considered the specific setup where each x_i \in \{-1, 1\}, so \mathcal{X} = \{-1, 1\}^n. Also, we fix

\phi_{ij}(x_i, x_j) = J_{ij} x_i x_j
for some set of coefficients \{J_{ij}\}, thereby giving us the well-known Ising model. Note that the interaction terms could be more complicated and made “less local” if desired, but that was not discussed in the lecture.

These graphical models are common in machine learning, where there are two common tasks of interest:

  1. Learning: given samples, estimate the potentials \phi_{ij}(x_i, x_j) or the coefficients \{J_{ij}\}
  2. Inference: given the potentials \{\phi_{ij}\} or the coefficients \{J_{ij}\} as input, we want to calculate the marginal probabilities, such as
    p(x_i = -1) \text{ or }  p(x_i = 1, x_j = -1)

We focus on the latter task. The problem of inference is closely related to calculating the partition function, which serves as the normalization constant of the model and is classically defined as
 Z = \sum_{x \in \mathcal{X}} \exp \left( \sum_{i \sim j} \phi_{ij}(x_i, x_j) \right)

Although we can write down the aforementioned closed-form expression for the partition function, it is difficult to calculate this quantity in polynomial time. There are two broad approaches to solving inference problems:

  1. Randomized: Methods based on Markov chain Monte Carlo (MCMC), such as Gibbs sampling, that construct a Markov chain whose stationary distribution is the distribution of interest. These have been more extensively studied in theoretical computer science.
  2. Deterministic: Variational methods, self-avoiding walk trees, Taylor expansion methods

Although more often used in practice, randomized algorithms such as MCMC are notoriously hard to debug, and it is often unclear at what point the chain reaches stationarity.

In contrast, variational methods are often more difficult to rigorously analyze but have the added benefit of turning inference problems into optimization problems. Risteski’s talk considered some provable instantiations of variational methods for calculating the partition function.

Variational Methods

Let us start with a basic observation, known as the Gibbs variational principle. It can be stated as follows.

Lemma 1: Let p(\vec{x}) \propto \exp\left(\sum_{i \sim j} J_{ij} x_i x_j\right). Then, we can show that the corresponding partition function satisfies

 \log Z = \max_{q \in \mathcal{Q}} \left \{ \sum_{i \sim j} \underbrace{J_{ij}\mathbb{E}_q[x_i x_j]}_{\text{energy term}} + \underbrace{\mathbb{H}(q)}_{\text{entropy term}} \right \}

where q : \mathcal{X} \to [0, 1] is defined to be a valid distribution over \mathcal{X} and \mathcal{Q} is the set of all such distributions.

Note that we use \mathbb{E}_q(\cdot) to denote the expectation of the inner argument with respect to the distribution q and \mathbb{H}(q) to denote the Shannon entropy of q.

Proof: For any q \in \mathcal{Q}, the KL divergence between q and p must be greater than or equal to 0:

    \mathbb{KL} (q \,\|\, p) = \mathbb{E}_q[\log q(\vec{x})] - \mathbb{E}_q[\log p(\vec{x})] \geq 0

Substituting \log p(\vec{x}) = \sum_{i \sim j} J_{ij} x_i x_j - \log Z, this becomes

 \underbrace{\mathbb{E}_q [\log q(\vec{x})]}_{-\mathbb{H}(q)} - \mathbb{E}_q \left[ \sum_{i \sim j} J_{ij} x_i x_j \right] + \log Z \geq 0

\log Z \geq \sum_{i \sim j} J_{ij} \mathbb{E}_q[x_i x_j] + \mathbb{H}(q)
Equality holds if q = p, leading directly to the lemma above.

Note that the proof would also hold for the more general exponential family, with \phi_{ij}(x_i, x_j) in place of J_{ij} x_i x_j. Also, at most temperatures, one of the two terms – either the energy or the entropy – will dominate.

As a result of this lemma, we have framed the original inference problem of calculating Z as an optimization problem over \mathcal{Q}. This is the crux of variational inference.
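The lemma is easy to sanity-check numerically on a tiny instance; a sketch of our own, with all names hypothetical:

```python
import itertools

import numpy as np

def gibbs_objective(J, q):
    """Energy + entropy of a distribution q, given as {x_tuple: probability}."""
    energy = sum(p * 0.5 * (np.array(x) @ J @ np.array(x)) for x, p in q.items())
    entropy = -sum(p * np.log(p) for p in q.values() if p > 0)
    return energy + entropy

n = 4
rng = np.random.default_rng(1)
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

states = list(itertools.product([-1, 1], repeat=n))
weights = np.array([np.exp(0.5 * (np.array(x) @ J @ np.array(x))) for x in states])
p = dict(zip(states, weights / weights.sum()))  # exact Gibbs distribution
uniform = {x: 1.0 / len(states) for x in states}

print(np.log(weights.sum()))        # log Z
print(gibbs_objective(J, p))        # equals log Z: equality at q = p
print(gibbs_objective(J, uniform))  # a strictly smaller lower bound
```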

Optimization Difficulties

The issue with this approach is that the set of all distributions over \{-1, 1\}^n has dimension exponential in n, so optimizing over this polytope directly is in general NOT tractable. We can imagine two possible solutions:

  1. Inner Approximation

    Instead of optimizing over all q, pick a “nice” subfamily of distributions to constrain q. The prototypical example is the mean-field approximation in which we let q(\vec{x}) = \prod_{i=1}^n q_i(x_i), where each q_i : \{-1, 1\} \to [0, 1] is a univariate distribution. Thus, we approximate \log Z with
    \max_{q = \prod_i q_i} \left \{ \sum_{i \sim j} J_{ij} \cdot \mathbb{E}_{q_i}[x_i] \cdot \mathbb{E}_{q_j}[x_j] + \sum_i \mathbb{H}(q_i) \right \}

    This would provide a lower bound on \log Z. There are a number of potential issues with this method. For one thing, the objective is typically non-convex. Even ignoring this problem, it is difficult to quantify how good the approximation is. (A coordinate-ascent sketch for this objective appears just before the next section.)

  2. Outer Approximation

    Note: Given a distribution q, we use q_S to denote the marginal q(\vec{x}_S) = q(x_{s_1}, \ldots, x_{s_k}) for some set S = \{s_1, \ldots, s_k\} \subseteq \{1, \ldots, n\}.

    In the outer approximation, we relax the polytope we are optimizing over using convex hierarchies. For instance, we can define \mathcal{M}^k as the polytope containing valid marginals q_S over subsets S of size at most k. We can then reformulate our objective as a two-part problem of (1) finding a set of marginals \widetilde{q} = \{\widetilde{q}_S \text{ s.t. } \lvert S \rvert \leq k\} \in \mathcal{M}^k that optimizes the energy term and (2) finding a distribution q whose marginals match \widetilde{q} and that optimizes the entropy term. For simplicity of notation, we use \widetilde{q}_{ij} to denote \widetilde{q}_{S}, where S = \{i, j\}. It follows that we can rewrite the Gibbs equation as

    \log Z = \max_{\widetilde{q} \in \mathcal{M}^k} \left \{ \sum_{i \sim j} J_{ij} \cdot \mathbb{E}_{\widetilde{q}_{ij}}[x_i x_j] + \max_{q \given q_S = \widetilde{q}_S \forall S, \lvert S \rvert \leq k} \mathbb{H}(q) \right \}

    The first term here represents the energy on pairwise marginals, whereas the second term is the maximum entropy subject to matching the marginals used in the energy term. The goal of this procedure is to enlarge the polytope to a tractable set \mathcal{M}^k, defined by a polynomial number of constraints that true marginals satisfy.

    One specific convex hierarchy that is commonly used for relaxation is the Sherali-Adams (SA) hierarchy. Sherali-Adams will allow us to formulate the optimization of the energy term (and an approximation of the entropy term) as a convex program. We introduce the polytope \text{SA}(k) for k \geq 2, which will relax the constraints on \vec{x} to allow for values outside of \{-1, 1\}^n in order to generate a polynomial-time solution for \log Z.

    The Sherali-Adams hierarchy will take care of the energy term, but it remains unclear how to rewrite the entropy term in the context of the convex program. In fact, we will need approximations for \mathbb{H} in order to accomplish this task.

    In this talk, we’ll consider two approximations: one classical one – the Bethe entropy \mathbb{H}_\text{BETHE} and one more recent one – the augmented mean-field pseudo-entropy \mathbb{H}_\text{aMF}.
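Before moving on, here is the coordinate-ascent sketch promised in the inner-approximation discussion above (a standard mean-field heuristic, not specific to the lecture). Writing m_i = \mathbb{E}_{q_i}[x_i], the mean-field objective becomes \sum_{i \sim j} J_{ij} m_i m_j + \sum_i \mathbb{H}_b(\frac{1+m_i}{2}), and setting the derivative in m_i to zero yields the closed-form update m_i = \tanh(\sum_j J_{ij} m_j):

```python
import numpy as np

def mean_field_log_z(J, iters=200, seed=0):
    """Mean-field lower bound on log Z for p(x) ∝ exp(sum_{i<j} J_ij x_i x_j).

    Each q_i is parameterized by its mean m_i = E_{q_i}[x_i]; coordinate
    ascent gives the closed-form update m_i = tanh(sum_j J_ij m_j).
    """
    n = J.shape[0]
    rng = np.random.default_rng(seed)
    m = rng.uniform(-0.1, 0.1, size=n)  # small random init to break symmetry
    for _ in range(iters):
        for i in range(n):
            m[i] = np.tanh(J[i] @ m)
    p = np.clip((1 + m) / 2, 1e-12, 1 - 1e-12)  # q_i(x_i = +1)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p)).sum()
    energy = 0.5 * m @ J @ m  # = sum_{i<j} J_ij m_i m_j (J symmetric, zero diag)
    return energy + entropy
```

Because of the non-convexity noted above, this only converges to a local optimum; different initializations can give different lower bounds.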

Bethe Entropy Approximation

The Bethe entropy works by pretending that the Markov random field is a tree. In fact, we can show that if the graph is a tree, then using the Bethe entropy approximation in the convex program defined by \text{SA}(2) will yield an exact calculation of \log Z.

Specifically, the Bethe entropy is defined as
\mathbb{H}_\text{BETHE} (\widetilde{q}) = \sum_{i \sim j} \mathbb{H}(\widetilde{q}_{ij}) - \sum_i \mathbb{H}(\widetilde{q}_i) (d_i - 1)
where d_i is defined to be the degree of the particular vertex i. Note that there are no marginals over sets of size greater than two in this expression; thus, we have a valid convex program over the polytope \text{SA}(2).
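Evaluating \mathbb{H}_\text{BETHE} from given node and pairwise marginals is mechanical; a small sketch of our own (the edge list and marginal tables are assumed inputs):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability table of any shape."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def bethe_entropy(edges, pair_marginals, node_marginals):
    """H_BETHE = sum_{i~j} H(q_ij) - sum_i (d_i - 1) H(q_i).

    edges: list of (i, j) pairs; pair_marginals[(i, j)]: 2x2 table over
    (x_i, x_j); node_marginals[i]: length-2 vector over x_i.
    """
    degree = {}
    for i, j in edges:
        degree[i] = degree.get(i, 0) + 1
        degree[j] = degree.get(j, 0) + 1
    pair_term = sum(entropy(pair_marginals[e]) for e in edges)
    node_term = sum((degree.get(i, 0) - 1) * entropy(q)
                    for i, q in node_marginals.items())
    return pair_term - node_term
```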

This lemma is well-known, but it will be a useful warmup:

Lemma 2 Define the output of the convex program
\log Z_{\text{BETHE}} =      \max_{\widetilde{q} \in \text{SA}(2)} \left \{ \sum_{i \sim j} J_{ij} \cdot \mathbb{E}_{\widetilde{q}_{ij}} [x_i x_j] + \mathbb{H}_{\text{BETHE}} (\widetilde{q}) \right \}
On a tree, \log Z_{\text{BETHE}} = \log Z and the optimization objective is concave with respect to the \text{SA}(2) variables, so it can be solved in polynomial time.

Proof sketch We will prove 3 claims:

  1. \log Z_{\text{BETHE}} \geq \log Z

    Since the energy term is exact, it suffices to show that for valid marginals \widetilde{q}_S, \mathbb{H}_\text{BETHE}(\widetilde{q}) \geq \max_{q \given q_S = \widetilde{q}_S} \mathbb{H}(q). This can be done by re-writing \mathbb{H}_\text{BETHE} and using a property of conditional entropy, namely that \mathbb{H}(X,Y) = \mathbb{H}(X) + \mathbb{H}(Y|X).

  2. For trees, \log Z_\text{BETHE} is concave in the \text{SA}(2) variables.

    This proof was only sketched in class but is similar to the usual proof of concavity of entropy.

  3. For trees, we can round a solution to a proper distribution over \{-1, 1\}^n which attains the same value for the original \log Z optimization.

    The distribution q that we will produce is
    q(\vec{x}) = \prod_i q(x_i \given x_{\text{pa}(i)})
    where we start at the root and keep sampling down the tree. Based on the tree structure, \mathbb{H}(q) = \mathbb{H}_\text{BETHE}(q). The energy is also the same, since q_{ij}(x_i, x_j) = \widetilde{q}_{ij}(x_i, x_j). Since both terms of the objective attained by q equal the corresponding terms attained by the relaxation, the rounded distribution certifies that \log Z_\text{BETHE} = \log Z.

Remarks:

  1. \mathbb{H}_\text{BETHE} is commonly used on graphs that are not trees.
  2. However, for non-trees, the optimization is often no longer concave.
  3. Therefore, the value obtained by the output equation is neither an upper nor a lower bound.
  4. The fixed points of the belief propagation algorithm are in one-to-one correspondence with the stationary points of the Bethe objective. (See the classical paper by Yedidia et al.) A minimal BP sketch follows this list.
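Here is the sum-product belief propagation sketch referenced in remark 4 (our own illustration; on a tree, the resulting beliefs are the exact marginals, and plugging them into the Bethe objective recovers \log Z, per Lemma 2):

```python
import numpy as np

def belief_propagation(n, edges, J, iters=50):
    """Sum-product BP for p(x) ∝ exp(sum_{i~j} J_ij x_i x_j), x_i in {-1, +1}.

    n: number of vertices; edges: list of (i, j) pairs; J: (n, n) symmetric
    coupling matrix. Returns per-vertex beliefs; on a tree these converge to
    the exact marginals once iters exceeds the diameter.
    """
    vals = np.array([-1.0, 1.0])
    # edge factor psi[(i, j)][a, b] = exp(J_ij * vals[a] * vals[b])
    psi = {e: np.exp(J[e] * np.outer(vals, vals)) for e in edges}
    msgs = {}  # msgs[(i, j)]: message from i to j, a distribution over x_j
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
        msgs[(i, j)] = np.ones(2) / 2
        msgs[(j, i)] = np.ones(2) / 2
    for _ in range(iters):
        for (i, j) in list(msgs):
            # product of messages into i from all neighbors except j
            prod = np.ones(2)
            for k in nbrs[i]:
                if k != j:
                    prod = prod * msgs[(k, i)]
            phi = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T
            new = phi.T @ prod  # marginalize out x_i
            msgs[(i, j)] = new / new.sum()
    beliefs = {}
    for i in range(n):
        b = np.ones(2)
        for k in nbrs[i]:
            b = b * msgs[(k, i)]
        beliefs[i] = b / b.sum()  # belief over x_i = (-1, +1)
    return beliefs
```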

Augmented Mean Field Pseudo-Entropy

For this approximation, we will focus on dense graphs: namely graphs in which each vertex has degree \geq \Delta \cdot n for some \Delta > 0. To simplify things, we focus on distributions of the form
 p(\vec{x}) \propto \exp \left(\sum_{i \sim j} \pm J x_i x_j\right)
where J is some positive constant parameter. The Bethe entropy approximation is undesirable in the dense case, because we cannot bound its error on non-trees. Instead, we proved the following theorem.

Theorem (Risteski ’16)

For a dense graph with parameter \Delta and any \epsilon > 0, there exists an outer approximation based on \text{SA}(\frac{1}{\epsilon^2\Delta}) and an entropy proxy which achieves an O(J \epsilon n^2 \Delta) additive approximation to \log Z. The corresponding convex program has size n^{O(\frac{1}{\epsilon^2\Delta})}, so the algorithm runs in polynomial time for constant \epsilon and \Delta.

To parse the theorem statement, consider the potential regimes for J:

  1. J >> \frac{1}{n} \rightarrow One can ignore the entropy portion and solve for the energy portion that dominates. (So the guarantee is not that interesting.)
  2. J << \frac{1}{n} \rightarrow MCMC methods give a (1+\epsilon)-factor approximation in time poly(n, \frac{1}{\epsilon}). (It’s not clear if the methods suggested here can give such a guarantee – this is an interesting problem to explore.)
  3. J = \Theta(\frac{1}{n}) \rightarrow MCMC mixes slowly, and no other method was known to give a comparable guarantee. This is the interesting regime!

Proof Sketch The proof strategy is as follows: We will formulate a convex program under the SA(k) relaxation that can return a solution \widetilde{q} in polynomial time with value \log \tilde{Z}. Note that this value of the relaxation will be an upper bound on the true partition function value \log Z^*. The convex program solution \widetilde{q} may not be a valid probability distribution, so from it we will construct a “rounded solution” – an actual distribution q with value \log Z. It follows that
\log \tilde{Z} \geq \log Z^* \geq \log Z

We will then aim to put a bound on the gap between \log \tilde{Z} and \log Z; namely, that
    \log Z \geq \log \tilde{Z} - \epsilon \cdot n^2 \cdot J \cdot \Delta
or equivalently,
    \sum_{i \sim j} \mathbb{E}_q[x_i x_j] \cdot J + \mathbb{H}(q) \geq \sum_{i \sim j} \mathbb{E}_{\widetilde{q}}[x_i x_j] \cdot J + \mathbb{H}_\text{aMF}(\widetilde{q}) - \epsilon n^2 J \Delta
This equivalently places a bound on the optimality gap between \log \tilde{Z} and \log Z^*, thereby proving the theorem. Here are the main components of the proof:

1. Entropy Proxy

To approximate \mathbb{H}, we will use the augmented mean-field pseudo-entropy \mathbb{H}_\text{aMF}. This is defined as

 \mathbb{H}_\text{aMF}(q) = \min_{|S| \le k} \left\{ \overbrace{\mathbb{H}(q_S)}^\text{entropy of $S$} + \overbrace{\sum_{i \notin S} \mathbb{H}(q_i \given q_S)}^\text{entropy of everything else} \right\}

Using \mathbb{H}_\text{aMF}, we can write a convex program under SA(k) that provides a relaxation for optimizing \log Z. Specifically, we will let k = O(1 / (\Delta \cdot \epsilon^2)). Let \widetilde{q} be the solution output by this relaxation. We define the rounded solution as
q(\vec{x}) = \widetilde{q}_S(\vec{x}_S) \cdot \prod_{i \notin S} \widetilde{q}(x_i \given \vec{x}_S)
Using the chain rule for entropy, we can show that
\mathbb{H}(q) \geq \mathbb{H}_\text{aMF}(\widetilde{q})
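To spell out this step: under the rounded q, the coordinates outside S are conditionally independent given \vec{x}_S, so the chain rule gives equality for the chosen S, and the minimum over sets can only be smaller:

\mathbb{H}(q) = \mathbb{H}(\widetilde{q}_S) + \sum_{i \notin S} \mathbb{H}(\widetilde{q}_i \given \widetilde{q}_S) \geq \min_{|S'| \le k} \left\{ \mathbb{H}(\widetilde{q}_{S'}) + \sum_{i \notin S'} \mathbb{H}(\widetilde{q}_i \given \widetilde{q}_{S'}) \right\} = \mathbb{H}_\text{aMF}(\widetilde{q})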

2. Rounding Scheme The above distribution q is actually the same distribution produced by correlation rounding, as introduced in a seminal paper of Barak, Raghavendra, and Steurer. By using the definition of mutual information \mathbb{I}(X; Y) and Pinsker’s inequality, which implies \mathrm{Cov}(X, Y) = O(\sqrt{\mathbb{I}(X; Y)}), we can work through some algebra to show that

\sum_{i \sim j} \mathbb{E}_q[x_i x_j] \cdot J \geq \sum_{i \sim j} \mathbb{E}_{\widetilde{q}}[x_i x_j] \cdot J - \epsilon n^2 J \Delta

Then, putting together the entropy and energy effects, we can prove the main theorem.

Remarks:

  1. The above proof easily extends to a more general notion of density, as well as to sparse graphs with “low-rank” spectral structure. See the paper for more information.
  2. In fact, with more work, one can extract guarantees on the best mean-field approximation to the Gibbs distribution in the dense regime. (Intuitively, this can be done because the distribution q produced above is almost mean-field, except for the set S.) This is interesting, since as we mentioned, the quality of the mean-field approximation is usually difficult to characterize. (Moreover, the procedure is algorithmic.) See the recent work of Jain, Koehler and Risteski ’18 for more details and results.

Taylor Series Approximation

Another method of calculating the partition function involves Taylor expanding its logarithm around a cleverly selected point. This approach was first developed by Barvinok ’14, for the purpose of computing the partition function of cliques in a graph. The mathematical techniques developed in this paper can be naturally extended to evaluate a variety of partition functions, including that of the ferromagnetic Ising model. We will use this example to illustrate the general idea of the Taylor expansion technique.

The goal here is to devise a deterministic, fully polynomial-time approximation scheme (an FPTAS) for evaluating the partition function of the ferromagnetic Ising model. This means that the algorithm must run in time poly(n, \frac{1}{\epsilon}) and be correct within a multiplicative factor of 1 + \epsilon. We will work in the general case where the Hamiltonian includes an external magnetic field term, as well as the neighboring spin interaction terms. Using logarithmic identities, we can re-write the partition function slightly differently from last time:

 Z(\lambda) = \sum_{x \in \{ \pm 1 \}^n} \lambda^{|\{i \,:\, x_i = 1\}|} \beta^{|E(x)|}
Here, \lambda is called the vertex activity (or external field), and characterizes the likelihood of a vertex to be in the + configuration. Additionally, \beta \geq 0 is referred to as the edge activity, and characterizes the propensity of a vertex to agree with its neighbors. The ferromagnetic regime (where agreement of spins is favored) corresponds to \beta < 1. |E(x)| denotes the number of cut edges in the configuration x, i.e., the number of neighboring pairs with opposite spins.

As one final disclaimer, the approximation scheme presented below is not valid for |\lambda| = 1; in particular, it excludes \lambda = 1, which corresponds to the zero-magnetic-field case. Randomized algorithms exist which can handle this case.

The general idea here is to hold one parameter fixed (\beta in our case), and express the logarithm of the partition function as a Taylor series in the other parameter (\lambda). From a physics standpoint, the Taylor expansion around \lambda = 0 tells us how the partition function changes as a function of the magnetic field. Without loss of generality, we focus on the case where |\lambda| < 1. This simplification can be justified by a simple symmetry argument. Consider \bar{x}, the flip of x, where all the 1’s (spin-up) are flipped to -1’s (spin-down) and vice-versa. The partition function of the flipped system can be related to that of the original system as follows:

\begin{align*} \sum_x \beta^{|E(x)|} \lambda^{|\{i : x_i = 1\}|} &= \sum_x \beta^{|E(\bar{x})|} \lambda^{n - |\{i : \bar{x}_i = 1\}|} \\ &= \lambda^{n} \sum_x \beta^{|E(\bar{x})|} \left(\frac{1}{\lambda}\right)^{|\{i : \bar{x}_i = 1\}|} \end{align*}

This holds because the number of cut edges remains constant when all the spins are flipped. The identity says that Z(\lambda) = \lambda^n Z(1/\lambda), so for any model with |\lambda| > 1, we can simply consider the flipped system and work with 1/\lambda, which has modulus less than one.

For our purposes, it will be more convenient to approximate the logarithm of the partition function, because a multiplicative (1 + \epsilon) approximation of Z(\lambda) corresponds to an \mathcal{O}(\epsilon) additive approximation of \log Z(\lambda). In fact, if we allow the partition function to be complex, then we can easily show that an additive bound of \frac{\epsilon}{4} for \log Z(\lambda) guarantees a multiplicative bound of (1+\epsilon) for Z(\lambda).
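To spell this out: if \widehat{f} satisfies |\widehat{f} - \log Z(\lambda)| \leq \frac{\epsilon}{4}, then writing \delta = \widehat{f} - \log Z(\lambda) (possibly complex),

\left| \frac{e^{\widehat{f}}}{Z(\lambda)} - 1 \right| = |e^{\delta} - 1| \leq |\delta| e^{|\delta|} \leq \frac{\epsilon}{4} e^{1/4} \leq \epsilon \quad \text{for } \epsilon \leq 1,

so exponentiating the additive estimate yields a (1+\epsilon)-multiplicative approximation of Z(\lambda).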

For notational convenience we define:
 \log Z(\lambda) = f(\lambda)
Thus, the Taylor expansion of f(\lambda) around \lambda = 0 is given by:
f(\lambda) = \sum_j f^{(j)} (0) \frac{\lambda^j}{j!}

The big question now is: Can we get a good approximation from just the lower order terms of the Taylor expansion for f(\lambda)? Equivalently, can we bound the sum of the higher order terms by \frac{\epsilon}{4}? Additionally, we still must address the question of how to actually calculate the lower order terms.

To answer these questions, we make simple observations about the derivatives of f in relation to the derivatives of Z.
\begin{align*}     f'(\lambda) &= \frac{1}{Z(\lambda)} Z'(\lambda) \\     Z'(\lambda) &= f'(\lambda) Z(\lambda) \\     Z^{(m)} (\lambda) &= \sum_{j=0}^{m-1} {m-1 \choose j} Z^{(j)} (\lambda) f^{(m-j)} (\lambda) \end{align*}
The last equation just comes from repeated application of the product rule. Using these relations, we can solve for the first m derivatives of f if we have the first m derivatives of Z using a triangular linear system of equations. As Z(0) = 1, we see that the system is non-degenerate.
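In code, this triangular solve is just the power-series logarithm. A sketch of our own, assuming the Taylor coefficients a_j = Z^{(j)}(0)/j! are available:

```python
def log_series(a):
    """Taylor coefficients of f = log Z from those of Z (requires a[0] == 1).

    Uses Z' = f' Z: comparing coefficients of λ^{m-1} gives
        m * a[m] = sum_{j=1}^{m} j * b[j] * a[m-j],
    a triangular system solved forward for b, the coefficients of f.
    """
    assert a[0] == 1
    b = [0.0] * len(a)
    for m in range(1, len(a)):
        s = sum(j * b[j] * a[m - j] for j in range(1, m))
        b[m] = a[m] - s / m
    return b

# sanity check: Z(λ) = (1 + λ)^2, so log Z = 2 log(1 + λ)
print(log_series([1, 2, 1, 0, 0]))  # ≈ [0, 2, -1, 2/3, -1/2]
```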

Note that Z(\lambda) is a degree-n polynomial with leading coefficient 1 and constant coefficient 1 (corresponding to the configurations where all vertices are positive / negative, respectively). Using this form, we can re-write the partition function in terms of its n (possibly complex) roots r_i:

Z(\lambda) = \prod_{i=1}^n \left(1-\frac{\lambda}{r_i} \right) \text{ for some } r_i

f(\lambda) = \log Z(\lambda) = \sum_{i=1}^n \log \left(1-\frac{\lambda}{r_i}\right)

= - \sum_{i=1}^n \sum_{j=1}^{\infty} \frac{1}{j} \left(\frac{\lambda}{r_i} \right)^j

Supposing we keep the first m terms, the truncation error is bounded by:

\left| f(\lambda) + \sum_{i=1}^n \sum_{j=1}^{m} \frac{1}{j} \left(\frac{\lambda}{r_i}\right)^j \right| \leq \sum_{i=1}^n \sum_{j=m+1}^{\infty} \frac{1}{j} \left|\frac{\lambda}{r_i}\right|^j \leq \sum_{i=1}^n \frac{1}{|r_i|^{m+1}} \cdot \frac{|\lambda|^{m+1}}{(m+1)(1-|\lambda|)}

Here we can invoke the Lee-Yang theorem, which tells us that the zeroes of Z(\lambda) for a system with ferromagnetic interactions lie on the unit circle of the complex plane. So the Lee-Yang theorem guarantees that |r_i| = 1, and we see that the error due to truncation is ultimately bounded by:

\frac{n|\lambda|^{m+1}}{(m+1)(1-|\lambda|)}

Now by the previous symmetry argument, we can assume that |\lambda| < 1. Thus, to achieve an error bound of \frac{\epsilon}{4}, we must have:
\frac{\epsilon}{4} > \frac{n|\lambda|^{m+1}}{(m+1)(1-|\lambda|)}
Rearranging terms and taking the natural log of both sides (which is justified given that |\lambda| < 1), we see that the inequality is satisfied if:

m \geq \frac{1}{\log(\frac{1}{|\lambda|})}(\log(\frac{4n}{\epsilon}) + \log(\frac{1}{1-|\lambda|}))

Thus, we need only retain the first m = \lceil \frac{1}{\log(\frac{1}{|\lambda|})}(\log(\frac{4n}{\epsilon}) + \log(\frac{1}{1-|\lambda|}))\rceil terms of the Taylor series expansion of f(\lambda). This involves calculating the first m derivatives of Z(\lambda). Naively, the k-th derivative at 0 can be computed by summing over all subsets S of vertices with |S| = k assigned spin +1. Since m = O(\log(\frac{n}{\epsilon})) for constant |\lambda|, this takes n^{O(\log(\frac{n}{\epsilon}))} time, which is quasi-polynomial.
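Putting the pieces together on a toy instance (our own end-to-end sanity check, with brute force standing in for the coefficient computation and reusing log_series from the sketch above):

```python
import itertools
import math

def z_coefficients(n, edges, beta):
    """Coefficient of λ^k in Z(λ) = sum_x λ^{#up spins} β^{#cut edges}, brute force."""
    coeffs = [0.0] * (n + 1)
    for x in itertools.product([-1, 1], repeat=n):
        cut = sum(1 for i, j in edges if x[i] != x[j])
        coeffs[x.count(1)] += beta ** cut
    return coeffs

n, beta, lam, eps = 5, 0.5, 0.4, 0.01
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # complete graph K_5
m = math.ceil((math.log(4 * n / eps) + math.log(1 / (1 - lam))) / math.log(1 / lam))
a = z_coefficients(n, edges, beta)
a += [0.0] * max(0, m + 1 - len(a))   # Z has degree n; pad with zero coefficients
b = log_series(a)                      # triangular solve from the sketch above
estimate = sum(b[j] * lam ** j for j in range(1, m + 1))
exact = math.log(sum(c * lam ** k for k, c in enumerate(a)))
print(estimate, exact)                 # agree to within eps / 4
```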

Recent work by Patel and Regts, as well as Liu, Sinclair and Srivastava, has focused on evaluating these coefficients more efficiently (in polynomial time), but is outside the scope of this lecture.

Lee-Yang Theorem

Most of the heavy lifting in the above approach is done by the Lee-Yang theorem. In this section, we sketch how it is proved.

First, let’s define the Lee-Yang property, which we will refer to as the LYP. Let P(\lambda_1, \dots, \lambda_n) be a multilinear polynomial. P has the Lee-Yang property if P(\lambda_1, \dots, \lambda_n) \neq 0 for all complex numbers \lambda_1, \dots, \lambda_n with |\lambda_i| > 1 for all i.

We can then show that the partition function for the ferromagnetic Ising model, which we wrote as
Z(\lambda_1, \dots, \lambda_n) = \sum_x \beta^{|E(x)|} \prod_{i: x_i = 1} \lambda_i
must have this LYP. In the antiferromagnetic case it turns out that all zeroes lie on the negative real axis, but we will focus on the ferromagnetic case.

Proof (sketch):

For the proof that the partition function has the LYP, we will use Asano’s contraction argument. This relies on the fact that certain operations preserve the LYP, and we can “build up” to the partition function of the full graph via these operations.

  • Disjoint union: This is fairly simple to prove, since the partition function for the disjoint union of G and H factors into the product of the two original partition functions: Z_{G \sqcup H} = Z_{G} Z_{H}. If Z_{G} and Z_{H} have the LYP, then it is clear that Z_{G \sqcup H} has the LYP as well.
  • Contraction: Suppose we produce a graph G' from G by contracting two vertices v_1, v_2 in G, which means merging them into one vertex. It can be shown that if Z_G has the LYP, then Z_{G'} also has the LYP. We write the partition function for G as a multilinear polynomial in the two corresponding activities \lambda_1, \lambda_2:
    A\lambda_1 \lambda_2 + B \lambda_1 + C \lambda_2 + D
    The “contracted” graph amounts to deleting the middle two terms, so the partition function for G', in terms of the merged vertex’s activity \lambda', can be written as
    \underbrace{A \lambda'}_{\text{both plus}} + \underbrace{D}_{\text{both minus}}
    We want to show that the partition function for G' has the LYP. Because the partition function of G has the LYP, we can consider the case where \lambda_1 = \lambda_2 = \lambda. By the LYP,
    A\lambda^2 + (B+C)\lambda + D \neq  0
    if |\lambda| > 1, i.e., both roots of this quadratic have modulus at most 1. By Vieta’s formulas, this gives a relation between A and D:
     \frac{|D|}{|A|} = \prod_i |\text{root}_i| \leq 1
    Now, to show that the partition function of G' has the LYP, suppose there is a \lambda' such that A\lambda' + D = 0. For this to be true,
    |\lambda'| = \frac{|D|}{|A|}
    However, this means that |\lambda'| \leq 1, so there is no \lambda' with |\lambda'| > 1 at which the partition function of G' vanishes, and hence it has the LYP.

The final part of this proof is to build up the original graph. We observe that for a single edge, the partition function has the LYP. In this case, writing z_1, z_2 for the vertex activities and B = \beta for the edge activity, the partition function of the two-vertex graph is
z_1z_2 + B z_1 + B z_2 + 1
Suppose |z_1| > 1: then z_1z_2 + Bz_1 + Bz_2 + 1 = 0 would imply |z_2| = \frac{|1+Bz_1|}{|z_1 + B|}. However, z \mapsto -\frac{1+Bz}{z+B} is a Möbius transformation mapping the exterior of the unit disk to its interior when 0 \leq B < 1 (it has modulus 1 on the unit circle and modulus B < 1 at infinity), so |z_2| \leq 1. Thus, it cannot be the case that both z_1 and z_2 have absolute value greater than one.
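A quick numeric sanity check of this claim (our own, with an arbitrary ferromagnetic edge activity):

```python
import numpy as np

rng = np.random.default_rng(0)
B = 0.5  # ferromagnetic edge activity, 0 <= B < 1
for _ in range(5):
    z1 = (1.5 + rng.random()) * np.exp(2j * np.pi * rng.random())  # |z1| > 1
    z2 = -(1 + B * z1) / (z1 + B)  # the root of z1*z2 + B*z1 + B*z2 + 1 = 0
    print(abs(z1) > 1, abs(z2) < 1)  # prints True True on every draw
```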

Since single edges have the LYP, we break the graph into single edges, making copies of shared vertices. These copies are then contracted, building the graph back up and showing that the full partition function has the LYP.

Knowing that the partition function has the LYP, no zeroes of Z(\lambda) lie outside the unit disk. Combined with the symmetry Z(\lambda) = \lambda^n Z(1/\lambda) from earlier (and Z(0) = 1), no zeroes lie strictly inside the disk either, so all the roots are on the unit circle in the complex plane.

Conclusion

Although there exists an explicit formula for the partition function, the challenge lies in computing the quantity in polynomial time. Andrej Risteski’s lecture focused on two less-explored methods in theoretical computer science – namely variational inference and Taylor series approximations – to find provably approximate estimates. Both of these approaches are relatively new and replete with open problems.
