Wednesday, November 30, 2016

(Lossy) Wyner's Common Information and Gács-Körner's Common Randomness Revisited

Recently, I was involved (not so much, to be honest) in a research project dealing with caching, i.e., the storage of data close to a user such that it need not be transmitted anymore. A typical use case is Netflix: Netflix offers $N$ movies/series, and the user can choose any of these anytime. Most users watch during prime time, which taxes the communication network. During the day, the communication network has unused capacities, so one could try storing part of the Netflix database on a hard drive connected to the user's player. This reduces the network load during prime time and improves the user experience.

In our work, we not only looked at the fundamental limits of this caching scheme, but we also got a good idea of what should actually be stored on the user's hard drive ("cache"). The answers are decades old: one should store either what is known as Gács-Körner's common randomness, or what is known as Wyner's common information.

For simplicity, we assume that there are only two files available on Netflix, namely $X$ and $Y$. The common information of these two random variables (RVs; in information theory, everything is random) is another RV $W$ that minimizes
$$I(X,Y; W)$$
subject to $X-W-Y$, i.e., subject to the fact that, given $W$, $X$ and $Y$ are conditionally independent. The common randomness of these two RVs is an RV $U$ that maximizes
$$ I(X,Y; U) $$
subject to the constraint that $X-Y-U$ and $Y-X-U$. It can be shown that
$$ I(X,Y; U) \le I(X;Y) \le I(X,Y; W).$$

Common randomness is something that is truly common, in the usual sense of the word. In particular, if $A$, $B$, and $C$ are independent RVs, and if $X=(A,B)$ and $Y=(A,C)$, then $U=A$. So much for the math. Now here comes an interpretation (and probably one for which an information theorist wants to punch me in the face): If $X$ and $Y$ are two different episodes of your favorite series, then $U$ is the opening theme. If you have a small hard drive, storing the opening theme is probably the most useful idea. Common information is a bit more intricate, but an intuitive explanation is the following: Suppose $X$ and $Y$ are two different episodes of your favorite series, and suppose the only thing $X$ and $Y$ have in common is an actress called $A$. In $X$, some parts of $A$ are seen, while in $Y$ other parts of $A$ are seen. (Say, $X$ shows $A$ running, while $Y$ shows $A$ talking to a person.) Obviously, if you could store all information about actress $A$ as a whole on your hard drive, the small computer in your player could reconstruct running scenes and talking scenes in which $A$ participates. Thus, if your hard drive is large enough, that's what you should store.
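To make the common-randomness example a bit more tangible, here is a minimal numerical sketch (my own addition, assuming Python with numpy; the helper `mutual_information` and the packing of $(A,B)$ into a single index are mine): for $X=(A,B)$ and $Y=(A,C)$ with $A$, $B$, $C$ independent fair bits, the choice $U=A$ indeed attains $I(X,Y;U)=I(X;Y)=1$ bit.

```python
import itertools
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits, computed from a joint PMF given as a 2-D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

# X = (A, B), Y = (A, C) with A, B, C independent fair bits.
# Index X by 2*A + B, Y by 2*A + C, and set U = A.
p_xy = np.zeros((4, 4))        # joint PMF of (X, Y)
p_xyu = np.zeros((16, 2))      # joint PMF of ((X, Y), U)
for a, b, c in itertools.product((0, 1), repeat=3):
    x, y, u = 2 * a + b, 2 * a + c, a
    p_xy[x, y] += 1 / 8
    p_xyu[4 * x + y, u] += 1 / 8

print(mutual_information(p_xy))    # I(X;Y)   -> 1.0 bit
print(mutual_information(p_xyu))   # I(X,Y;U) -> 1.0 bit
```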

Let's take this one step further. Suppose that you are fine with watching not $X$ or $Y$, but lower-quality versions $\hat X$ or $\hat Y$, respectively. This requires the definition of lossy versions of common randomness and common information. And indeed, such definitions have been proposed in 1403.8093. Namely, the lossy common information is an RV $W$ that minimizes
$$ I(X,Y; W)$$
subject to $\hat X - W - \hat Y$ and $(X,Y) - (\hat X,\hat Y) - W$. Lossy common randomness is an RV $U$ that maximizes
$$ I(X,Y; U) $$
subject to the constraints $X-Y-U$, $Y-X-U$, $X-\hat X - U$, and $Y-\hat Y-U$. These definitions are all meaningful because they solve communication problems: they get their operational characterization in the Gray-Wyner problem (1403.8093) or in lossy caching (1610.07304).

The lossy version of common information seems fine to me: I only need to store as much information about actress $A$ as is needed to reconstruct approximate depictions of her running and speaking. This might require more or less space on the hard drive than perfect reconstruction: not only might I store this reduced version of actress $A$, but I might also store, say, a generic building that allows me to reconstruct two different but similar houses appearing in episodes $X$ and $Y$. And indeed, lossy common information may be larger or smaller than lossless common information (see Lemma 1 in 1403.8093).

The lossy version of common randomness, however, is not fully satisfactory to me. If I am not interested in perfect reconstruction, should I not be able to store more on my internal hard drive? For example, if the credits of my two episodes $X$ and $Y$ differ only in a few actors' names, can I not store a "noisy" version of the credits in addition to the opening theme? Nevertheless, the lossy version of common randomness is usually smaller than the lossless version (see Corollary 2 in 1403.8093).

I would thus suggest a different definition of lossy common randomness: The lossy common randomness is an RV $U'$ that maximizes
$$ I(X,Y; U') \text{ or } I(\hat X,\hat Y; U')$$
subject to the constraints $\hat X-\hat Y-U'$, $\hat Y-\hat X-U'$, $X-\hat X - U'$, and $Y-\hat Y-U'$. The difference is subtle, but I think reasonable: It mirrors the definition of lossy common information. While lossy common information requires $W$ to make the reconstructions $\hat X$ and $\hat Y$ conditionally independent, the original definition of lossy common randomness requires that $U$ is a common part of $X$ and $Y$ (the first two Markov constraints). The definition proposed here would require that $U'$ is only a common part of the reconstructions $\hat{X}$ and $\hat Y$. Given this definition, we may ask the following questions:

  1. Does the new definition satisfy the desired property that it is larger than the lossless common randomness?
  2. Is there a communications problem for which this definition is a single-letter characterization?

Monday, October 24, 2016

Invariant Distribution of a Second-Order Markov Chain?

A second-order Markov chain $X$ on a finite state space $\mathcal{X}$ is a stochastic process that satisfies
$$ \mathbb{P}(X_n=x|X_{n-1}=x_{n-1},\dots,X_1=x_1) = \mathbb{P}(X_n=x|X_{n-1}=x_{n-1},X_{n-2}=x_{n-2}). $$
If the right-hand side does not depend on $n$, we call the second-order Markov chain homogeneous and write
$$ Q_{x,y\to z}= \mathbb{P}(X_3=z|X_2=y,X_1=x). $$
We say that this Markov chain is irreducible if and only if, from every pair $(x,y)$, every other state $z$ can be reached in some number of steps. In other words, let
$$ Q^n_{x,y\to z}= \mathbb{P}(X_{n+2}=z|X_2=y,X_1=x). $$
Then $X$ is irreducible if and only if for every $(x,y)$ and every $z$ there exists an $n=n(x,y,z)\ge 1$ such that $Q^n_{x,y\to z}>0$. An even stronger condition is regularity: a second-order Markov chain $X$ is regular if and only if this integer $n$ depends neither on $(x,y)$ nor on $z$. In this case, we write $Q^n>0$.

We are now interested in the invariant distribution of $X$. In other words, we look for a probability distribution $\pi_{x,y}$ on $\mathcal{X}^2$ such that
$$ \pi_{y,z} = \sum_{x\in\mathcal{X}} \pi_{x,y}Q_{x,y\to z}. $$

More precisely, we are interested in the question whether there exists a unique invariant distribution. It is known, for example, that if $X$ is a first-order Markov chain, a unique invariant distribution exists if $X$ is irreducible. A fortiori, for a first-order Markov chain, a unique invariant distribution exists if $X$ is regular. Moreover, for a first-order Markov chain, a unique invariant distribution exists if $X$ is a so-called ergodic unichain, i.e., a chain with a single recurrent communicating class (plus possibly some transient states).

It is easy to show that if $X$ is a second-order Markov chain, then the process $X^{(2)}$ with samples $(X_1,X_2)$, $(X_2,X_3)$, $(X_3,X_4)$, etc. is a first-order Markov chain. The hope is that this fact allows us to compute the invariant distribution (or prove its uniqueness) with the help of the simple first-order case (see, e.g., here). Unfortunately, in the setting we have here, this is not possible: even if $X$ is a regular second-order Markov chain (and thus irreducible), the chain $X^{(2)}$ need not be irreducible. To see this, consider the following example:

Let $X$ be a second-order Markov chain on $\{1,2,3,4\}$ with transition matrix $Q$, where
$$Q=\begin{bmatrix} 0.5 & 0.5 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0.5 & 0.5 \\
0 & 0.5 & 0.5 & 0 \\
0.5 & 0 & 0 & 0.5 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0.5 & 0.5 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0.5 & 0.5\end{bmatrix}$$
In this matrix, the columns are labeled with states $\{1,2,3,4\}$, while the rows are labeled with the state tuples, i.e., $\{11,12,13,14,21,\dots,44\}$.

This Markov chain is regular, since $Q^{10}>0$, $Q^{11}>0$, etc. It turns out, however, that $X^{(2)}$ is not irreducible: depending on the initial states, $X$ either cycles through $1-2-3-4-1$ and $1-2-3-1$ or through $1-4-3-2-1$ and $1-3-2-1$. It follows that $X^{(2)}$ has the transient states $\{(1,1),(2,2),(2,4),(3,3),(4,2),(4,4)\}$ and the two closed communicating classes $\{(1,2),(2,3),(3,1),(3,4),(4,1)\}$ and $\{(1,3),(1,4),(2,1),(3,2),(4,3)\}$. Hence, there is no unique invariant distribution $\pi_{x,y}$.
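For readers who want to verify this numerically, here is a small sketch (my own addition, assuming Python with numpy and scipy): it lifts $Q$ to the transition matrix $P$ of the pair chain $X^{(2)}$ and counts the communicating classes of the corresponding transition graph.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

# Q[(x, y), z] = P(X_3 = z | X_2 = y, X_1 = x), rows ordered 11, 12, ..., 44.
Q = np.array([
    [0.5, 0.5, 0,   0  ],
    [0,   0,   1,   0  ],
    [0,   1,   0,   0  ],
    [0,   0,   1,   0  ],
    [0,   0,   0.5, 0.5],
    [0,   0.5, 0.5, 0  ],
    [0.5, 0,   0,   0.5],
    [1,   0,   0,   0  ],
    [0,   1,   0,   0  ],
    [1,   0,   0,   0  ],
    [0,   0.5, 0.5, 0  ],
    [1,   0,   0,   0  ],
    [0,   1,   0,   0  ],
    [0,   1,   0,   0  ],
    [0,   1,   0,   0  ],
    [0,   0,   0.5, 0.5],
])
S = 4  # size of the original state space

# Lifted first-order chain X^(2): the pair (x, y) moves to (y, z) with probability Q[(x, y), z].
P = np.zeros((S * S, S * S))
for x in range(S):
    for y in range(S):
        for z in range(S):
            P[x * S + y, y * S + z] = Q[x * S + y, z]

# X^(2) is irreducible iff its transition graph has a single strongly connected component.
n_comp, labels = connected_components(P > 0, directed=True, connection='strong')
print("communicating classes of X^(2):", n_comp)   # more than one, so X^(2) is not irreducible
```

Restricting $P$ to either of the two closed classes and computing the corresponding left eigenvector for eigenvalue 1 yields two different invariant distributions, which shows the non-uniqueness directly.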

To find out (a little bit) more, I recommend Chapter 7 of this book. Unfortunately, this is the only reference I have found; the original works are available only in Romanian and on paper. I would be extremely grateful for any further references linked here!

Thursday, September 8, 2016

My First Lecture in Numbers

It's done! The semester is over and I graded the last student of my lecture "Information-Theoretic System Analysis and Design". It was a great experience, and I thank Gerhard Kramer for offering me this opportunity. I also owe Lars, my colleague, more than a few drinks for taking care of most of the problem classes, and my colleagues Andrei and Ali for interesting discussions and comments about the course material. And I also thank my students, who bore with me until the end of the semester and who never tired of asking interesting questions -- I think I've never had such an engaged, excited audience.

Here's the course in numbers:
  • (I think) I held 14 lectures. Lars and I shared 6 problem classes.
  • I spent roughly 200 hours preparing course notes, lecturing, correcting deliverables, etc.
  • In the problem classes, students solved 11 problems in front of their peers. The remaining 7 problems were done by Lars.
  • We had 1 exam and 2 mandatory homework assignments, weighted 50%-25%-25%.
  • Each of the two homework assignments had 9 problems.
  • We had 8 regular students with (as far as I know) 8 different nationalities, originating from 2 continents.
  • 7 of these students took the exam and handed in the 2 homework assignments. The average grade of this course was 1.35.
  • The course notes totaled 102 pages, containing 8 chapters, 40 worked examples, and 62 end-of-chapter problems.
  • 6 students finished the course evaluation. An excerpt is shown below.
Course Evaluation (in parts)
And here are some lessons I learned:
  • My PhD thesis is far from complete.
  • There is never enough time to prepare course notes. Even though the course was based on my PhD thesis, it was hard work to prepare the results in an understandable manner.
  • You have to make sure that the description of homework problems is precise and complete.
  • Don't get confused by registration numbers. 30+ students registered for the course, but in the first lecture less than 10 showed up. Apparently (so I was told), many students register for a course to get the course notes on Moodle.
  • Students prefer a complete set of lecture notes ("Skript") that they can get printed at the Fachschaft. Putting chapters online a few days before the lecture is ok, but a complete script is better.
  • It's good not to have a strict course syllabus if it's the first time you give the course. You never know exactly how much time you have to spend on a given topic, so it's better to leave room for changes. Still, it's a good idea to have at least a rough sketch of the topics you want to cover.

Thursday, July 21, 2016

Reading Notes: Semi-Supervised Clustering

Reading Notes: A brief summary of papers I've read, mainly for personal future reference. The claims about the papers are due to my own (mis-)understanding and reflect my personal opinion. If you know other interesting papers on this topic, I would be happy if you could leave a comment!

Semi-supervised clustering, or constrained clustering, deals with clustering where a few of the data points are labeled, either with constraints that they must belong to the same or to different clusters, or with a cluster label.

Constraints as Features (Asafi, Cohen-Or): Data points are assumed to come from an $M$-dimensional space, and a pairwise distance matrix is given. Must-Link constraints reduce distances, after which all other distances have to be recomputed to satisfy the triangle inequality. Each Cannot-Link constraint adds a dimension to the data set and adds an additional distance between the two corresponding data points; the distances of unconstrained data points are increased according to their diffusion distance to the constrained points (data points that have small diffusion distances to the constrained points get larger distances added). By adding one dimension per constraint, these added distances are "orthogonal" to the original distances, which avoids changing the "general shape" of the data set. The resulting distance matrix can be clustered using standard, unsupervised clustering techniques. Experiments: Image segmentation, UCI repo. Noisy constraints are not considered.

Clustering with Partition Level Side Information (Liu, Fu): Data points are assumed to come from an $M$-dimensional space, and a pairwise distance matrix is given. A subset of the data points is labeled with a cluster label $1$ through $K$. Labeling is assumed to extend the dimension, i.e., each data point is an element of an $(M+K)$-dimensional space; all data points lie in an $M$-dimensional subspace. Data points labeled with cluster $i$ lie in the subspace obtained by shifting the original $M$-dimensional subspace by the unit vector in the $(M+i)$-th direction. A k-means-like clustering algorithm is designed that alternately computes cluster memberships and centroids (see the sketch below). Experiments: UCI repo, analysis of noisy labels.
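As a reading aid, here is a naive sketch of that geometric idea (my own addition, assuming Python with numpy): it simply appends a one-hot label block to the features and runs plain k-means on the augmented points, which is not the paper's actual update rule but captures the picture described above.

```python
import numpy as np

def cluster_with_partition_side_info(X, labels, K, n_iter=100, seed=0):
    """X: (n, M) data matrix; labels: length-n array with cluster ids 0..K-1
    for labeled points and -1 for unlabeled points. Returns a cluster index per point."""
    n = X.shape[0]
    onehot = np.zeros((n, K))
    onehot[labels >= 0, labels[labels >= 0]] = 1.0   # unit vector in direction M + i
    Z = np.hstack([X, onehot])                        # points in (M+K)-dimensional space

    rng = np.random.default_rng(seed)
    centroids = Z[rng.choice(n, size=K, replace=False)]
    for _ in range(n_iter):
        dists = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        assign = dists.argmin(axis=1)
        centroids = np.array([Z[assign == k].mean(axis=0) if np.any(assign == k)
                              else centroids[k] for k in range(K)])
    return assign
```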

Spectral Learning (Kamvar, Klein, Manning): Data points are given in terms of an affinity matrix. The side information can be given either as cluster labels or as Must-Link/Cannot-Link constraints. Data points in the same group get their affinity value overridden to the maximum (without changing the values of close data points; still, the effect on unlabeled points is seen in spectral space), and data points in different groups get the minimum affinity value. From the affinity matrix, a Markov transition probability matrix is generated, from which the $K$ leading eigenvectors are extracted. These eigenvectors are the (spectral) features based on which a classifier is trained on the labeled data points; this classifier is then used to classify the unlabeled data points. A minimal sketch of this pipeline follows below. Experiments: Newsgroup data; performance increases both with the percentage of labeled data points and with the number of unlabeled data points. Noisy labels/constraints are not considered.
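That sketch (my own addition, assuming Python with numpy; the row normalization and the nearest-centroid classifier are my simplifications, not necessarily the paper's exact choices):

```python
import numpy as np

def spectral_learning(A, labels, K):
    """A: (n, n) affinity matrix; labels: length-n array with cluster ids 0..K-1
    for labeled points and -1 for unlabeled points. Returns a cluster id per point."""
    A = A.copy()
    a_max, a_min = A.max(), A.min()
    labeled = np.flatnonzero(labels >= 0)

    # Override affinities between constrained points: same group -> maximum,
    # different groups -> minimum affinity.
    for i in labeled:
        for j in labeled:
            if i != j:
                A[i, j] = a_max if labels[i] == labels[j] else a_min

    # Row-normalize to obtain a Markov transition matrix and take its K leading eigenvectors.
    P = A / A.sum(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eig(P)
    features = eigvecs[:, np.argsort(-eigvals.real)[:K]].real

    # Nearest-centroid classifier trained on the labeled points (placeholder classifier);
    # assumes every cluster 0..K-1 has at least one labeled point.
    centroids = np.array([features[labeled][labels[labeled] == k].mean(axis=0)
                          for k in range(K)])
    dists = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)
```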

Flexible Constrained Spectral Clustering (Wang, Davidson): The constraints are given as Must-Link/Cannot-Link constraints with an additional belief value between 0 and 1. The constraint utility is given as an inner product, i.e., if $Q$ is the matrix with the (real-valued) constraints (positive if $x$ and $y$ are in the same cluster, negative if they are in different clusters, 0 if unlabeled) and $u$ is a cluster assignment vector, then $u^TQu$ is decreased by a clustering that is inconsistent with the constraints. The clustering is obtained by spectral clustering, i.e., by solving the min-cut problem under the constraint that the clustering utility has to be larger than a predefined threshold. This can be done by solving a generalized eigenvalue problem. Experiments: Image segmentation, UCI repo. The analysis of noisy labels is implicit in the belief values attached to the constraints.

Semi-Supervised Information-Maximization Clustering (Calandriello, Niu, Sugiyama): Data points are assumed to come from an $M$-dimensional space and are given. The clustering utility function is the $\chi^2$-divergence between the joint data-label distribution and the product of their marginals ("squared mutual information"). The utility/cost of Must-Link and Cannot-Link constraints is encoded as an inner product between the conditional class distributions of the two corresponding data points. Assuming a kernel density estimate of the probabilities, the cost function can be written as a matrix multiplication, and the optimal clustering is obtained by spectral methods (leading eigenvectors of the matrix). Strange things are going on: the authors add additional constraints for transitivity of the Must-Link and Cannot-Link constraints (for binary clustering), and the kernel matrix is modified to incorporate these constraints, too. It is therefore not clear which idea (modification of the kernel matrix, encoding transitivity, etc.) leads to the observed performance. Experiments: UCI repo, USPS digits, Olivetti faces; performance similar to spectral learning and constrained spectral clustering. Noisy constraints are not considered.

Computing Gaussian Mixture Models with EM using Equivalence Constraints (Shental, Bar-Hillel, Hertz, Weinshall): Data points are assumed to come from an $M$-dimensional space. The constraints are given as Must-Link/Cannot-Link constraints. The data is modeled as coming from a GMM. The transitive closure of the Must-Link constraints (called a chunklet) is treated as a single data point in EM, weighted according to the number of data points in the chunklet. Cannot-Link constraints introduce dependencies between the hidden variables (i.e., the GMM component indicators), which leads to a hidden Markov random field. This complicates the EM algorithm and requires a generalized EM procedure that solves the problem approximately. The EM algorithm then trains a GMM which provides a soft/hard clustering. Experiments: UCI repo; noisy labels/constraints are not considered.

Semi-supervised Learning with Penalized Probabilistic Clustering (Lu, Leen): Data points are assumed to come from an $M$-dimensional space. The constraints are given as Must-Link/Cannot-Link constraints with an additional belief value between 0 and 1. Data is modeled as coming from a GMM. The (beliefs about the) constraints give a prior distribution over all clusterings (ruling out certain clusterings when hard constraints are given). The probabilities for the GMM components are evaluated numerically, after simplifying the model by, e.g., removing overlapping pairwise relations. The generalized EM algorithm trains a GMM which provides a soft/hard clustering. Experiments: UCI repo, partially labeled images (cluster-classes are given), texture image segmentation. Noisy labels/constraints are not considered.

Monday, June 20, 2016

The “I” in IT

This article has been published in the June issue of the Information Theory Society Newsletter, as this month's student column (students are always encouraged to send us their thoughts and ideas!). I gratefully acknowledge Gerhard Kramer's help in improving this article.

Have you ever asked yourself whether IT is the right field of science for you? I never did, but I guess IT wondered more than once whether I am the right person to work on its problems. Here’s my story.

I’m currently a postdoc at the Institute for Communications Engineering, Technical University of Munich, Germany. In this regard, my story has a happy ending, something I wouldn’t have dreamed of years ago. When I started my PhD at Graz University of Technology, Austria, I hadn’t heard a lecture on IT yet. My supervisor and I nevertheless decided to attempt developing a “theory for deterministic information processing”, or so we called it back then. I knew entropy from communications engineering and thought that I could get all the necessary information from Chapter 2 of the “Elements”. I later read Chapter 4 for stochastic processes and even resorted to more detailed results, such as those of Kolmogorov and Pinsker, when I tried taking limits of random variables. Nevertheless, phrases like “a sequence of codes” never appeared in my work. I would probably lose what remains of my scientific credibility if I told you how many of my papers got rejected - so I won’t. I will tell you, though, what most reviewers agreed on: that the information-theoretic quantities I introduced to characterize deterministic systems lack an operational characterization. That was a valid criticism, and I learned what an operational characterization is only a few months before I obtained my PhD. Relating to IEEE Spectrum’s article on Claude Shannon, I despaired that I had “jumped on the bandwagon without really understanding” (Slepian) and contributed to the “widespread abuse” (Pierce) of IT by other fields.

However, since then I have discovered that this kind of “abuse” is successful in scientific fields outside IT. For example, nonlinear adaptive filters are trained using an error entropy, Rényi and Cauchy-Schwarz divergences are used for learning and classification, and the information-bottleneck method enjoys widespread use in clustering. To the best of my knowledge, only a few of these works accompany their objective functions with an operational characterization - mutual information is “just” a nonlinear measure of statistical dependence, and entropy is “just” a statistic that captures more than the error signal’s variance. Remarkably, these heuristic methods often show superior performance. Hence, at least in these other fields, the works are scientifically justified by experiment. (OK, I admit that this is a feeble attempt to justify my work a posteriori. As further self-justification, the Aims & Scope of the IEEE Transactions on Information Theory does state that the journal aims to publish theoretical and experimental papers.)

Should I, therefore, try to publish in other fields and transactions? Probably. My supervisor suggested more than once that I should spread the word, trying to convince scientists in signal processing to use higher-order statistics, i.e., IT quantities. I haven’t listened, though, because I feel at home in the IT community, and I would not want to miss meeting my IT friends regularly at our annual events. Even in hindsight, I would rather submit to orthodox pressure and provide operational characterizations than publish in a different community. In the future, I hope that I can do both.

That is my decision. As the IT society, we all must decide: what are our traditions (really), and how strongly should we hold on to them? Especially when dealing with future directions of IT, as mentioned in arXiv:1507.05941, how should we contribute to making our quantities useful in fields such as signal processing, control theory, and machine learning? Or should we not? For example, should we propose clustering algorithms using IT quantities, or should we rather focus on deriving asymptotic limits for the clustering problem? You, as a PhD student, must make your own decision: Are you sufficiently “in IT” so as to be accepted by the community? If your research topic is on the border between IT and another field, in which direction do you want to go?

Tuesday, May 3, 2016

Riding the Bus in Graz

This post deals with the waiting time paradox; there are several places on the web which explain the paradox, such as the article on Mahalanobis (requires login) with a Poisson arrival process and the more general article on Implicit Note.

Today we do a very small calculation about the waiting time paradox, which describes the following phenomenon: If the driver of bus line 52 stuck exactly to the schedule, a bus would arrive at the stop "HTL-BULME" exactly every 15 minutes. So if a student walks to the stop at a random time, he or she will have to wait 7.5 minutes on average.

If, however, the bus driver cannot keep to the schedule for whatever reason, one has to wait longer. At first glance this seems illogical: there are still four buses per hour, and the time from one bus to the next is still 15 minutes on average. Nevertheless, the expected waiting time -- assuming that one walks to the stop at a random time -- is strictly larger than 7.5 minutes. Mathematically: if the bus runs every $T$ minutes on average, but the deviation from the schedule is $\sigma_T$, one waits on average

$$ \frac{T}{2} + \frac{\sigma_T^2}{2T} $$

minutes until one can board the bus. (Interestingly, this result does not depend on the distribution of the arrival times, but only on their standard deviation!)

Intuitively, one can picture it like this: Suppose four buses run per hour (so $T=15$), but these buses all run immediately one after the other. That is, the first four buses run back to back at 12:00, the next four only at 13:00, and so on. The mean time between buses is still 15 minutes, but the spread $\sigma_T$ is very large:

$$ \sigma_T^2 = \frac{1}{4} (0-15)^2+ \frac{1}{4}  (0-15)^2+ \frac{1}{4}  (0-15)^2+ \frac{1}{4} (60-15)^2 = 675$$

and with the above formula we wait 30 minutes on average. It is as if only a single (large) bus were running.

I was told that the Grazer Linien are allowed to deviate from the schedule by up to two minutes (reference hopefully to follow). This means that $\sigma_T$ for the Grazer Linien is about 1.15 minutes. Instead of 7.5 minutes, the poor BULME students thus wait a whole 7.54 minutes on average for the 52.
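A quick Monte-Carlo check of the formula with the numbers from this example (my own addition, assuming Python with numpy; the gamma-distributed headways are just one arbitrary modeling choice with the desired mean and standard deviation):

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma = 15.0, 1.15   # mean headway and its standard deviation (minutes)

# Gamma-distributed headways with mean T and standard deviation sigma.
gaps = rng.gamma(shape=(T / sigma) ** 2, scale=sigma ** 2 / T, size=1_000_000)
arrivals = np.cumsum(gaps)                   # bus arrival times

# Passengers show up uniformly at random over the covered time span
# and wait for the next bus.
passengers = rng.uniform(0, arrivals[-1], size=1_000_000)
waits = arrivals[np.searchsorted(arrivals, passengers)] - passengers

print(np.mean(waits))                 # simulated mean waiting time, approx. 7.54
print(T / 2 + sigma ** 2 / (2 * T))   # formula: 7.5 + 1.3225/30 = 7.544...
```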

Tuesday, March 8, 2016

OMG, KLD is only LSC!

Recently, my colleague Georg came to my office to show me an exciting result he found in a book: a bound on mutual information (MI) based on the variational distance (VD) between the joint distribution and the product of the marginal distributions. In other words, if $X$ and $Y$ are random variables with probability mass functions (PMFs) $p_X$ and $p_Y$, and a joint PMF $p_{XY}$, one can write

$$ I(X;Y) \le C \Vert p_{XY}-p_Xp_Y\Vert, $$

where $C$ only depends on the alphabet sizes of $X$ and $Y$ and where

$$\Vert p_{XY}-p_Xp_Y\Vert = \sum_{x,y} |p_{XY}(x,y) - p_X(x)p_Y(y)| .$$

This of course implies that if the variational distance goes to zero, then so does the mutual information. This result is exciting since in general it is not possible to bound Kullback-Leibler divergence (KLD; mutual information is a special case of a Kullback-Leibler divergence) via variational distance, at least not by a bound that does not depend on the distributions. The reverse is always possible: The Kullback-Leibler divergence bounds variational distance via the well-known Pinsker inequality.

While I was surprised by the existence of such a bound, I was not surprised by the weaker result that if the VD goes to zero, then so does the MI. At least if $X$ and $Y$ have finite alphabets, the entropies $H(X)$, $H(Y)$, and $H(X,Y)$ are continuous in the PMFs; hence convergence of the PMFs implies convergence of the entropies and, therefore, of the MI.

Very interestingly, however, it turned out that KLD is NOT continuous, but only lower semi-continuous (LSC), even if the supports of the distributions are finite:

A sequence of (probability) measures $\mu_n$ is said to converge setwise or strongly to another probability measure $\mu$ iff for every measurable set $A$ one has $\mu_n(A)\to \mu(A)$. I think that if the $\mu_n$ are discrete probability measures with finite alphabets, i.e., can be described by PMFs $p_n$ with finite support, then setwise convergence is equivalent to the fact that, for every point $x$ in the support, $p_n(x)\to p(x)$. I furthermore think that for such finite PMFs setwise convergence is equivalent to $\Vert p_n-p\Vert\to 0$. (I should check that.) Now let us take two sequences $p_n$ and $q_n$ of PMFs, both with finite support. One would guess that if $p_n \to p$ and $q_n\to p$, then the KLD

$$ D(p_n\Vert q_n) = \sum_x p_n(x)\log\frac{p_n(x)}{q_n(x)} \to 0.$$

But that's not true. While entropies are continuous w.r.t. setwise convergence, KLD is only LSC. It follows that the divergence $D(p_n\Vert q_n)$ above can stay bounded away from zero, even if both $p_n$ and $q_n$ converge to $p$. Here is an example that you can easily work out yourself: $p_n(0) = 1-1/n$, $p_n(1)=1-p_n(0)$ and $q_n(0)=1-e^{-n^2}$, $q_n(1)=1-q_n(0)$.
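If you prefer to let a computer do the working out, here is a minimal sketch (my own addition, assuming Python with numpy) that evaluates the example:

```python
import numpy as np

def kld(p, q):
    """Kullback-Leibler divergence D(p||q) in nats, for PMFs with full support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

def vd(p, q):
    """Variational distance, i.e., the L1 distance between the PMFs."""
    return float(np.sum(np.abs(np.asarray(p, float) - np.asarray(q, float))))

for n in (2, 5, 10, 20):
    p = np.array([1 - 1 / n, 1 / n])
    q = np.array([1 - np.exp(-n**2), np.exp(-n**2)])
    print(n, vd(p, q), kld(p, q))

# The variational distance between p_n and q_n tends to zero,
# yet D(p_n||q_n) grows roughly like n.
```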