The paper addresses the domain adaptation problem in statistical learning, where the training and test data are drawn from different distributions. The authors propose a novel framework that treats in-domain and out-of-domain data as mixtures of "truly in-domain," "truly out-of-domain," and "general domain" distributions. They apply this framework to maximum entropy classifiers and their linear-chain counterparts, using conditional expectation maximization (CEM) for efficient inference. Experimental results on four natural language processing datasets show that their approach significantly improves performance over baseline models. The authors also examine the internal workings of the model, providing insights into how it judges the degree of relatedness between domains.
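To make the three-component mixture idea concrete, here is a minimal, illustrative sketch rather than the authors' actual model: it replaces their conditional maximum entropy components and CEM with simple 1-D Gaussian components and plain EM, and all data, names, and numbers are invented for the example. The structural point it illustrates is that in-domain samples are explained only by the "truly in-domain" and "general" components, while out-of-domain samples are explained only by the "truly out-of-domain" and "general" components, with the general component shared across corpora.

```python
# Illustrative sketch (NOT the paper's model): a shared-component mixture with
# three Gaussians -- truly in-domain, general, truly out-of-domain -- fit by EM.
# In-domain data may only use {truly-in, general}; out-of-domain data may only
# use {general, truly-out}. The paper uses maxent classifiers and CEM instead.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic corpora: both share a "general" component (mean 0); each also has
# its own domain-specific component (means +3 and -3). All values are made up.
x_in = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
x_out = np.concatenate([rng.normal(0, 1, 200), rng.normal(-3, 1, 200)])

mu = np.array([3.0, 0.0, -3.0])   # initial means: [truly-in, general, truly-out]
sigma = np.ones(3)
pi_in = np.array([0.5, 0.5])      # in-domain mixing over [truly-in, general]
pi_out = np.array([0.5, 0.5])     # out-of-domain mixing over [general, truly-out]

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: per-example responsibilities, restricted to the components
    # each corpus is allowed to use.
    r_in = np.stack([pi_in[0] * normal_pdf(x_in, mu[0], sigma[0]),
                     pi_in[1] * normal_pdf(x_in, mu[1], sigma[1])], axis=1)
    r_in /= r_in.sum(axis=1, keepdims=True)
    r_out = np.stack([pi_out[0] * normal_pdf(x_out, mu[1], sigma[1]),
                      pi_out[1] * normal_pdf(x_out, mu[2], sigma[2])], axis=1)
    r_out /= r_out.sum(axis=1, keepdims=True)

    # M-step: update mixing weights and component parameters from soft counts.
    # The general component pools weighted data from both corpora.
    pi_in, pi_out = r_in.mean(axis=0), r_out.mean(axis=0)
    weights = [r_in[:, 0], np.concatenate([r_in[:, 1], r_out[:, 0]]), r_out[:, 1]]
    points = [x_in, np.concatenate([x_in, x_out]), x_out]
    for k in range(3):
        mu[k] = np.average(points[k], weights=weights[k])
        sigma[k] = np.sqrt(np.average((points[k] - mu[k]) ** 2, weights=weights[k])) + 1e-6

print("in-domain mix  [truly-in, general]:", np.round(pi_in, 2))
print("out-domain mix [general, truly-out]:", np.round(pi_out, 2))
```

In this toy setup, the learned mixing weights play the role the paper attributes to its model's internal judgments: the larger the mass placed on the shared "general" component, the more closely related the two domains appear to be.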