The paper introduces a measure of the information provided by an experiment, derived from Shannon's work on information theory. The measure is based on prior knowledge expressed through a probability distribution over the parameter space. It can be used to compare pairs of experiments without reference to prior distributions, in contrast with the methods proposed by Blackwell, and it is applied to problems of experimental design in which the goal is to gain knowledge about the world rather than to make decisions. The paper discusses the properties of the measure, including its additivity and convexity, and provides examples to illustrate its application. It also explores the relationship between the measure and other methods of comparing experiments, such as those based on loss functions and sufficient statistics. The paper concludes by discussing the implications of the measure for sequential experimentation and the choice of experiments based on the expected gain in information.
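For a finite parameter space, the measure described above can be computed as the prior-averaged Kullback-Leibler divergence between posterior and prior, which equals the mutual information between the parameter and the observation. A minimal sketch, assuming a discrete prior and likelihood (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def expected_information(prior, likelihood):
    """Expected information from an experiment, in nats.

    prior: shape (T,), p(theta_t).
    likelihood: shape (T, X), likelihood[t, x] = p(x | theta_t).
    Returns I(theta; X) = sum_{t,x} p(theta_t, x)
                          * log( p(theta_t, x) / (p(theta_t) p(x)) ).
    """
    joint = prior[:, None] * likelihood      # p(theta, x)
    marginal = joint.sum(axis=0)             # p(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = joint / (prior[:, None] * marginal[None, :])
        terms = np.where(joint > 0, joint * np.log(ratio), 0.0)
    return terms.sum()

# Illustrative example: a coin whose heads-probability is either 0.2
# or 0.8, with a uniform prior, observed for a single toss.
prior = np.array([0.5, 0.5])
likelihood = np.array([[0.8, 0.2],   # theta = 0.2: p(tails), p(heads)
                       [0.2, 0.8]])  # theta = 0.8
info = expected_information(prior, likelihood)
```

The measure is nonnegative, and an experiment whose outcome distribution does not depend on the parameter (identical likelihood rows) provides zero information, consistent with the properties discussed in the paper.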