Finitary Models of Language Users

Unknown Author
This chapter discusses models and measures for describing talkers and listeners, emphasizing the distinction between language users and language itself. It highlights that natural languages cannot be fully characterized by linear grammars but require models that reflect human limitations. The chapter explores stochastic models, which use probability distributions to describe communication processes, and algebraic models that compare human and machine limitations. It introduces Markov sources, which model sequences of symbols with probabilities, and discusses their limitations in capturing complex linguistic patterns.
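The Markov sources described above generate each symbol with a probability that depends only on the current state. A minimal sketch of such a first-order source, with an invented two-symbol transition table purely for illustration:

```python
import random

def sample_markov(transitions, start, length, rng):
    """Generate a symbol sequence from a first-order Markov source.

    transitions: dict mapping each state to a list of (next_symbol, prob)
    pairs; the probabilities for each state sum to 1.
    """
    seq = [start]
    state = start
    for _ in range(length - 1):
        symbols, probs = zip(*transitions[state])
        state = rng.choices(symbols, weights=probs, k=1)[0]
        seq.append(state)
    return seq

# Hypothetical source: after 'a', 'b' is likely, and vice versa.
transitions = {
    "a": [("a", 0.2), ("b", 0.8)],
    "b": [("a", 0.7), ("b", 0.3)],
}
seq = sample_markov(transitions, "a", 20, random.Random(0))
print("".join(seq))
```

A k-limited source extends this by conditioning on the previous k symbols rather than one; the same sampler works if each state is a tuple of the last k symbols.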
The chapter also examines k-limited stochastic sources, which extend Markov models to handle longer dependencies. It argues that while these models can approximate natural language, they cannot fully capture the complexity of human language use. The chapter further introduces a measure of selective information, which quantifies the uncertainty in message selection, and discusses redundancy as a measure of how efficiently language is used. The analysis shows that natural languages have significant redundancy, indicating that they are not as efficient as theoretically possible. The chapter concludes that while stochastic and algebraic models provide useful insights, they must account for the limitations of human language users.
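The measure of selective information mentioned above is the entropy H = -Σ pᵢ log₂ pᵢ of the selection probabilities, and redundancy compares H to the maximum log₂ N attainable over an alphabet of N symbols. A small sketch, using an illustrative distribution (not figures from the chapter):

```python
import math

def entropy_bits(probs):
    """Selective information H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def redundancy(probs):
    """Redundancy = 1 - H / H_max, where H_max = log2(alphabet size)."""
    return 1.0 - entropy_bits(probs) / math.log2(len(probs))

# Skewed four-symbol distribution, chosen only for illustration:
p = [0.5, 0.25, 0.125, 0.125]
print(entropy_bits(p))  # 1.75 bits, versus a 2-bit maximum
print(redundancy(p))    # 0.125
```

A uniform distribution gives zero redundancy; the substantial redundancy of natural languages reflects how far actual symbol statistics fall short of that maximum.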