VOLUME 5, NUMBER 4 OCTOBER, 1962 | MORTON FLESHLER AND HOWARD S. HOFFMAN
The article by Morton Fleshler and Howard S. Hoffman of Pennsylvania State University describes how to construct a variable-interval (VI) schedule in which the probability of reinforcement, P(rf), remains constant over time. In typical VI schedules built from arithmetic or geometric progressions, P(rf) rises with time since the last reinforcement. To remove this bias, the authors derive a new progression whose intervals yield a constant P(rf). The underlying interval distribution is:
\[ P_d(t) = -(1 - p)^t \ln(1 - p) \]
where \( P_d(t) \) is the probability density of the interval length \( t \), and \( p \) is the fixed probability of reinforcement within any unit interval; the mean interval of this distribution is \( [-\ln(1 - p)]^{-1} \). To approximate this ideal with a finite set of intervals, the authors divide the distribution into \( N \) equal-area classes, each with relative frequency \( 1/N \). The mean of each class, computed from the theoretical distribution, forms a progression with the desired property. The equation for this progression is:
\[ \bar{t}_n = [-\ln(1 - p)]^{-1} \left[ 1 + \ln N + (N - n) \ln (N - n) - (N - n + 1) \ln (N - n + 1) \right] \]
with the convention that \( 0 \ln 0 = 0 \) in the final term of the series (\( n = N \)).
The authors work through an example of a VI 30-sec schedule with \( N = 12 \), listing the resulting progression of intervals. They also discuss the practical limits set by temporal discrimination in real organisms, noting that increasing \( N \) brings the schedule closer to the ideal of a constant P(rf) over time.
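The progression is straightforward to compute directly from the equation above. The following is a minimal Python sketch; the function name `fleshler_hoffman` and the explicit handling of the \( 0 \ln 0 = 0 \) convention are my own framing, not from the article:

```python
import math

def fleshler_hoffman(N, mean_interval):
    """Interval values for a Fleshler-Hoffman VI schedule.

    Each value is the mean of one of N equal-area classes of the
    exponential distribution whose mean is `mean_interval`, so the
    probability of reinforcement stays (approximately) constant in time.
    """
    intervals = []
    for n in range(1, N + 1):
        k = N - n
        # Convention: 0 * ln(0) is taken as 0 (this arises when n = N).
        k_term = k * math.log(k) if k > 0 else 0.0
        t = mean_interval * (1.0 + math.log(N) + k_term
                             - (k + 1) * math.log(k + 1))
        intervals.append(t)
    return intervals

# The article's example: a VI 30-sec schedule with N = 12.
vi30 = fleshler_hoffman(12, 30.0)
```

Because the logarithmic terms telescope when summed over \( n \), the \( N \) class means average exactly to the schedule mean: the twelve intervals above average 30 sec, ranging from about 1.3 sec up to roughly 105 sec.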