The paper discusses the challenges of online learning in domains where the target concept depends on hidden contexts, leading to concept drift. It introduces a family of learning algorithms that can adapt to concept drift and take advantage of situations where contexts reappear. The core approach involves maintaining a window of currently trusted training examples, storing concept descriptions so that they can be reused when a previous context reappears, and controlling both of these functions with a heuristic that monitors the system's behavior. The paper reports on experiments testing the systems' performance under various conditions, including different levels of noise and different rates and extents of concept drift.
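To make the window-plus-heuristic idea concrete, here is a minimal sketch in Python. It is not the paper's algorithm: the class name `WindowedLearner`, the majority-label placeholder hypothesis, and the simple accuracy-drop trigger are illustrative assumptions standing in for the FLORA systems' actual learning procedure and window adjustment heuristic.

```python
from collections import deque

class WindowedLearner:
    """Minimal sketch of learning over a sliding window of examples (not FLORA itself)."""

    def __init__(self, max_window=100):
        self.window = deque()                 # recent (example, label) pairs
        self.max_window = max_window
        self.recent_hits = deque(maxlen=20)   # rolling record of correct predictions

    def predict(self, example):
        # Placeholder hypothesis: majority label in the current window.
        if not self.window:
            return None
        labels = [y for _, y in self.window]
        return max(set(labels), key=labels.count)

    def update(self, example, label):
        # Test on the new example before learning from it.
        self.recent_hits.append(self.predict(example) == label)

        # Learn: add the example; forget: drop the oldest when the window is full.
        self.window.append((example, label))
        if len(self.window) > self.max_window:
            self.window.popleft()

        # Heuristic: if recent accuracy collapses, suspect concept drift and
        # shrink the window so outdated examples are forgotten faster.
        if len(self.recent_hits) == self.recent_hits.maxlen:
            accuracy = sum(self.recent_hits) / len(self.recent_hits)
            if accuracy < 0.5:
                for _ in range(len(self.window) // 2):
                    self.window.popleft()
                self.recent_hits.clear()      # restart the accuracy estimate
```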
The authors describe the *FLORA* framework, which represents concept descriptions as sets of description items (conjunctions of attribute-value pairs) partitioned into three sets: accepted descriptors (ADES), which match only positive examples; potential descriptors (PDES), which match both positive and negative examples; and negative descriptors (NDES), which match only negative examples. The system updates these sets as examples enter and leave the window, with the goal of maintaining a consistent and accurate hypothesis. The paper also introduces *FLORA2*, which dynamically adjusts the window size based on heuristic indicators of concept drift; *FLORA3*, which stores and reuses old concept descriptions when contexts reappear; and *FLORA4*, which improves robustness against noise by using statistical confidence measures to evaluate hypotheses.
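The migration of descriptors between the three sets can be pictured as simple counter bookkeeping. The sketch below is a simplification under stated assumptions: descriptors are frozensets of attribute-value pairs, each new example seeds only a maximally specific descriptor, and the generalization and specialization operators of the actual FLORA algorithms are omitted; only the ADES/PDES/NDES partition by match counts is shown.

```python
class DescriptorStore:
    """Sketch of ADES/PDES/NDES bookkeeping (generalization steps omitted).

    Each descriptor carries counts of the positive and negative window
    examples it matches:
      - ADES: matches positives only   (pos > 0, neg == 0)
      - PDES: matches both classes     (pos > 0, neg > 0)
      - NDES: matches negatives only   (pos == 0, neg > 0)
    """

    def __init__(self):
        self.counts = {}  # descriptor (frozenset of attr-value pairs) -> [pos, neg]

    def _matches(self, descriptor, example):
        return descriptor <= example  # every attribute-value pair is present

    def add_example(self, example, positive):
        """Called when an example enters the window."""
        example = frozenset(example.items())
        self.counts.setdefault(example, [0, 0])  # seed a maximally specific descriptor
        for desc in self.counts:
            if self._matches(desc, example):
                self.counts[desc][0 if positive else 1] += 1

    def forget_example(self, example, positive):
        """Called when an example is dropped from the window."""
        example = frozenset(example.items())
        for desc in list(self.counts):
            if self._matches(desc, example):
                self.counts[desc][0 if positive else 1] -= 1
                if self.counts[desc] == [0, 0]:
                    del self.counts[desc]  # no supporting evidence left

    def partition(self):
        ades, pdes, ndes = [], [], []
        for desc, (pos, neg) in self.counts.items():
            target = ades if neg == 0 else (ndes if pos == 0 else pdes)
            target.append(desc)
        return ades, pdes, ndes
```

In this simplified form a descriptor only ends up in PDES if it matches examples of both classes; the real algorithms additionally generalize ADES items over new positive examples, which is what lets shared, more general descriptions accumulate evidence.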
Experiments with the *STAGGER* concepts show that *FLORA2* and *FLORA3* perform similarly in terms of convergence and re-adjustment speed, but *FLORA4* outperforms them in noisy environments, demonstrating its ability to handle both concept drift and noise effectively. The paper concludes by discussing the trade-offs between stability and robustness in incremental learning and the effectiveness of the *FLORA* family of algorithms in various learning scenarios.
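For reference, the *STAGGER* concepts are a standard drift benchmark: three simple target concepts over the attributes size, color, and shape that replace one another abruptly over time. The generator below follows that commonly cited formulation rather than the paper's own experimental setup; the `noise` label-flipping parameter and the block length are hypothetical stand-ins for the noise levels and drift rates studied in the paper.

```python
import random

# The three STAGGER target concepts, presented one after another:
#   1) size = small AND color = red
#   2) color = green OR shape = circle
#   3) size = medium OR size = large
CONCEPTS = [
    lambda x: x["size"] == "small" and x["color"] == "red",
    lambda x: x["color"] == "green" or x["shape"] == "circle",
    lambda x: x["size"] in ("medium", "large"),
]

def stagger_stream(n_per_concept=120, noise=0.0, seed=0):
    """Yield (example, label) pairs with an abrupt concept change
    every n_per_concept steps; `noise` flips labels at the given rate."""
    rng = random.Random(seed)
    sizes = ["small", "medium", "large"]
    colors = ["red", "green", "blue"]
    shapes = ["square", "circle", "triangle"]
    for concept in CONCEPTS:
        for _ in range(n_per_concept):
            x = {"size": rng.choice(sizes),
                 "color": rng.choice(colors),
                 "shape": rng.choice(shapes)}
            y = concept(x)
            if rng.random() < noise:  # class noise: flip the label
                y = not y
            yield x, y

# Example: feed a noisy stream to any incremental learner under test.
for x, y in stagger_stream(n_per_concept=3, noise=0.1):
    print(x, y)
```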