The paper presents a revised model of human visual search behavior, known as Guided Search 2.0 (GS2), which builds upon the original Guided Search model. GS2 distinguishes between a preattentive, massively parallel stage that processes basic visual features (color, motion, depth cues) across the visual field and a subsequent limited-capacity stage that performs more complex operations (e.g., face recognition, object identification) in a restricted portion of the visual field. The deployment of limited resources is guided by the output of the parallel processes. The paper is organized into four parts: Part 1 introduces the model and its computer simulation; Part 2 reviews preattentive processing and shows how the simulation reproduces experimental results; Part 3 examines attentional deployment in conjunction and serial searches; and Part 4 discusses the model's shortcomings and unresolved issues. The paper emphasizes the importance of feature maps, bottom-up and top-down activation, and the cognitive control of attention deployment. It also highlights the model's ability to explain various visual search tasks, including those involving conjunctions of features.
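
To make the guidance mechanism concrete, the following is a minimal sketch, not the paper's actual simulation, of how an activation map might combine bottom-up feature contrast with top-down target-feature weighting to order attentional deployment. All function names, weights, and the feature encoding are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of GS2-style guidance (assumed encoding: each item is a
# vector of preattentive feature values, e.g., [redness, verticalness]).

def bottom_up_activation(features):
    """Local feature contrast: items differing from the display average stand out."""
    return np.abs(features - features.mean(axis=0)).sum(axis=1)

def top_down_activation(features, target_features):
    """Similarity to the known target features (higher = more target-like)."""
    return -np.abs(features - target_features).sum(axis=1)

def guided_search(features, target_features, w_bu=1.0, w_td=1.0, noise_sd=0.5):
    """Return item indices in decreasing order of noisy combined activation,
    i.e., the order in which the limited-capacity stage would visit them."""
    activation = (w_bu * bottom_up_activation(features)
                  + w_td * top_down_activation(features, target_features)
                  + np.random.normal(0.0, noise_sd, size=len(features)))
    return list(np.argsort(activation)[::-1])  # highest activation visited first

# Example: a red vertical target among red horizontals and green verticals
# (a conjunction search). Columns: [redness, verticalness].
display = np.array([
    [1.0, 0.0],   # red horizontal distractor
    [0.0, 1.0],   # green vertical distractor
    [1.0, 1.0],   # red vertical target (index 2)
    [1.0, 0.0],
    [0.0, 1.0],
])
print(guided_search(display, target_features=np.array([1.0, 1.0])))
```

In a conjunction search like this example, top-down weighting of both target features raises the target's activation above distractors that share only one of them, so attention tends to reach the target earlier than a strictly serial, unguided scan would predict.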