The ATLAS Collaboration recorded 3.8 fb⁻¹ of proton–proton collision data at √s = 13 TeV in 2015. The ATLAS trigger system, which reduces the event rate from a bunch-crossing rate of up to 40 MHz to a recorded rate of about 1 kHz, was upgraded during the first long shutdown (LS1) to cope with the increased luminosity and pile-up of Run 2. This paper presents the performance of the trigger system and its components in 2015 data, covering the changes to the trigger and data-acquisition systems, the trigger menu, and the performance of the individual trigger signatures.
The ATLAS detector consists of an inner detector (ID), a calorimeter system, and a muon spectrometer (MS). The trigger system comprises a hardware-based Level-1 (L1) trigger and a software-based High-Level Trigger (HLT). L1 decisions are formed by the Central Trigger Processor (CTP), which also applies preventive dead-time to avoid overlapping readout windows. The HLT processes events accepted by L1, seeding its reconstruction with region-of-interest (RoI) information provided by L1. The previously separate Level-2 (L2) and Event Filter (EF) farms of Run 1 were merged into a single HLT farm for Run 2, improving resource sharing and simplifying both hardware and software.
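To make the RoI-driven data flow concrete, the following minimal Python sketch (not ATLAS software; the names, cone size, and thresholds are illustrative assumptions) mimics an HLT decision that reconstructs energy only around the regions flagged by L1, rather than over the whole event:

```python
from dataclasses import dataclass

@dataclass
class RegionOfInterest:
    """Simplified L1 region of interest: a direction in (eta, phi)."""
    eta: float
    phi: float

def hlt_accept(event_clusters, rois, cone_size, et_threshold_gev):
    """Toy HLT decision seeded by L1 regions of interest.

    `event_clusters` is a list of (eta, phi, et) tuples standing in for the
    detector data; only clusters within `cone_size` of an RoI are summed,
    mimicking RoI-seeded reconstruction instead of full-event processing.
    """
    for roi in rois:
        et_in_roi = sum(
            et for eta, phi, et in event_clusters
            if abs(eta - roi.eta) < cone_size and abs(phi - roi.phi) < cone_size
        )
        if et_in_roi > et_threshold_gev:
            return True   # accept: the event is written out for offline analysis
    return False          # reject: the event is discarded

# Example: one L1 RoI, HLT confirms 27 GeV of clustered energy against a 25 GeV cut
clusters = [(0.42, 1.05, 18.0), (0.45, 1.10, 9.0), (-2.1, 0.3, 4.0)]
print(hlt_accept(clusters, [RegionOfInterest(eta=0.4, phi=1.1)],
                 cone_size=0.2, et_threshold_gev=25.0))
```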
The L1 trigger system was upgraded with a new topological trigger (L1Topo) and improved calorimeter and muon trigger algorithms. The L1 calorimeter trigger (L1Calo) received new FPGA-based modules to reduce trigger rates and improve performance. The L1 muon trigger was improved with additional RPC chambers to extend coverage and with new coincidence requirements using the innermost TGC chambers to suppress triggers from particles not originating from the interaction point.
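As an illustration of the kind of selection a topological trigger enables, the sketch below (hypothetical names and cut values, not the actual L1Topo firmware algorithms) applies an invariant-mass and azimuthal-angle requirement to pairs of simplified L1 objects:

```python
import math
from dataclasses import dataclass

@dataclass
class L1Object:
    """Simplified L1 trigger object: transverse energy and direction."""
    et: float   # GeV
    eta: float
    phi: float

def invariant_mass(a: L1Object, b: L1Object) -> float:
    """Invariant mass (GeV) of two massless objects from ET, eta, phi."""
    return math.sqrt(2.0 * a.et * b.et *
                     (math.cosh(a.eta - b.eta) - math.cos(a.phi - b.phi)))

def delta_phi(a: L1Object, b: L1Object) -> float:
    """Azimuthal separation wrapped into [-pi, pi]."""
    return (a.phi - b.phi + math.pi) % (2.0 * math.pi) - math.pi

def topo_pair_selection(objects, min_mass_gev, max_dphi):
    """Accept the event if any pair of L1 objects passes both topological cuts."""
    for i, a in enumerate(objects):
        for b in objects[i + 1:]:
            if (invariant_mass(a, b) > min_mass_gev
                    and abs(delta_phi(a, b)) < max_dphi):
                return True
    return False

# Example: a dimuon-like pair selection on two L1 candidates
candidates = [L1Object(et=12.0, eta=0.4, phi=1.1),
              L1Object(et=9.0, eta=-0.7, phi=-1.2)]
print(topo_pair_selection(candidates, min_mass_gev=2.0, max_dphi=2.9))
```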
The trigger menu was optimized for several luminosity ranges, with primary triggers used by physics analyses and support triggers used for efficiency and performance measurements. The HLT processing time was determined mainly by the composition of the trigger menu and by the amount of pile-up. The HLT farm CPU utilization was around 67% at an L1 input rate of 80 kHz, with most of the processing time spent on inner-detector tracking, muon-spectrometer reconstruction, and calorimeter reconstruction.
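The quoted CPU utilization can be understood with a simple steady-state estimate: busy fraction ≈ input rate × mean processing time per event ÷ number of cores. The sketch below uses purely illustrative numbers chosen only to land near the quoted 67%; the actual farm size and per-event processing time are not taken from the text above:

```python
def hlt_farm_utilization(l1_rate_hz, mean_processing_time_s, n_cores):
    """Steady-state CPU utilization of an HLT farm (simple back-of-envelope model).

    Each L1-accepted event occupies one core for its mean processing time, so
    the busy fraction is (input rate x time per event) / (number of cores).
    """
    return l1_rate_hz * mean_processing_time_s / n_cores

# Purely illustrative numbers (NOT the measured 2015 values): an 80 kHz L1
# input with 0.2 s mean processing time on 24,000 cores gives ~67% utilization.
print(hlt_farm_utilization(80_000, 0.2, 24_000))
```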
The performance of the trigger system was evaluated for the main signatures: electrons, photons, muons, jets, tau leptons, and missing transverse momentum. The triggers showed good efficiency and timing performance, benefiting from significant improvements to the tracking and reconstruction algorithms. The measured performance was compared with Monte Carlo simulation, and the results demonstrate that the upgraded trigger system copes well with the increased luminosity and pile-up of Run 2.
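Trigger efficiencies of this kind are commonly presented as turn-on curves: the fraction of offline-reconstructed objects that also fired the trigger, binned in the offline transverse momentum. The following sketch (an illustrative NumPy implementation, not an ATLAS tool) computes such a curve with binomial uncertainties:

```python
import numpy as np

def trigger_efficiency(offline_pt, passed_trigger, bin_edges):
    """Per-bin trigger efficiency versus offline transverse momentum.

    offline_pt     : offline-reconstructed pT values (GeV)
    passed_trigger : booleans, True where the trigger fired for that object
    bin_edges      : pT bin edges for the turn-on curve
    Returns (bin centers, efficiency, binomial uncertainty) per bin.
    """
    offline_pt = np.asarray(offline_pt, dtype=float)
    passed_trigger = np.asarray(passed_trigger, dtype=bool)
    edges = np.asarray(bin_edges, dtype=float)

    total, _ = np.histogram(offline_pt, bins=edges)
    passed, _ = np.histogram(offline_pt[passed_trigger], bins=edges)

    with np.errstate(divide="ignore", invalid="ignore"):
        eff = np.where(total > 0, passed / total, np.nan)
        err = np.where(total > 0, np.sqrt(eff * (1.0 - eff) / total), np.nan)

    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, eff, err

# Example with toy data: efficiency rises with offline pT
pt    = [20, 22, 24, 26, 28, 30, 32, 40, 45, 50]
fired = [False, False, True, False, True, True, True, True, True, True]
print(trigger_efficiency(pt, fired, bin_edges=[20, 30, 40, 50]))
```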