This paper presents modeling attacks on Physical Unclonable Functions (PUFs), demonstrating how several types of PUFs can be broken by constructing a computer algorithm that behaves indistinguishably from the original PUF on most challenge-response pairs (CRPs). Once built, this algorithm can impersonate the PUF and be cloned and distributed arbitrarily, breaking the security of PUF-based applications and protocols. The attacks use machine learning techniques, chiefly Logistic Regression (LR) and Evolution Strategies (ES), to model the PUF's challenge-response behavior.
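To make the attack idea concrete, the following is a minimal sketch (not the authors' code) of an LR-based modeling attack on a simulated Arbiter PUF. It assumes the standard linear additive-delay model with the usual parity feature map; the PUF instance, parameter choices, and library calls are illustrative assumptions only.

```python
# Hedged sketch: LR modeling attack on a simulated n-stage Arbiter PUF.
# Assumes the linear additive-delay model; all parameters are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 20000

def parity_features(challenges):
    """Map 0/1 challenges to the parity feature vector Phi of the linear
    delay model: Phi_i = prod_{j>=i} (1 - 2*c_j), plus a constant term."""
    signs = 1 - 2 * challenges                         # {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

# Simulated Arbiter PUF: response = sign(w . Phi(c)); the weights stand in
# for the chip's manufacturing-induced delay differences.
w_true = rng.normal(size=n_stages + 1)
challenges = rng.integers(0, 2, size=(n_crps, n_stages))
features = parity_features(challenges)
responses = (features @ w_true > 0).astype(int)

# The attack: fit a logistic-regression model to collected CRPs and measure
# how well it predicts responses to fresh, unseen challenges.
model = LogisticRegression(max_iter=2000).fit(features[:15000], responses[:15000])
accuracy = model.score(features[15000:], responses[15000:])
print(f"prediction accuracy on held-out CRPs: {accuracy:.3f}")
```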
The paper discusses three types of PUFs: Strong PUFs, Controlled PUFs, and Weak PUFs. Strong PUFs are disordered physical systems with complex challenge-response behavior and a very large number of possible challenges. Their security rests on the assumption that they cannot be physically cloned and that their responses cannot be predicted, even by an adversary with access to many CRPs. Controlled PUFs combine a Strong PUF with additional control logic that prevents direct access to the underlying responses. Weak PUFs are simpler and use only a small number of challenges to derive a secret key, which is then processed further by the embedding system.
The paper shows that modeling attacks can break Strong PUFs, including Arbiter PUFs, XOR Arbiter PUFs, Feed-Forward Arbiter PUFs, Lightweight Secure PUFs, and Ring Oscillator PUFs. These attacks use machine learning to predict the PUF's responses to arbitrary challenges with high accuracy. The number of CRPs required for successful attacks grows linearly or logarithmically with the PUF's complexity. The computation time for training the models is low-degree polynomial, except for XOR Arbiter and Lightweight Secure PUFs, where it grows super-polynomially with the number of XORs.
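As a hedged illustration of why the XOR-based variants are harder to learn, the sketch below simulates an l-XOR Arbiter PUF: l independent Arbiter chains are evaluated in parallel and their single-bit responses are XORed, which in the linear delay model corresponds to multiplying the chains' signs. The setup and parameters are assumptions for exposition, not the paper's experimental code.

```python
# Hedged sketch: response model of a simulated l-XOR Arbiter PUF.
# The decision boundary is no longer a single hyperplane, which is why the
# reported training effort grows super-polynomially in the number of XORs.
import numpy as np

rng = np.random.default_rng(1)
n_stages, n_xors = 64, 4

def parity_features(challenges):
    signs = 1 - 2 * challenges
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

# One weight vector per parallel Arbiter chain.
w = rng.normal(size=(n_xors, n_stages + 1))

def xor_arbiter_response(challenges):
    """XOR of the l individual chain responses, i.e. the product of signs."""
    phi = parity_features(challenges)
    chain_signs = np.sign(phi @ w.T)          # shape: (batch, n_xors)
    return (np.prod(chain_signs, axis=1) > 0).astype(int)

c = rng.integers(0, 2, size=(5, n_stages))
print(xor_arbiter_response(c))
```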
The paper also discusses the scalability of these attacks and the impact of errors in the collected CRPs. It shows that LR copes well with erroneous CRPs, provided that enough CRPs are used for training. The paper concludes that the security of PUFs must be re-evaluated in light of these attacks, and that new design requirements are needed to make them resilient against modeling. The results highlight the importance of assessing PUF security not only by the internal entropy of a PUF, but also by the functional form through which that entropy enters the challenge-response behavior.
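The robustness to erroneous CRPs can be illustrated with the following hedged sketch: a fraction of the training responses of a simulated Arbiter PUF is flipped to mimic noisy measurements, and an LR model trained on the noisy data is evaluated on clean, held-out CRPs. The noise level, data sizes, and setup are assumptions for demonstration, not the authors' experiment.

```python
# Hedged sketch: LR attack on a simulated Arbiter PUF with noisy training CRPs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_stages, n_train, n_test, error_rate = 64, 30000, 5000, 0.05

def parity_features(challenges):
    signs = 1 - 2 * challenges
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

w_true = rng.normal(size=n_stages + 1)
challenges = rng.integers(0, 2, size=(n_train + n_test, n_stages))
features = parity_features(challenges)
responses = (features @ w_true > 0).astype(int)

# Inject noise into the training responses only; the test set stays clean.
noisy = responses[:n_train].copy()
flip = rng.random(n_train) < error_rate
noisy[flip] ^= 1

model = LogisticRegression(max_iter=2000).fit(features[:n_train], noisy)
print(f"accuracy on clean held-out CRPs: "
      f"{model.score(features[n_train:], responses[n_train:]):.3f}")
```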