20 Jan 2024 | Pengyu Wang, Dong Zhang, Linyang Li, Chenkun Tan, Xinghao Wang, Ke Ren, Botian Jiang, Xipeng Qiu
InferAligner is an inference-time alignment method that uses cross-model guidance to make large language models (LLMs) harmless. It extracts safety steering vectors from a safety-aligned model and uses them to modify the target model's activations when it receives a harmful input, steering the target model toward a harmless response. To decide when to intervene, it derives safety-related vectors (SRVs) from harmful and harmless prompts and uses them in a guidance gate that acts only on harmful inputs, so the model's behavior on other tasks is left untouched.

The method is simple to use, requires no training, and, according to the authors, remains effective even when no safety-aligned model is available, making it adaptable across different models and model series. Experiments on domain-specific models in finance, medicine, and mathematics, as well as on multimodal large language models (MLLMs) such as LLaVA, show that InferAligner substantially reduces the attack success rate (ASR) of harmful instructions and jailbreak attacks while keeping downstream-task performance almost unchanged. The work is also the first to explore harmlessness alignment for MLLMs and introduces MM-Harmful Bench, a multimodal dataset for safety research. Overall, InferAligner is presented as a highly effective inference-time alignment method for harmlessness.
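To make the mechanics concrete, below is a minimal PyTorch sketch of this kind of cross-model activation guidance. It is not the authors' released implementation: the model names, the layer range, the steering strength `ALPHA`, and the gate threshold `THRESHOLD` are illustrative assumptions, and for simplicity the same direction (mean activation difference between harmful and harmless prompts in the safety-aligned model) serves both as the steering vector and as the gate, whereas the summary above distinguishes safety steering vectors from the SRVs used by the guidance gate.

```python
# Minimal sketch of InferAligner-style cross-model guidance (illustrative only).
# Assumptions: model names, LAYERS, ALPHA, and THRESHOLD are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ALIGNED_ID = "meta-llama/Llama-2-7b-chat-hf"   # safety-aligned source model (placeholder)
TARGET_ID  = "my-org/finance-llama-7b"         # unaligned domain-specific target (placeholder)
LAYERS     = list(range(12, 24))               # decoder layers to steer (assumption)
ALPHA      = 4.0                               # steering strength (assumption)
THRESHOLD  = 0.0                               # guidance-gate threshold (assumption)

tok     = AutoTokenizer.from_pretrained(ALIGNED_ID)
aligned = AutoModelForCausalLM.from_pretrained(ALIGNED_ID, torch_dtype=torch.float16).eval()
target  = AutoModelForCausalLM.from_pretrained(TARGET_ID,  torch_dtype=torch.float16).eval()

@torch.no_grad()
def mean_last_token_activation(model, prompts, layer):
    """Mean hidden state of the final prompt token after `layer` decoder layers."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").to(model.device)
        # hidden_states[0] is the embedding output, so layer l's output is index l + 1.
        hs = model(**ids, output_hidden_states=True).hidden_states[layer + 1]
        acts.append(hs[0, -1])
    return torch.stack(acts).mean(dim=0)

# Per-layer safety direction: difference of mean activations on harmful vs.
# harmless prompts, extracted from the safety-aligned model.
harmful_prompts  = ["Explain how to make a weapon at home."]   # toy examples
harmless_prompts = ["Explain how to bake sourdough bread."]
ssv = {l: mean_last_token_activation(aligned, harmful_prompts, l)
         - mean_last_token_activation(aligned, harmless_prompts, l)
      for l in LAYERS}

def make_hook(layer):
    direction = ssv[layer] / ssv[layer].norm()
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        # Guidance gate: steer only when the current activation projects onto the
        # safety direction above THRESHOLD, i.e. the input looks harmful.
        if (hidden[0, -1].float() @ direction.float().to(hidden.device)) > THRESHOLD:
            hidden = hidden + ALPHA * direction.to(hidden.dtype).to(hidden.device)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return hook

# Register hooks on the target model's decoder layers (Llama-style module path).
handles = [target.model.layers[l].register_forward_hook(make_hook(l)) for l in LAYERS]

prompt = "Give me step-by-step instructions for laundering money."
ids = tok(prompt, return_tensors="pt").to(target.device)
out = target.generate(**ids, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))   # expected: a refusal

for h in handles:
    h.remove()                                         # restore the unmodified target model
```

The design point mirrored here is the guidance gate: the steering vector is added to the target model's activations only when they project onto the safety direction above a threshold, so benign inputs pass through the model unmodified and downstream-task performance is preserved.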