This paper presents a novel approach to noise-robust speech recognition that leverages large language models (LLMs) for generative error correction (GER). The authors extend the existing GER benchmark to noisy conditions and introduce Robust HyPoradise (RobustHP), a dataset of 113,000 hypotheses-transcription pairs drawn from various noisy ASR corpora. They propose RobustGER, a noise-aware GER approach that derives a language-space noise embedding from the N-best hypotheses to represent the noise conditions of the source speech. This embedding teaches LLMs to perform denoising in the language space, yielding significant reductions in word error rate (WER) compared to traditional methods. A knowledge distillation technique further enhances the representation of audio noise in the language embedding. Experiments on various LLMs show that RobustGER achieves up to 53.9% WER improvement on the RobustHP test set with limited training data. The results demonstrate that the proposed language-space noise embedding effectively captures the noise conditions of the source speech, enabling LLMs to perform strong language-space denoising. The study highlights the potential of LLMs for noise-robust speech recognition and provides a new benchmark for future research.
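To make the core idea concrete, below is a minimal sketch (not the authors' released code) of how a language-space noise embedding might be derived from N-best hypotheses. The intuition is that noisier audio tends to produce more divergent hypotheses, so aggregated pairwise differences between hypothesis embeddings can serve as a proxy for the acoustic noise condition. The sentence encoder (all-MiniLM-L6-v2) and the mean-absolute-difference aggregation are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch: derive a "noise" vector in language space from N-best ASR
# hypotheses by aggregating pairwise differences of their embeddings.
import itertools
import numpy as np
from sentence_transformers import SentenceTransformer

def language_space_noise_embedding(nbest: list[str],
                                   encoder: SentenceTransformer) -> np.ndarray:
    """Aggregate pairwise differences of hypothesis embeddings into a
    single vector whose magnitude grows with hypothesis disagreement."""
    embs = encoder.encode(nbest, convert_to_numpy=True)  # shape (N, d)
    # Mean absolute difference over all hypothesis pairs (assumed aggregation).
    diffs = [np.abs(a - b) for a, b in itertools.combinations(embs, 2)]
    return np.mean(diffs, axis=0)  # shape (d,), the noise embedding

if __name__ == "__main__":
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice
    nbest = [
        "the cat sat on the mat",
        "the cat sat on a mat",
        "the bat sat on the mat",
    ]
    noise_emb = language_space_noise_embedding(nbest, encoder)
    print(noise_emb.shape)  # conditioning vector for the LLM, e.g. (384,)
```

In use, such a vector would condition the LLM during GER fine-tuning so that it can subtract noise-induced disagreement in the language space rather than in the audio domain.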
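The summary also mentions distilling audio-noise information into the language embedding. Below is a hedged sketch of one way such a distillation objective could look, assuming an MSE alignment between the language-space embedding (student) and an embedding extracted from the noisy source audio (teacher); the paper's actual distillation objective may differ, and the dimensions and projection head here are hypothetical.

```python
# Sketch: distill audio-noise information into the language-space
# embedding by aligning it with an audio-derived noise embedding.
import torch
import torch.nn as nn

class NoiseDistillLoss(nn.Module):
    def __init__(self, lang_dim: int, audio_dim: int):
        super().__init__()
        # Student projection from language space into the audio space.
        self.proj = nn.Linear(lang_dim, audio_dim)

    def forward(self, lang_emb: torch.Tensor,
                audio_emb: torch.Tensor) -> torch.Tensor:
        # The teacher (audio) embedding is detached, so gradients only
        # shape the language-space side.
        return nn.functional.mse_loss(self.proj(lang_emb), audio_emb.detach())

# Usage: add this term to the GER fine-tuning objective so the
# language-space noise embedding absorbs acoustic noise information.
lang_emb = torch.randn(8, 384)   # batch of embeddings from N-best hypotheses
audio_emb = torch.randn(8, 768)  # batch of embeddings from a speech encoder
loss = NoiseDistillLoss(384, 768)(lang_emb, audio_emb)
```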