LYT-NET: LIGHTWEIGHT YUV TRANSFORMER-BASED NETWORK FOR LOW-LIGHT IMAGE ENHANCEMENT

April 4, 2024 | Alexandru Brateanu, Raul Balmez, Adrian Avram, Ciprian Orhei
LYT-Net, or Lightweight YUV Transformer-based Network, is a novel approach for low-light image enhancement (LLIE) that leverages the YUV color space to separate luminance (Y) from chrominance (U and V). Unlike traditional Retinex-based models, LYT-Net uses transformers to capture long-range dependencies, ensuring comprehensive contextual understanding while keeping model complexity low. The proposed method employs a hybrid loss function that combines Smooth L1 loss, perceptual loss, histogram loss, a PSNR-based loss, color loss, and MS-SSIM loss, which significantly improves training efficiency and performance. LYT-Net achieves state-of-the-art results on LLIE datasets while being more compact and computationally efficient than other methods. Its effectiveness is demonstrated through both quantitative and qualitative evaluations, showing improved visibility, contrast, and detail in low-light images. The source code and pre-trained models are available on GitHub.
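The abstract names the two main ingredients, a YUV decomposition and a multi-term hybrid loss, without giving formulas. The sketch below is a minimal, illustrative reading in PyTorch and is not the authors' implementation: rgb_to_yuv applies the standard BT.601 conversion to split luminance from chrominance, and hybrid_loss combines only three of the listed terms (Smooth L1, a PSNR-derived term, and a color term) with placeholder weights; the perceptual, histogram, and MS-SSIM terms are omitted here.

```python
# Illustrative sketch only; weights, normalization, and omitted terms are assumptions.
import torch
import torch.nn.functional as F

def rgb_to_yuv(img: torch.Tensor) -> torch.Tensor:
    """Convert an (N, 3, H, W) RGB tensor in [0, 1] to YUV using BT.601 coefficients."""
    r, g, b = img[:, 0:1], img[:, 1:2], img[:, 2:3]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance (Y)
    u = -0.14713 * r - 0.28886 * g + 0.436 * b     # chrominance (U)
    v = 0.615 * r - 0.51499 * g - 0.10001 * b      # chrominance (V)
    return torch.cat([y, u, v], dim=1)

def hybrid_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Simplified hybrid loss: Smooth L1 + PSNR-derived term + color term."""
    l_smooth = F.smooth_l1_loss(pred, target)
    mse = F.mse_loss(pred, target)
    psnr = 10.0 * torch.log10(1.0 / (mse + 1e-8))          # assumes inputs in [0, 1]
    l_psnr = (40.0 - psnr) / 40.0                          # illustrative normalization
    # Color term: match per-channel global means between prediction and target.
    l_color = F.l1_loss(pred.mean(dim=(2, 3)), target.mean(dim=(2, 3)))
    # Perceptual, histogram, and MS-SSIM terms from the paper are omitted in this sketch.
    return l_smooth + 0.1 * l_psnr + 0.5 * l_color

# Usage on dummy data:
pred = torch.rand(1, 3, 64, 64)
target = torch.rand(1, 3, 64, 64)
print(rgb_to_yuv(pred).shape, hybrid_loss(pred, target).item())
```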