This paper proposes a novel bi-directional wavelet guidance (BWG) mechanism for learning generalized segmentation of foggy scenes under domain generalization. The method aims to enhance content representation, decouple urban-style and fog-style variations, and thereby improve segmentation performance in foggy conditions. BWG applies the Haar wavelet transform to separate low-frequency content components from high-frequency style components: the low-frequency components are concentrated in a content enhancement self-attention, while the high-frequency components are shifted into style and fog self-attention branches for de-correlation. BWG is integrated into existing mask-level Transformer segmentation pipelines in a learnable fashion; the implementation uses a Swin-base Transformer as the image encoder and Mask2Former as the image decoder.

The method is evaluated on four foggy-scene segmentation datasets (ACDC-fog, Foggy Zurich, Foggy Driving, and Foggy Cityscapes) and compared with state-of-the-art directly-supervised, curriculum domain adaptation, and domain generalization methods. It significantly outperforms these baselines, achieving up to 11.8% mIoU improvement on Foggy Zurich and up to 16.7% mIoU improvement on ACDC-fog. Because it handles varying urban styles and fog styles without requiring target-domain data, the approach is more practical and general than prior curriculum domain adaptation methods, and analysis shows that the BWG mechanism effectively separates content, urban-style, and fog-style components.
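The content/style separation above rests on a single-level 2D Haar wavelet transform, which splits a feature map into one low-frequency (LL) band and three high-frequency detail bands (LH, HL, HH). The sketch below is a minimal NumPy illustration of that decomposition, not the paper's implementation; the function name and the plain per-channel formulation are assumptions for clarity.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar wavelet transform of a feature map x of shape (H, W),
    with H and W even. Returns the low-frequency band LL (content) and the
    high-frequency bands (LH, HL, HH) that carry style/detail information."""
    a = x[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-frequency average: smooth content
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, (lh, hl, hh)

# A constant (styleless) input puts all energy in the LL band:
x = np.ones((4, 4))
ll, (lh, hl, hh) = haar_dwt2(x)
```

In the paper's pipeline, the LL band would feed the content enhancement self-attention while the detail bands feed the style/fog self-attention; here the transform itself is the only part shown.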
The method is also tested on different fog densities and shows robust performance across fog conditions.