This paper presents a semantic segmentation network for 3D brain tumor segmentation from multimodal MRI, which won first place in the BraTS 2018 challenge. The approach uses an encoder-decoder architecture with an additional variational autoencoder (VAE) branch that regularizes the shared encoder and improves segmentation accuracy. The encoder extracts deep image features using ResNet-like blocks with group normalization (sketched below), while the decoder reconstructs the dense segmentation mask. The VAE branch reconstructs the input image from the encoder endpoint, which regularizes the encoder and improves performance when training data are limited. The loss function combines a soft Dice loss for segmentation with an L2 reconstruction loss and a KL divergence term for the VAE branch (also sketched below).

The network was trained on the BraTS 2018 dataset, which includes 285 training cases, each with four 3D MRI modalities. A crop size of 160×192×128 and a batch size of 1 were used to fit within GPU memory limits. The model was implemented in TensorFlow and trained on an NVIDIA Tesla V100 GPU, with a training time of about 9 minutes per epoch.

Performance was evaluated using Dice coefficients, sensitivity, specificity, and Hausdorff distances on three tumor subregions: enhancing tumor, whole tumor, and tumor core. The model achieved high accuracy on both the validation and testing sets, and a single submission won the challenge. Test-time augmentation and model ensembling were used to further improve performance. The model's success demonstrates the effectiveness of deep learning for automated brain tumor segmentation.
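
The encoder building block described above (group normalization and 3D convolutions wrapped in a residual connection) can be sketched roughly as follows. The original implementation was in TensorFlow; this sketch uses PyTorch for brevity, and the channel width and group count are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    """Residual block in the GroupNorm -> ReLU -> 3x3x3 Conv style.

    The group count and channel width are illustrative; the paper's exact
    block layout may differ in detail.
    """

    def __init__(self, channels, groups=8):
        super().__init__()
        self.gn1 = nn.GroupNorm(groups, channels)
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.gn2 = nn.GroupNorm(groups, channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        y = self.conv1(torch.relu(self.gn1(x)))
        y = self.conv2(torch.relu(self.gn2(y)))
        return x + y  # identity shortcut


# Example: one block applied to a small feature map. The real input crop is
# 160x192x128 voxels with the four MRI modalities stacked as channels.
features = torch.randn(1, 32, 16, 16, 16)
print(ResBlock(32)(features).shape)  # torch.Size([1, 32, 16, 16, 16])
```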
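
A minimal version of the composite training objective (soft Dice for the segmentation output, plus an L2 image-reconstruction term and a KL term for the VAE branch) could look like the following. The 0.1 weights on the auxiliary terms and the exact form of the Dice denominator are assumptions for illustration, not values stated in this summary.

```python
import torch


def soft_dice_loss(pred, target, eps=1e-8):
    """Soft Dice loss over per-voxel probabilities.

    pred and target have shape (N, C, D, H, W); pred lies in [0, 1].
    """
    dims = (2, 3, 4)
    intersection = torch.sum(pred * target, dim=dims)
    denom = torch.sum(pred * pred, dim=dims) + torch.sum(target * target, dim=dims)
    return 1.0 - torch.mean(2.0 * intersection / (denom + eps))


def total_loss(seg_pred, seg_target, recon, image, mu, logvar,
               w_l2=0.1, w_kl=0.1):  # auxiliary weights are illustrative
    """Dice segmentation loss + VAE L2 reconstruction + KL regularizer."""
    l2 = torch.mean((recon - image) ** 2)
    # Standard Gaussian KL divergence for the VAE latent distribution.
    kl = torch.mean(mu ** 2 + torch.exp(logvar) - logvar - 1.0)
    return soft_dice_loss(seg_pred, seg_target) + w_l2 * l2 + w_kl * kl
```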
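
The summary mentions test-time augmentation but not the specific transforms; one common choice for 3D volumes is averaging predictions over spatial-axis flips, sketched here as an assumption rather than the authors' exact procedure. Ensembling would then average such outputs across independently trained models.

```python
import itertools

import torch


def tta_predict(model, image):
    """Average sigmoid outputs over all combinations of spatial-axis flips.

    `model` maps a (1, C, D, H, W) tensor to per-voxel logits of the same
    spatial shape. Flip averaging is a generic form of test-time
    augmentation; the augmentations actually used by the authors are not
    detailed in this summary.
    """
    spatial_axes = (2, 3, 4)
    preds = []
    with torch.no_grad():
        for k in range(len(spatial_axes) + 1):
            for axes in itertools.combinations(spatial_axes, k):
                flipped = torch.flip(image, dims=axes) if axes else image
                out = torch.sigmoid(model(flipped))
                preds.append(torch.flip(out, dims=axes) if axes else out)
    return torch.stack(preds).mean(dim=0)
```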