This paper introduces the use of rectified linear units (ReLU) as the classification function in deep neural networks (DNNs), a departure from the conventional softmax function. The authors propose taking the activation of the penultimate layer, multiplying it by weight parameters to produce raw class scores, and thresholding those scores with ReLU; class predictions are then made with the argmax function. The study compares the performance of DNN-ReLU models with DNN-Softmax models on the MNIST, Fashion-MNIST, and Wisconsin Diagnostic Breast Cancer (WDBC) datasets, using the Adam optimization algorithm to learn the weight parameters. The results show that while DNN-ReLU models generally perform comparably to DNN-Softmax models, they may converge more slowly due to the dying-neurons problem. Future work could involve further investigation of the gradients during backpropagation and exploration of ReLU variants.
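The ReLU-as-classifier idea can be illustrated with a minimal sketch. The code below is an assumed PyTorch-style implementation, not the authors' exact configuration: the layer sizes, cross-entropy loss, and learning rate are illustrative choices, and the random tensors merely stand in for MNIST-like batches. It shows the core mechanism described above: the penultimate activation is multiplied by weight parameters, the resulting raw scores are thresholded with ReLU, Adam updates the weights, and argmax produces the class prediction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReLUClassifierNet(nn.Module):
    """Feed-forward network whose final class scores are thresholded with ReLU
    before argmax, instead of being normalized with softmax."""

    def __init__(self, in_dim=784, hidden_dim=512, num_classes=10):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden_dim)   # produces the penultimate activation
        self.out = nn.Linear(hidden_dim, num_classes)  # weight parameters for raw class scores

    def forward(self, x):
        h = F.relu(self.hidden(x))        # penultimate-layer activation
        scores = F.relu(self.out(h))      # ReLU-thresholded raw class scores (in place of softmax)
        return scores


model = ReLUClassifierNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as used in the paper
criterion = nn.CrossEntropyLoss()  # illustrative loss choice, not necessarily the authors'

# One illustrative training step on random data standing in for a MNIST batch.
x = torch.randn(64, 784)
y = torch.randint(0, 10, (64,))

scores = model(x)
loss = criterion(scores, y)
optimizer.zero_grad()
loss.backward()   # zeroed (negative) scores pass no gradient, i.e. the dying-neurons issue
optimizer.step()

# Class prediction: argmax over the ReLU-thresholded scores.
predictions = scores.argmax(dim=1)
```

Note that because ReLU clamps negative scores to zero, any class whose raw score is negative contributes no gradient for that example, which is one way to see why the DNN-ReLU models can converge more slowly than their softmax counterparts.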