This paper proposes a transferable neural network (TransNet) for solving partial differential equations (PDEs). The key idea is to construct a pre-trained neural feature space without using any PDE information, so that the same feature space can be transferred to PDEs with different domains and boundary conditions. The method focuses on shallow neural networks, which have a single hidden layer and are well suited to the low-dimensional PDEs common in science and engineering. Each hidden neuron is treated as a basis function, and the parameters determining the neuron's location are separated from those determining the shape of its activation function. A simple yet effective approach is developed to generate neurons uniformly distributed in the unit ball, and the uniformity of the resulting distribution is rigorously proven. The neurons' shape parameters are then tuned using auxiliary functions drawn as realizations of a Gaussian process. The resulting feature space serves as the pre-determined feature space of a random feature model, and the output-layer weights are obtained with existing least squares solvers. Extensive numerical experiments show that the proposed method achieves significantly improved transferability and superior accuracy, with mean squared errors several orders of magnitude smaller than those of state-of-the-art methods. Because pre-training requires no information about the target PDEs, the method is more flexible and efficient for solving a wide class of PDEs.
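To make the construction concrete, here is a minimal sketch of a TransNet-style random feature solve for a toy 1D Poisson problem. The sampling scheme for the neuron locations, the tanh activation, the single shared shape parameter `gamma`, and all function names are illustrative assumptions for this sketch, not the paper's exact formulation; in particular, the paper tunes the shape parameters via Gaussian-process auxiliary functions, whereas `gamma` is simply fixed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neurons(num_neurons, dim):
    """Sample hidden neurons whose cut hyperplanes a_i . x + r_i = 0 are
    (approximately) uniformly distributed over the unit ball: directions a_i
    uniform on the unit sphere, offsets r_i uniform in [-1, 1].
    (Illustrative assumption; the paper gives its own rigorous construction.)"""
    a = rng.normal(size=(num_neurons, dim))
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    r = rng.uniform(-1.0, 1.0, size=num_neurons)
    return a, r

def features(x, a, r, gamma):
    """Feature matrix Phi with Phi[j, i] = tanh(gamma * (a_i . x_j + r_i))."""
    return np.tanh(gamma * (x @ a.T + r))

# Toy 1D Poisson problem: -u''(x) = f(x) on (-1, 1), u(-1) = u(1) = 0,
# with manufactured solution u(x) = sin(pi x), so f(x) = pi^2 sin(pi x).
dim, num_neurons, gamma = 1, 200, 2.0   # gamma fixed here; normally tuned
a, r = sample_neurons(num_neurons, dim)

x_in = np.linspace(-1.0, 1.0, 400)[1:-1].reshape(-1, 1)   # interior collocation points
x_bd = np.array([[-1.0], [1.0]])                           # boundary points
f = np.pi**2 * np.sin(np.pi * x_in[:, 0])

# Second x-derivative of tanh(gamma * (a x + r)) in 1D:
#   phi'' = -2 (gamma a)^2 tanh(z) (1 - tanh(z)^2),  z = gamma (a x + r).
z = gamma * (x_in @ a.T + r)
phi_xx = -2.0 * (gamma * a[:, 0])**2 * np.tanh(z) * (1.0 - np.tanh(z)**2)

# Stack PDE-residual rows and boundary rows, then solve for the
# output-layer weights with an ordinary least squares solver.
A = np.vstack([-phi_xx, features(x_bd, a, r, gamma)])
b = np.concatenate([f, np.zeros(2)])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

u_pred = features(x_in, a, r, gamma) @ w
print("max abs error:", np.max(np.abs(u_pred - np.sin(np.pi * x_in[:, 0]))))
```

Note that only the last step (assembling the residual rows and solving the least squares system) uses the PDE; the neurons themselves are generated without any PDE information, which is what allows the same feature space to be reused across different equations, domains, and boundary conditions.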