This paper introduces Neural Architecture Search (NAS), a method that uses a recurrent neural network (RNN) to generate and optimize neural network architectures. The RNN, acting as a controller, generates model descriptions and is trained with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. The method is evaluated on two datasets: CIFAR-10 for image classification and Penn Treebank for language modeling. On CIFAR-10, NAS finds a novel ConvNet architecture that outperforms human-designed models in test set accuracy, achieving a 3.65% error rate. On Penn Treebank, NAS designs a novel recurrent cell that outperforms existing RNN and LSTM cells, achieving a test set perplexity of 62.4. The paper also discusses related work, including hyperparameter optimization, neuro-evolution, and program synthesis, and provides experimental details and results.
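To make the controller-plus-REINFORCE loop concrete, here is a minimal sketch of the idea under simplifying assumptions: the controller is reduced to independent per-step softmax logits rather than the paper's RNN, the search space (`CHOICES`) is a small illustrative list of filter sizes and channel counts, and `proxy_reward` is a hypothetical stand-in for the validation accuracy that the paper obtains by actually training each sampled child network. The policy-gradient update with a moving-average baseline mirrors the training signal described in the summary above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative architecture decision space: one choice per controller step.
# These option lists are NOT the paper's exact search space.
CHOICES = [
    [1, 3, 5, 7],       # filter height
    [1, 3, 5, 7],       # filter width
    [24, 36, 48, 64],   # number of filters
]

# Simplification: independent softmax logits per step instead of an RNN controller.
logits = [np.zeros(len(opts)) for opts in CHOICES]

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def sample_architecture():
    """Sample one architecture; return (chosen indices, per-step probabilities)."""
    arch, probs = [], []
    for step_logits in logits:
        p = softmax(step_logits)
        idx = rng.choice(len(p), p=p)
        arch.append(idx)
        probs.append(p)
    return arch, probs

def proxy_reward(arch):
    """Hypothetical stand-in for the validation accuracy of the trained child model.
    In the paper, each sampled architecture is trained and evaluated instead."""
    target = [2, 2, 3]  # pretend 5x5 filters with 64 channels score best
    return 1.0 - 0.2 * sum(a != t for a, t in zip(arch, target))

baseline, lr, decay = 0.0, 0.5, 0.9
for step in range(200):
    arch, probs = sample_architecture()
    reward = proxy_reward(arch)
    advantage = reward - baseline              # baseline reduces gradient variance
    baseline = decay * baseline + (1 - decay) * reward
    # REINFORCE: for a softmax policy, grad of log pi(a) is one_hot(a) - p.
    for step_logits, p, a in zip(logits, probs, arch):
        grad_log_p = -p
        grad_log_p[a] += 1.0
        step_logits += lr * advantage * grad_log_p

best = [opts[np.argmax(l)] for opts, l in zip(CHOICES, logits)]
print("most likely architecture:", best)
```

Running the sketch drives the logits toward the choices that the toy reward favors; in the actual method the same gradient signal comes from training and validating each sampled network, which is why the search is far more expensive than this illustration suggests.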