This paper presents a method for Hamiltonian simulation based on qubitization, which enables efficient approximation of the time-evolution operator $ e^{-i\hat{H}t} $ to error $ \epsilon $. The Hamiltonian $ \hat{H} $ is defined as the projection of a unitary oracle $ \hat{U} $ onto a state $ |G\rangle $ prepared by a second oracle. The algorithm achieves query complexity $ \mathcal{O}(t + \log(1/\epsilon)) $ to both oracles, optimal with respect to all parameters, while using at most two additional ancilla qubits. This approach subsumes prior methods for simulating $d$-sparse Hamiltonians and linear combinations of unitaries, yielding significant improvements in space and gate complexity.

The key technique, qubitization, embeds any Hamiltonian $ \hat{H} $ in an invariant $\mathrm{SU}(2)$ subspace, enabling efficient computation of operator functions of $ \hat{H} $, including $ e^{-i\hat{H}t} $. The method applies to a wide range of Hamiltonian encodings, including density matrices, and provides a systematic framework for quantum signal processing. Because the query complexity is optimal and the overhead is low, the algorithm is well suited to practical applications. The paper also applies qubitization to several quantum algorithms and gives a detailed analysis of the quantum signal processor, a general framework for implementing operator functions of Hermitian matrices. The results demonstrate that qubitization achieves quadratic speed-ups in precision and improved error scaling compared to existing methods. The paper concludes with a discussion of the broader implications for quantum computing and the potential for further improvements.
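To make the central construction concrete, below is a minimal NumPy sketch (not the paper's circuit-level implementation) of qubitization for a toy two-level Hamiltonian given as a linear combination of unitaries. It builds the block encoding $ \hat{H}/\lambda = (\langle G|\otimes \hat{I})\,\mathrm{SELECT}\,(|G\rangle\otimes \hat{I}) $ explicitly as matrices and checks that the qubitized walk operator has eigenphases $ \pm\arccos(E_k/\lambda) $, the two-dimensional $\mathrm{SU}(2)$ rotation structure the paper exploits. The Hamiltonian, coefficients, and variable names are illustrative choices, not taken from the paper.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Toy Hamiltonian as a linear combination of Hermitian unitaries:
# H = 0.6 * X + 0.4 * Z   (illustrative coefficients)
alphas = np.array([0.6, 0.4])
terms = [X, Z]
lam = alphas.sum()                      # normalization lambda = sum_k alpha_k
H = sum(a * U for a, U in zip(alphas, terms))

# PREPARE state |G> = sum_k sqrt(alpha_k / lambda) |k> on the ancilla register
G = np.sqrt(alphas / lam)

# SELECT = sum_k |k><k| (x) U_k  (Hermitian here, since X and Z are Hermitian)
select = np.zeros((len(alphas) * 2, len(alphas) * 2), dtype=complex)
for k, U in enumerate(terms):
    P = np.zeros((len(alphas), len(alphas)))
    P[k, k] = 1.0
    select += np.kron(P, U)

# Block-encoding property: (<G| (x) I) SELECT (|G> (x) I) = H / lambda
B = np.kron(G.reshape(-1, 1), I2)       # |G> (x) I, shape (4, 2)
assert np.allclose(B.conj().T @ select @ B, H / lam)

# Qubitized walk operator: reflect about |G> on the ancilla, then apply SELECT
reflect = np.kron(2 * np.outer(G, G) - np.eye(len(alphas)), I2)
W = reflect @ select

# In each invariant 2D subspace, W acts as an SU(2) rotation with eigenphases
# +/- arccos(E_k / lambda), where E_k are the eigenvalues of H.
eigphases = np.sort(np.angle(np.linalg.eigvals(W)))
expected = np.sort(np.concatenate(
    [[np.arccos(E / lam), -np.arccos(E / lam)] for E in np.linalg.eigvalsh(H)]))
assert np.allclose(eigphases, expected)
print("walk-operator eigenphases:", np.round(eigphases, 4))
```

Once this rotation structure is exposed, quantum signal processing can apply a sequence of controlled phase rotations to transform each eigenphase $ \arccos(E_k/\lambda) $ into (an approximation of) the target function such as $ e^{-iE_k t} $, which is how the paper arrives at the stated query complexity.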