This paper introduces a new stochastic approximation (SA) method, the randomized stochastic gradient (RSG) method, for solving a class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. The authors establish the complexity of the RSG method for computing an approximate stationary point of a nonlinear programming problem and show that it achieves a nearly optimal rate of convergence when the problem is convex. They also propose a two-phase randomized stochastic gradient (2-RSG) method, which adds a post-optimization phase that evaluates a short list of candidate solutions generated by independent runs of the RSG method, thereby improving the large-deviation properties of the algorithm. Both methods are further specialized to simulation-based optimization problems in which only stochastic zeroth-order information is available. The paper provides theoretical guarantees on the convergence and complexity of these methods, demonstrating their effectiveness for nonconvex SP problems.
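
For concreteness, here is a minimal sketch of the RSG idea in Python: run ordinary stochastic gradient steps, then output an iterate selected at random rather than the final iterate. The constant step size, the uniform selection rule (a simplification of the paper's randomized output rule, which in general weights iterates by their step sizes), and the toy objective and gradient oracle are all illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def rsg(stoch_grad, x0, n_iters, step_size, rng):
    """Sketch of a randomized stochastic gradient (RSG) loop.

    Runs plain stochastic gradient steps and returns an iterate chosen
    uniformly at random; this randomized output is what yields guarantees
    on approximate stationarity for nonconvex objectives. `stoch_grad(x)`
    is an assumed unbiased stochastic gradient oracle.
    """
    x = np.asarray(x0, dtype=float)
    iterates = []
    for _ in range(n_iters):
        iterates.append(x.copy())
        x = x - step_size * stoch_grad(x)
    # Output a randomly selected iterate rather than the last one.
    return iterates[rng.integers(len(iterates))]

# Illustrative use: noisy gradients of a simple nonconvex function f(x) = sin(x).
rng = np.random.default_rng(0)
grad = lambda x: np.cos(x) + 0.1 * rng.standard_normal(x.shape)
x_out = rsg(grad, x0=np.array([2.0]), n_iters=1000, step_size=0.05, rng=rng)
```

A two-phase variant in the spirit of 2-RSG would call `rsg` several times independently and keep the candidate whose (estimated) gradient norm is smallest in a post-optimization phase, which is how the large-deviation behavior is improved.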