This document presents research on using recurrent neural networks (RNNs), specifically gated recurrent units (GRUs) and long short-term memory (LSTM) units, for English-Hindi machine translation. The experiments demonstrate that these architectures significantly improve translation quality over traditional rule-based and statistical methods, with bi-directional LSTMs combined with an attention mechanism proving especially effective. The findings point to promising directions for future work on morphologically rich and low-resource languages.
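The document does not include an implementation, but the core of the attention mechanism it highlights can be sketched as follows. This is a minimal, illustrative NumPy example (not the paper's code) of dot-product attention over the concatenated states of a bi-directional encoder; all array sizes and variable names are assumptions chosen for the toy example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(decoder_state, encoder_states):
    """Dot-product attention: score each encoder state against the
    current decoder state, normalise the scores with softmax, and
    return the weighted sum (context vector) plus the weights."""
    scores = encoder_states @ decoder_state   # (T,)
    weights = softmax(scores)                 # (T,) attention distribution
    context = weights @ encoder_states        # (H,) context vector
    return context, weights

# Toy setup: 4 source positions, hidden size 6. A bi-directional
# encoder concatenates forward and backward states at each position.
rng = np.random.default_rng(0)
T, H = 4, 6
fwd = rng.normal(size=(T, H // 2))   # forward LSTM states (illustrative)
bwd = rng.normal(size=(T, H // 2))   # backward LSTM states (illustrative)
encoder_states = np.concatenate([fwd, bwd], axis=1)  # (T, H)

decoder_state = rng.normal(size=H)
context, weights = attention_context(decoder_state, encoder_states)
assert np.isclose(weights.sum(), 1.0)  # weights form a distribution
assert context.shape == (H,)
```

At each decoding step the decoder attends over all source positions, which lets the model focus on the relevant source words rather than compressing the whole sentence into one fixed vector.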