This paper investigates hybrid approaches to the grammatical error correction (GEC) problem. In our approach, we develop statistical machine translation (SMT) and neural machine translation (NMT) models, and build a series of hybrid systems that incorporate them. The SMT method involves preprocessing annotated learner corpora, constructing a translation model, training a language model, and finally generating corrections with a decoder. For the NMT method, the annotated sentences are converted into parallel sentence pairs to train the models. We use re-scoring, voting, and pipeline techniques to integrate the SMT and NMT models. Experiments on public test sets indicate that our hybrid systems effectively exploit the strengths of both SMT and NMT models and achieve the best overall performance. Finally, we discuss the results and address the challenges facing the GEC field.
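As a rough illustration of the voting technique mentioned above, the sketch below combines candidate corrections from several systems by sentence-level majority vote. The system names and the sentence-level granularity are assumptions for the example; a voting hybrid could equally operate at the level of individual edits.

```python
from collections import Counter

def vote(candidates):
    """Return the correction proposed by the largest number of systems.

    `candidates` maps a system name to its corrected sentence.
    Ties are broken by the order in which candidates first appear,
    so listing the most trusted system first gives it priority.
    """
    counts = Counter(candidates.values())
    best, _ = counts.most_common(1)[0]
    return best

# Hypothetical outputs from one SMT and two NMT systems:
corrected = vote({
    "smt":  "He goes home every day .",
    "nmt1": "He goes home every day .",
    "nmt2": "He go home every day .",
})
# → "He goes home every day ."
```

In practice, re-scoring (ranking one system's n-best list with another model's scores) and pipelining (feeding one system's output into the next) offer alternative integration points with different precision/recall trade-offs.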