The introduction of programming education in K-12 schools to promote computational thinking has attracted considerable attention from scholars and educators. Debugging is a central skill for students learning to program, but it is also a considerable challenge. K-12 learners often lack confidence in debugging because they receive little effective learning feedback and have weak programming fundamentals (e.g., correct syntax usage). With the development of technology, large language models (LLMs) offer new opportunities for training novice programmers in debugging. We propose a method for incorporating an LLM into debugging training and evaluated its effectiveness in a two-group quasi-experiment with 80 K-12 students. The results showed that, through dialogic interaction with the model, students solved programming problems more effectively and improved their ability to solve problems in real-world applications. Importantly, this dialogic interaction increased students' confidence in their programming abilities, helping them maintain motivation for learning to program.