Deep Neural Networks (DNNs) are widely used to build artificial intelligence (AI) applications because of their powerful feature-learning capabilities. However, security concerns are emerging: hiding malware inside a model is one attack surface, and common model formats also suffer from insecure deserialization vulnerabilities. Combining these two weaknesses yields a complete attack flow. The main challenges of such an attack are the embedding rate, the accuracy degradation, and the extraction effort. This thesis therefore provides injecting rules for embedding malware in classification neural network models and proposes a malware injection method based on these rules. The resulting methodology achieves a high embedding rate, low accuracy degradation, and low extraction effort.
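To make the embedding idea concrete, the following is a minimal illustrative sketch (not the thesis's actual injecting rules) of hiding payload bytes in the least significant mantissa byte of float32 weights, which keeps the numerical perturbation per weight very small. The function names `embed_payload` and `extract_payload` are hypothetical, and a little-endian platform is assumed.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the least significant byte of each float32 weight.

    Illustrative sketch only; assumes a little-endian platform, where byte 0
    of each float32 is the low mantissa byte (relative error < ~3e-5).
    """
    assert weights.dtype == np.float32
    flat = weights.ravel().copy()                 # work on a contiguous copy
    assert len(payload) <= flat.size, "payload too large for this tensor"
    raw = flat.view(np.uint8)                     # 4 raw bytes per float32
    # Overwrite the lowest byte (offset 0 of every 4-byte group) with payload.
    raw[0:4 * len(payload):4] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract_payload(weights: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes previously embedded by embed_payload."""
    raw = np.ascontiguousarray(weights, dtype=np.float32).ravel().view(np.uint8)
    return bytes(raw[0:4 * n_bytes:4])
```

The accuracy cost comes from each modified weight shifting by at most one low-order mantissa byte, which is why such embeddings can leave classification accuracy nearly unchanged.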