A dynamically reconfigurable accelerator for Convolutional Neural Networks (CNNs) is proposed to address the high power consumption, constrained hardware resources, and low flexibility encountered when deploying CNNs on embedded edge computing devices. First, a general dynamically reconfigurable accelerator architecture is designed, exploiting the low‐power, parallel‐computing, and resource‐reconfiguration characteristics of Field‐Programmable Gate Arrays (FPGAs). Second, the operation modules are designed as reconfigurable modules with parallel‐computing optimization, and the corresponding configuration bitstream files are generated. The accelerator is then deployed on a heterogeneous ARM‐FPGA platform through software‐hardware co‐design. Finally, dynamic configuration of the accelerator is accomplished by having the host computer control and load different bitstream files, accelerating the forward‐inference computation of CNNs. Experimental results show that, compared with representative existing accelerator designs, the proposed approach significantly reduces both resource and power consumption, demonstrating practical application value.
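The host‐side control flow described above can be sketched as follows. This is a minimal, hypothetical illustration of selecting and loading a configuration bitstream per network stage; the names (`BITSTREAMS`, `AcceleratorController`) are not from the paper, and on a real ARM‐FPGA platform the load step would invoke the board's FPGA configuration interface (for example, PYNQ's `Overlay` class on Zynq devices) rather than the placeholder used here.

```python
# Hypothetical host-side ("upper computer") dispatcher: maps each layer
# kind to an illustrative bitstream file and reloads the FPGA only when
# the required configuration changes.

BITSTREAMS = {
    "conv3x3": "conv3x3_pe.bit",   # 3x3 convolution processing elements
    "conv1x1": "conv1x1_pe.bit",   # pointwise convolution variant
    "fc":      "fc_pe.bit",        # fully connected layer variant
}

class AcceleratorController:
    """Tracks the currently configured bitstream; skips redundant reloads."""

    def __init__(self):
        self.current = None      # bitstream currently programmed, if any
        self.load_count = 0      # number of actual reconfigurations

    def configure(self, layer_kind: str) -> str:
        bit = BITSTREAMS[layer_kind]
        if bit != self.current:  # reconfigure only on a change of module
            self._load(bit)
            self.current = bit
        return self.current

    def _load(self, bitstream: str) -> None:
        # Placeholder for the real FPGA programming call
        # (e.g. pynq.Overlay(bitstream) on a Zynq board).
        self.load_count += 1

ctrl = AcceleratorController()
schedule = ["conv3x3", "conv3x3", "conv1x1", "fc"]
loads = [ctrl.configure(k) for k in schedule]
# Consecutive layers of the same kind reuse the loaded configuration,
# so only three reconfigurations occur for the four-layer schedule.
```

Avoiding redundant reloads matters in practice because FPGA reconfiguration latency is large relative to a single layer's compute time, so schedules are typically grouped by module type.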