For practical applications of semantic segmentation, such as autonomous driving, an algorithm should process high-resolution images quickly and with high accuracy. This is a challenging goal. To design such an algorithm, we must resolve the fusion problem and the inherent contradiction between high-resolution spatial localization information and low-resolution semantic classification information in semantic segmentation. To address these problems, we propose the multiscale convolution based repeat fusion network (MC-RFNet). For the problem of missing multiscale information and insufficient receptive fields, we propose a separable multiscale convolutional module, which gives every layer of the network the ability to capture multiscale information. Since shallow features alone make it difficult to directly recover a high-resolution feature map with rich semantics, we design a repeat fusion module for high- and low-resolution features. On the one hand, this reduces the computing resources consumed by operating directly on high-resolution feature maps; on the other hand, the high-resolution maps gradually acquire deep semantic information through repeated fusions and convolutions.
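The two modules described above can be sketched as follows. The text does not give the exact layer configuration of MC-RFNet, so the branch count, dilation rates, and fusion operations below are illustrative assumptions: the separable multiscale module is sketched as parallel depthwise convolutions with different dilation rates fused by a pointwise convolution, and the repeat fusion module as a bidirectional exchange between a high-resolution branch and a low-resolution branch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SeparableMultiscaleConv(nn.Module):
    """Illustrative sketch (not the exact MC-RFNet layer): parallel depthwise
    3x3 convolutions with different dilation rates capture multiscale context;
    a 1x1 pointwise convolution fuses the branches. The dilation rates
    (1, 2, 4) are assumptions, not values from the paper."""

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d,
                      dilation=d, groups=channels, bias=False)
            for d in dilations
        ])
        self.pointwise = nn.Conv2d(channels * len(dilations), channels,
                                   kernel_size=1)

    def forward(self, x):
        # Concatenate the multiscale branches, then fuse with a 1x1 conv.
        return self.pointwise(torch.cat([b(x) for b in self.branches], dim=1))


class RepeatFusion(nn.Module):
    """Illustrative sketch of one high/low-resolution fusion step: the
    low-resolution (semantic) branch is upsampled and added into the
    high-resolution branch, while a strided copy of the high-resolution
    branch is added back into the low-resolution one. Applying this step
    repeatedly lets the high-resolution map gradually absorb deep semantic
    information without running the whole network at full resolution."""

    def __init__(self, channels):
        super().__init__()
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.refine_hi = nn.Conv2d(channels, channels, 3, padding=1)
        self.refine_lo = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, hi, lo):
        lo_up = F.interpolate(lo, size=hi.shape[-2:], mode="bilinear",
                              align_corners=False)
        hi_out = self.refine_hi(hi + lo_up)          # high-res gains semantics
        lo_out = self.refine_lo(lo + self.down(hi))  # low-res gains detail
        return hi_out, lo_out
```

In this sketch, stacking `RepeatFusion` steps (with `SeparableMultiscaleConv` blocks between them) is one plausible reading of "repeated fusions and convolutions": the expensive deep computation stays on the low-resolution branch, while the high-resolution branch is only refined by cheap fusion steps.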