Each type of sensor is designed for a specific environment and scope of use. Fusing images of the same target or scene acquired by different sensors can address problems such as the insufficient information of a single image and the redundancy of multi-source data, yielding a more accurate and comprehensive description of the scene or target. A new multi-sensor image fusion method based on adaptive weighting is presented. First, each source image is decomposed with the nonsubsampled contourlet transform (NSCT) to obtain a series of frequency subbands at different scales and directions. Second, the low-frequency subbands are fused with an adaptive weighting rule, and the high-frequency subbands are fused by selecting the coefficients with the largest gradient value. Finally, the fused image is obtained by the inverse nonsubsampled contourlet transform. Fusion experiments on infrared, visible, and SAR images show that the proposed method effectively preserves the information of the source images and significantly improves the fused image in terms of both visual quality and objective evaluation indicators.
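The fusion pipeline described above can be outlined as a minimal sketch in Python/NumPy, not the authors' implementation: nsct_decompose and nsct_reconstruct are hypothetical placeholders standing in for an NSCT library, and the local-energy weighting in fuse_low is one common way to realise "adaptive weighting"; the paper's exact weighting scheme may differ.

```python
# Sketch of the NSCT-based fusion pipeline (assumed interfaces, see note above).
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def fuse_low(low_a, low_b, window=7):
    """Adaptive weighted fusion of the low-frequency subbands.
    Weights here come from local energy in a sliding window
    (an assumed realisation of the adaptive-weighting rule)."""
    energy_a = uniform_filter(low_a ** 2, size=window)
    energy_b = uniform_filter(low_b ** 2, size=window)
    w_a = energy_a / (energy_a + energy_b + 1e-12)
    return w_a * low_a + (1.0 - w_a) * low_b

def fuse_high(high_a, high_b):
    """Largest-gradient rule: keep the coefficient whose local
    gradient magnitude is larger."""
    grad_a = np.hypot(sobel(high_a, axis=0), sobel(high_a, axis=1))
    grad_b = np.hypot(sobel(high_b, axis=0), sobel(high_b, axis=1))
    return np.where(grad_a >= grad_b, high_a, high_b)

def fuse_images(img_a, img_b, levels=3):
    # Step 1: NSCT decomposition into one low-frequency subband and,
    # per scale, a list of directional high-frequency subbands
    # (hypothetical placeholder calls).
    low_a, highs_a = nsct_decompose(img_a, levels)
    low_b, highs_b = nsct_decompose(img_b, levels)

    # Step 2: apply the two fusion rules subband by subband.
    fused_low = fuse_low(low_a, low_b)
    fused_highs = [[fuse_high(ha, hb) for ha, hb in zip(scale_a, scale_b)]
                   for scale_a, scale_b in zip(highs_a, highs_b)]

    # Step 3: inverse NSCT gives the fused image (hypothetical placeholder).
    return nsct_reconstruct(fused_low, fused_highs)
```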