This paper presents a novel approach to mobile robot localization and navigation in a large unknown workspace using a set of vision sensors, based on an effective on-line calibration strategy. Each adjacent pair of Internet protocol (IP) cameras used for visual sensing is assumed to have an overlapping field of view, but their positions and orientations with respect to both the Cartesian space and the mobile robot are unknown. The idea is to command the mobile robot to actively calibrate against any stationary IP camera that can observe the five pre-selected color-coded features mounted on the robot, thereby establishing coordinate transformations among the IP cameras. In particular, the coordinate transformation from the mobile robot to each IP camera is recursively updated from all observed data together with the kinematic model of the mobile robot. The proposed system requires only a PC and a network of stationary IP cameras, with no off-line calibration procedure.
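The chaining of coordinate transformations described above can be illustrated with a minimal sketch. The assumptions here are not from the paper: we suppose the five color-coded features yield 2-D point correspondences on the ground plane, fit a single-shot SE(2) robot-to-camera transform by least-squares (Kabsch) alignment rather than the paper's recursive estimator, and compose two such transforms to relate a pair of cameras through the shared robot pose. All function names are hypothetical.

```python
import numpy as np

def fit_se2(robot_pts, cam_pts):
    """Least-squares SE(2) transform mapping robot-frame points to
    camera-frame points (Kabsch/Procrustes on 2-D correspondences).
    Illustrative stand-in for the paper's recursive update."""
    mu_r = robot_pts.mean(axis=0)
    mu_c = cam_pts.mean(axis=0)
    H = (robot_pts - mu_r).T @ (cam_pts - mu_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # enforce a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_c - R @ mu_r
    T = np.eye(3)                                 # homogeneous 3x3 transform
    T[:2, :2], T[:2, 2] = R, t
    return T

def chain_cameras(T_a_robot, T_b_robot):
    """Transform from camera A's frame to camera B's frame via the
    shared robot pose: T_{B<-A} = T_{B<-robot} @ inv(T_{A<-robot})."""
    return T_b_robot @ np.linalg.inv(T_a_robot)
```

Once each camera has been calibrated against the robot, `chain_cameras` relates any pair of cameras with overlapping coverage, which is how a network-wide set of transformations can be built up without off-line calibration.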