We present an aggressive region-based visibility sampling algorithm for general 3D scenes. Our algorithm exploits the depth and color information of image-space samples to construct an importance function that estimates the reliability of the potentially visible set (PVS) computed from a view-cell boundary, and places new samples at positions deemed optimal by this function. The importance function guides visibility samples toward depth discontinuities in the scene, so that more visible objects are sampled and visual errors are reduced; the color information helps judge whether the remaining visual errors are significant. Our experiments show that our sampling approach improves both PVS accuracy and computation speed compared with the adaptive approach of [NB04] and the object-based approach of [WWZ+06].