This paper demonstrates that, through the use of the multiple input sources of the Microsoft Kinect, namely the color and depth streams, finger recognition on a hand can be achieved for the purpose of gesture control. Using the Kinect skeleton-tracking feature, the positions of the hand and other joints are extracted. Data from both the depth sensor and the color image are then used to locate the fingertips. By cropping the data to a region around the hand, only the color and depth data pertaining to the hand remain. Depth values that fall outside a threshold around the depth of the hand joint are removed, thereby filtering out erroneous background data. Similarly, in the color stream, pixels at and around the hand joint determine the average skin color, and only pixels matching that color are retained. Cross-referencing these two data sets allows for a more precise hand shape while retaining accurate depth values from the depth stream. After an outline of this hand shape is obtained, fingertips are determined from the path the outline traces around the hand. Hand shapes can be defined based on the positions of the fingertips, and, with additional parameters, gestures derived from these hand shapes can be mapped to commands. The experiments demonstrate the capability of capturing fingertips and using their positions to recognize hand shapes for gesture control.
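As a minimal sketch of the segmentation step described above, the following Python fragment shows how a depth band around the hand joint and a skin-color mask sampled at the hand joint can be intersected to isolate the hand. The function name, array names, and threshold values are hypothetical, and it assumes the depth and color frames have already been retrieved from the Kinect and registered to a common image coordinate system; it is an illustration of the idea, not the paper's implementation.

```python
import numpy as np

def segment_hand(depth_mm, color_rgb, hand_px, hand_depth_mm,
                 depth_band_mm=100, color_tol=40, patch=5):
    """Isolate the hand by cross-referencing a depth mask and a skin-color mask.

    depth_mm      -- (H, W) depth frame in millimeters, registered to the color frame
    color_rgb     -- (H, W, 3) color frame
    hand_px       -- (row, col) of the hand joint projected into image coordinates
    hand_depth_mm -- depth of the hand joint in millimeters
    """
    r, c = hand_px

    # Depth mask: keep only pixels within a band around the hand joint's depth,
    # filtering out the background behind the hand.
    depth_mask = np.abs(depth_mm.astype(np.int32) - hand_depth_mm) < depth_band_mm

    # Skin-color reference: average color of a small patch around the hand joint.
    patch_px = color_rgb[r - patch:r + patch + 1, c - patch:c + patch + 1]
    skin_color = patch_px.reshape(-1, 3).mean(axis=0)

    # Color mask: keep only pixels whose color is close to the skin reference.
    color_dist = np.linalg.norm(color_rgb.astype(np.float32) - skin_color, axis=2)
    color_mask = color_dist < color_tol

    # Cross-reference the two masks: a pixel belongs to the hand only if it
    # satisfies both the depth and the skin-color criteria.
    return depth_mask & color_mask
```

The resulting binary mask would then be traced to obtain the hand outline, along which fingertips can be located from the path the contour follows around the hand.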