Selecting objects in real-world settings is currently difficult to automate and requires significant manual effort. We propose a gaze-based gesture approach using wearable eye trackers. However, effective gaze-based selection of real-world objects faces several challenges, such as the double-role problem and the Midas touch problem. Prior studies required explicit manual activation and deactivation to confirm the user's intention, which impedes fast and continuous interaction. We present EyeLasso, a fast gaze-based selection technique that allows users to select the target they see with a single lasso gaze gesture, without requiring additional manual input. EyeLasso uses a Random Forest classifier for gesture detection and GrabCut (via OpenCV) to improve the accuracy of target selection. Results from a six-user experiment with ten object-selection tasks, covering both gesture detection and item selection, show that EyeLasso selected the target with 90% accuracy without requiring manual input (0.17 unintended selections per two minutes; 10% false-negative rate).
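
To illustrate the segmentation step mentioned above, the following minimal Python sketch shows how GrabCut can be invoked through OpenCV to refine a rough bounding rectangle (e.g., one derived from a lasso gaze path) into an object mask. The frame source, the rectangle values, and the iteration count are illustrative assumptions, not details of our implementation.

    import cv2
    import numpy as np

    # Scene frame from the eye tracker's world camera (file path is hypothetical).
    frame = cv2.imread("scene_frame.png")

    # Bounding rectangle (x, y, width, height) around the gaze lasso; values are placeholders.
    lasso_rect = (120, 80, 200, 150)

    # Working buffers required by OpenCV's GrabCut.
    mask = np.zeros(frame.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)

    # Run GrabCut initialized with the rectangle; 5 iterations is an arbitrary choice.
    cv2.grabCut(frame, mask, lasso_rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

    # Keep definite and probable foreground pixels as the selected object mask.
    object_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    selection = cv2.bitwise_and(frame, frame, mask=object_mask)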