From industry to entertainment, wearable devices are becoming increasingly popular. As the footage recorded by wearable cameras grows longer, extracting the parts of first-person-view videos (egocentric videos) that interest the user has become more important, a task that spans many levels of computer vision. This work proposes a wearable social camera, an egocentric camera that summarizes, from the whole video, all social interaction activities between other people and the camera wearer. The core technology of the wearable social camera is egocentric video summarization for social interaction. Unlike other work on second-person action/interaction recognition in egocentric videos, which focuses on distinguishing different actions, this work seeks the features common to all interactions. These common features, named Interaction Features (IF), are proposed to comprise three parts: physical information of the head, body language, and mouth expression. Furthermore, a Hidden Markov Model (HMM) is employed to model the interaction sequences, and a summarized video is generated with a Hidden Markov Support Vector Machine (HM-SVM). Experimental results on a large life-log dataset show that the proposed system performs well at summarizing life-log videos. Furthermore, we design and implement the work in an ASIC (Application-Specific Integrated Circuit) architecture and realize the system, including a face-landmark regression pipeline, on a DE2-115 FPGA (Field Programmable Gate Array).