Synthesized speech need not be expressionless. By identifying the effects of emotion on speech and choosing an appropriate representation, the generation of affect becomes computationally feasible. However, whether the emotion conveyed by a synthesized voice affects perceptions of the emotional valence of content has not been verified. This study applied electroencephalography (EEG) and a questionnaire survey to examine the feasibility of conveying the emotion of content through a synthesized voice. A 2×2 within-subject experiment was conducted with two voices (happy voice/sad voice) and two contents (happy content/sad content): in two of the four news stories the voice emotion and content emotion were matched, and in the other two they were mismatched. The EEG responses and questionnaire data were collected and analyzed. The results showed that (1) synthesized voice significantly affected perceptions of the emotional valence of content, (2) credibility was not influenced regardless of whether voice emotion and content emotion were matched or mismatched, and (3) synthesized voice significantly affected participants' perceptions of suitability. Implications for design are discussed in this paper.