We present a privacy-preserving secure aggregation system that computes the global model in federated learning while protecting each participant's sensitive data throughout the training process. As long as at least two honest participants remain, the privacy of every honest participant's sensitive data is guaranteed under secure aggregation. We employ decentralized anonymity and data obfuscation so that malicious attackers, even when colluding with corrupted participants, learn only the aggregated model update and nothing about any particular participant's sensitive data. Extending primitive secure aggregation, we relax the privacy requirement from an honest majority of participants down to just two honest participants via decentralized anonymity, and we prevent single-point attacks on the aggregator through dynamic aggregator selection without relying on a trusted third party.
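The data-obfuscation idea behind secure aggregation can be illustrated with a minimal pairwise-masking sketch: each participant adds masks that cancel only when all masked updates are summed, so the aggregator recovers the sum without seeing any individual update. This is a generic illustration of additive masking, not the paper's exact protocol; the function names, modulus, and seed are assumptions for the example.

```python
import random

def pairwise_masks(n, modulus, seed=0):
    """Generate masks with masks[i][j] = -masks[j][i] (mod modulus),
    so every mask cancels when all participants' values are summed."""
    rng = random.Random(seed)
    masks = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(modulus)
            masks[i][j] = m
            masks[j][i] = (-m) % modulus
    return masks

def mask_update(update, i, masks, modulus):
    """Participant i obfuscates its local update with its pairwise masks."""
    return (update + sum(masks[i])) % modulus

# Each participant holds a private scalar model update (toy values).
updates = [5, 11, 7]
MOD = 2 ** 16
masks = pairwise_masks(len(updates), MOD, seed=42)
masked = [mask_update(u, i, masks, MOD) for i, u in enumerate(updates)]

# The aggregator sums the masked values; the pairwise masks cancel,
# revealing only the aggregate, never an individual update.
aggregate = sum(masked) % MOD
assert aggregate == sum(updates) % MOD
```

In practice the pairwise masks are derived from shared secrets (e.g. Diffie-Hellman key agreement) rather than a common seed, but the cancellation property that hides individual updates is the same.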