This paper aims to develop an ethical theory of risk for assessing the technological risks posed by a variety of techniques and artefacts. Although the theory aims to assess technological risks, it does so in a non-scientific, non-quantitative way. The theory claims that there are four superior values in our risk society (humanity, fairness, exemption, and sustainable development) and argues that these values regulate technological risks more appropriately than other values, such as utility, freedom, and beneficence, and should therefore be given priority over them. In this paper, I apply the theory to two social controversies surrounding technological risk in Taiwan: the first is the controversy that swirled around genetically modified organisms in the 1990s; the second is the controversy currently engulfing artificial intelligence.