This paper focuses on the normative aspects of AI prediction, that is, technologies that predict the future by analyzing big data about the past. While such technology seems promising for forecasting extreme weather or rehabilitating endangered wildlife, it is controversial when applied to human beings: an Israeli company, for example, uses AI prediction to identify possible terrorists, and the Chinese government uses it to locate potential dissidents. This paper explores some of the resulting normative issues and argues: (1) AI-derived conclusions are inexplicable not because machines fail to provide mechanical steps, but because our limited cognitive power cannot assign meaning to those steps, which probably number in the billions, and thus we fail to understand the conclusions AI reaches; (2) while AI is said to face the problem of induction, to be a black box, and to suffer other epistemological defects, these worries apply to the human brain as well, so AI and the human brain differ in degree rather than in kind; (3) the necessity argument and the reality condition cannot be used to exclude radical cases (e.g., China's social credit system) without also excluding existing laws or social norms; and (4) the principle of autonomy has advantages, including balancing power with responsibility and reducing public distrust.