The paper begins with definitions of weak AI (artificial intelligence) and strong AI. Strong AI seeks to create artificial persons and is the central topic of this paper. Most would agree that the identifying characteristics of human persons (minds) are intentionality, intelligence, and consciousness. The current analytical tradition of philosophy is, in general, based on a physicalist ontology, which leads to difficulties in understanding the nature of the human mind, especially intentionality and consciousness. The author then considers knowledge in the thought of Bernard Lonergan, and especially "consciousness" as a self-awareness immanent in cognitional and ethical activities. Many would argue that, both in individual development and in evolutionary history, consciousness is a strongly emergent property. Can there be computing machines that bypass the biological and neurological levels and are truly intelligent and conscious? In conclusion, different philosophical understandings of the human mind lead to different conclusions concerning the possibility of strong AI. If it is possible at all, strong AI still lies in the future. Future developments may force a rethinking of current philosophical principles. What should be our real concerns? Should there be limits on efforts to create intelligent machines? Or is it possible that misplaced concerns about the emergence of intelligent machines prevent us from dealing with the more immediate issues raised by current technologies?