Exigence

Abstract

This paper discusses the development of AI and the threat posed by the theoretical achievement of artificial superintelligence. AI is an increasingly significant fixture in our lives, and its influence will only grow in the future. The development of artificial general intelligence (AGI) would quickly lead to artificial superintelligence (ASI). AI researcher Steve Omohundro's universal drives of rational systems demonstrate why an ASI could behave in ways its designers never anticipated. A technological singularity may occur if an AI is allowed to undergo uncontrolled, rapid self-improvement, which could pose an extinction-level risk to the human race. Two possible safety measures, AI "boxing" and AI safety engineering, are explored with reference to the writings of computer scientist Roman Yampolskiy and AI researcher Joshua Fox.
