Currently, there are several active projects that seek to develop advanced Artificial Intelligence (AI).
Here, we will define advanced AI as:
Artificial General Intelligence (AGI): An AI system with broad intelligence capabilities, where intelligence is defined as “the ability of an agent to achieve goals in a wide range of environments” (Legg & Hutter, 2007). The capacity to generalize is usually characterized as one of the most sophisticated aspects of intelligence, and one that modern AI systems still cannot exhibit efficiently (Legg and Hutter's formal measure is sketched right after these definitions).
Human-level AI: An AI system with capabilities comparable to those of a human being. It is important to note that such a system need not possess the same type of intelligence as ours: an AGI could be as capable as a human while operating with a type of intelligence quite different from human cognition (Goertzel, 2014).
Superintelligence: A concept that traces back to Irving J. Good's “ultraintelligent machine”: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever” (Good, 1965, p. 33).
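For readers who want the formal version of the Legg and Hutter definition cited above, their universal intelligence measure can be sketched as follows (notation follows their paper, in simplified form; take this as an illustration rather than their full construction):

\[
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
\]

where \(\pi\) is an agent, \(E\) is the set of computable environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\), and \(V_{\mu}^{\pi}\) is the expected cumulative reward that \(\pi\) obtains in \(\mu\). In words: an agent is more intelligent the better it performs across all computable environments, with simpler environments weighted more heavily.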
Baum (2017) identified 45 R&D projects with the goal of developing advanced AI. Of the 45 projects reviewed, only 13 were actively engaged with the area of AI safety, while the vast majority did not specify any type of research focused on AI safety.
Fitzgerald et al. (2020) updated Baum's (2017) findings, raising the count to 72 active R&D projects focused on developing advanced AI. Of the 72 projects listed, only 18 had active engagement with the area of AI safety.
The table below summarizes the findings of Fitzgerald et al. (2020), “2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy,” a paper commissioned by the Global Catastrophic Risk Institute. The projects cited in this table are projects that seek, in some way, to produce advanced AI (i.e., AGI, human-level AI, or superintelligence).
For each project, the table lists (a schematic sketch of a table record follows this list):
The name of the project (with a link to its webpage);
The country/leader hosting it;
The responsible institution (Academic/Private Corporation/Public Corporation/Government/NGO);
Whether the project has links to the military sector;
Whether the project is open source;
The size of the project (Small/Medium/Large);
Its engagement with AI safety (Unspecified/Moderate/Active/Dismissive*).
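As a minimal illustration of how each table entry is structured, the Python sketch below encodes the columns above as a typed record. The class name, field names, and the example values are hypothetical, not drawn from the survey:

from dataclasses import dataclass
from enum import Enum

class Institution(Enum):
    ACADEMIC = "Academic"
    PRIVATE_CORPORATION = "Private Corporation"
    PUBLIC_CORPORATION = "Public Corporation"
    GOVERNMENT = "Government"
    NGO = "NGO"

class Size(Enum):
    SMALL = "Small"
    MEDIUM = "Medium"
    LARGE = "Large"

class SafetyEngagement(Enum):
    UNSPECIFIED = "Unspecified"
    MODERATE = "Moderate"
    ACTIVE = "Active"
    DISMISSIVE = "Dismissive"

@dataclass
class AGIProject:
    name: str                         # project name (linked to its webpage in the table)
    country: str                      # country/leader hosting the project
    institution: Institution          # responsible institution
    military_links: bool              # links with the military sector?
    open_source: bool                 # is the project open source?
    size: Size                        # Small / Medium / Large
    safety_engagement: SafetyEngagement

# Hypothetical example entry (not an actual row from the survey):
example = AGIProject(
    name="Example Project",
    country="USA",
    institution=Institution.ACADEMIC,
    military_links=False,
    open_source=True,
    size=Size.SMALL,
    safety_engagement=SafetyEngagement.UNSPECIFIED,
)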
Table References
Cerenaut Research: According to David Rawlinson, principal researcher at Cerenaut Research: “We’re not worried about runaway ‘paperclip maximizers’ or ‘skynet-style’ machine coups. Despite good intentions, AI risk-awareness groups such as MIRI may cause more harm than good by focusing public debate on the more distant existential risks of AI, while distracting from the immediate risks and harm being perpetrated right now using AI & ML”;
HTM: Jeff Hawkins, the lead researcher on the HTM project, dismisses AGI-related concerns, stating: “I don't see machine intelligence posing any threat to humanity”;
Omega: Eray Özkural, founder of Omega, describes AI safety and AGI-related risks as “comical” and “schizophrenic delusions”;
Sanctuary AI: Suzanne Gildert, founder of Sanctuary AI, says in a video interview: “I don’t see [AGI] as a threat to humanity, but a mirror that’s held up to our civilization that allows us to ask deep questions about ourselves”.
If you are interested in advocating for a ban on autonomous weapons and for the separation of the AI industry from the military sector, visit the Campaign to Stop Killer Robots. Autonomous weapons should never be created.
Nicholas Kluge Corrêa
President of AIRES at PUCRS
Master in Electrical Engineering and PhD candidate in Philosophy (PUCRS)
nicholas.correa@acad.pucrs.br, ORCID: 0000-0002-5633-6094
References
Fitzgerald, M., Boddy, A., & Baum, S. D. (2020). 2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. Global Catastrophic Risk Institute Technical Report 20-1.
Baum, S. (2017). A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. Global Catastrophic Risk Institute Working Paper 17-1.
Legg, S., & Hutter, M. (2007). Universal Intelligence: A Definition of Machine Intelligence. Minds and Machines, 17(4), 391-444. doi:10.1007/s11023-007-9079-x.
Goertzel, B. (2014). Artificial General Intelligence: Concept, State of the Art, and Future Prospects. Journal of Artificial General Intelligence, 5(1), 1-48. doi:10.2478/jagi-2014-0001.
Good, I. J. (1965). Speculations Concerning the First Ultraintelligent Machine. Advances in Computers, 6, 31-88. Academic Press. doi:10.1016/S0065-2458(08)60418-0.