AI research has entered an era in which models are scaled to sizes that, just a few years ago, were considered out of reach.
Examples like GPT-3 (a language model with 175B parameters created by OpenAI) and WuDao (a multimodal model with 1.75T parameters created by the Beijing Academy of Artificial Intelligence) show that AI models are being scaled into general-purpose systems.
However, with great capabilities come great risks. Many of these models could be put to malicious use, a possibility that demands vigilance.
To help with this monitoring, November 2021 saw the release of AI Tracker, an online dashboard that lists the largest (and most capable) models ever created, along with an analysis of the potential risks associated with each. For example, AlphaFold2, an AI system that predicts the three-dimensional structure a protein will adopt based solely on its amino acid sequence, could in principle be misused to aid the development of biological weapons.
The platform was developed to help researchers and regulators better understand the safety risk landscape of AI.
For more information, go to this link!
For a list of R&D projects seeking to develop advanced Artificial Intelligence (together with data such as their engagement with safety and their ties to the military), go to this link.