When we talk about AI Safety, we often restrict “safety” to areas such as adversarial machine learning, machine learning fairness, and interpretability. However, we must not forget that models used in real applications almost never exist in a vacuum.
ML models are usually embedded within a larger context that, no matter how robust, fair, or interpretable the model itself may be, may still be vulnerable to attackers with malicious agendas. As some authors have already pointed out, ML (like many other fields) has a problem of systemic security:
"Research in systemic security aims to address broader contextual risks to how ML systems are handled. Both cyber security and decision making can decisively affect whether ML systems will fail or be misdirected. Machine learning systems do not exist in a vacuum, and the security of the larger context can influence how such systems are handled. ML systems are more likely to fail or be misdirected if the larger context in which they operate is insecure or turbulent. " (Hendrycks et al. 2022)
Thus, when we talk about “security” in AI Safety, we cannot forget that Information Security must be part of our larger context. It does no good to have a robust model if it is exposed through poorly developed software or poorly managed organizations.
Information security is, literally, the practice of protecting information: preventing, or reducing the likelihood of, unauthorized or inappropriate access to information systems, as well as the unlawful use, disclosure, disruption, deletion, corruption, modification, inspection, recording, or degradation of those systems.
Cyber Security is a vast and extremely dynamic field of study, with vulnerabilities, patches, and vulnerabilities in those patches being published constantly. Attackers employ a wide arsenal of techniques to probe their targets’ defenses, from SQL injection and Cross-Site Scripting to DNS Cache Poisoning and Slowloris attacks. Organizations must therefore be prudent and cautious when building and maintaining their information infrastructure.
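To make the first of those attacks concrete, here is a minimal sketch (in Python, using the standard sqlite3 module and a hypothetical `users` table of our own invention, not anything from our page) of how a query built by string concatenation can be subverted, and how a parameterized query avoids the problem:

```python
import sqlite3

def find_user_unsafe(conn, username):
    """Vulnerable: the input is pasted into the SQL string, so an input such as
    "' OR '1'='1" changes the meaning of the query itself."""
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    """Safer: a parameterized query keeps the input as data, never as SQL syntax."""
    return conn.execute("SELECT id, name FROM users WHERE name = ?", (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users (name, secret) VALUES ('alice', 's3cr3t')")
    print(find_user_unsafe(conn, "' OR '1'='1"))  # leaks every row
    print(find_user_safe(conn, "' OR '1'='1"))    # returns nothing
```

The same principle, keeping untrusted input as data rather than executable syntax, underlies defenses against many injection-style attacks.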
Perhaps one of the ways (certainly one of the most familiar to all of us) in which we can begin to intervene to create a culture of security is by improving how we guard the keys that protect us: our passwords.
To this end, we are making available on our website a page entitled “Password Security”. There you will find information on how certain types of attacks (password cracking) are carried out, which tools attackers use, how to create “entropically” secure passwords, and how you can measure the strength of your own passwords with tools borrowed from Information Theory. All of this can be done interactively on the page.
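As an illustration of what we mean by an “entropic” measure, here is a minimal sketch of the commonly used estimate of roughly length × log2(pool size) bits. The function names are ours, and the interactive tools on the page may compute strength differently:

```python
import math
import string

def charset_size(password: str) -> int:
    """Estimate the size of the symbol pool the password appears to draw from."""
    pools = [string.ascii_lowercase, string.ascii_uppercase, string.digits, string.punctuation]
    return sum(len(pool) for pool in pools if any(c in pool for c in password))

def password_entropy_bits(password: str) -> float:
    """Naive estimate: length * log2(pool size). Assumes symbols are chosen
    independently and uniformly at random, which human-chosen passwords rarely are."""
    n = charset_size(password)
    return len(password) * math.log2(n) if n else 0.0

if __name__ == "__main__":
    for pwd in ["123456", "qwerty", "Tr0ub4dor&3"]:
        print(f"{pwd!r}: ~{password_entropy_bits(pwd):.1f} bits")
```

By this estimate, each additional character drawn from a 94-symbol pool adds about 6.6 bits, which is why length tends to matter more than exotic symbols.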
Our web page has been developed for educational purposes only. The passwords and cracked hashes it contains were created for this specific purpose. SHA-1 is deprecated and should not be used for security purposes. Using password cracking methods to recover your own password is lawful; using them to gain access to someone else’s password may lead to criminal charges.
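To see why unsalted SHA-1 hashes of weak passwords offer so little protection, here is a minimal dictionary-attack sketch against a hash we generate ourselves; the wordlist and the example password are hypothetical, not the ones used on the page:

```python
import hashlib

def sha1_hex(password: str) -> str:
    """Hex SHA-1 digest of a password (insecure by design; demonstration only)."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

def dictionary_attack(target_hash: str, wordlist):
    """Hash every candidate and compare it with the target digest."""
    for candidate in wordlist:
        if sha1_hex(candidate) == target_hash:
            return candidate
    return None

if __name__ == "__main__":
    # A weak, hypothetical password hashed with unsalted SHA-1.
    target = sha1_hex("sunshine")
    guess = dictionary_attack(target, ["password", "123456", "qwerty", "sunshine"])
    print("Recovered:", guess)  # -> Recovered: sunshine
```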