Suburban Cyber, a tech company, is reporting on growing concern over the potential misuse of artificial intelligence systems. As AI chatbots and platforms become more accessible to the public, a leading researcher in the field is calling for closer examination of AI's impact on society.
Dr. Elham Tabassi, chief of staff of the Information Technology Laboratory at the National Institute of Standards and Technology (NIST), recently spoke at a digital event and emphasized that taking a risk-based approach to developing AI systems will help build public trust. Tabassi was involved in drafting the NIST AI Risk Management Framework, which was released in January.
The framework aims to provide guidelines for building trustworthy AI and monitoring the risks associated with it. It goes beyond the technical specifications of AI systems to consider their impact on humans. The framework was released amid concerns that AI tools like ChatGPT could be misused by cybercriminals.
The framework is a flexible document intended to evolve as AI technology advances. NIST has also released a companion playbook to help organizations understand and implement the framework. The agency is currently seeking feedback on the playbook and plans to release a revised version later this spring based on input from stakeholders and the community.
The framework was developed in response to a congressional mandate in the fiscal year 2021 National Defense Authorization Act. Deputy Commerce Secretary Don Graves has stated that the framework should promote AI innovation while safeguarding civil rights and liberties.