NIST Releases Draft Guidance on AI Safety and Standards

The U.S. National Institute of Standards and Technology (NIST) has taken a proactive step in addressing the challenges posed by artificial intelligence (AI), releasing four draft publications designed to ensure the safety, security, and reliability of AI systems. The drafts are open for public comment until June 2, 2024. The release responds to the October 2023 AI Executive Order, which calls for mitigating the risks of AI technologies while promoting responsible innovation and maintaining U.S. technological leadership.

Alleviating generative AI risks

A central concern in NIST's draft publications is the security risk arising from generative AI technologies. The draft Generative AI Profile for the AI Risk Management Framework (AI RMF) identifies 12 risks, ranging from easier access to sensitive information to the propagation of hate speech and malicious content. Addressing these risks has been a key focus for NIST, which has identified over 400 risk management actions that organizations can consider. The profile gives developers a structure for aligning these actions with their own goals and priorities.

Minimizing training data risks 

The drafts also address how to secure the data used to train AI systems. The draft publication on Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, which builds on NIST's existing guidance, aims to protect the integrity of AI systems amid concerns about malicious training data. NIST recommends practices for securing code and offers solutions for data problems spanning collection and use, making AI systems more resilient against potential threats.

Encouraging transparency in AI-created content

In response to the rapid growth of synthetic digital content, NIST is developing mitigation measures in its forthcoming report Reducing Risks Posed by Synthetic Content. Through techniques such as digital watermarking and metadata recording, NIST aims to make it possible to track and identify altered media, helping to prevent harms such as the distribution of non-consensual intimate images and child sexual abuse material.

Driving global engagement on AI standards

Recognizing that international cooperation is key to establishing AI-related standards, NIST has produced a draft Plan for Global Engagement on AI Standards. The plan aims to foster cooperation and coordination among international allies, standards-developing organizations, and the private sector to accelerate the development of AI technology standards. By prioritizing content provenance and testing methods, NIST seeks to build a robust regime that ensures the safe and ethical operation of AI technologies worldwide.

Initiating NIST GenAI

In addition, the institute has launched NIST GenAI, an evaluation program that assesses and measures the capabilities of generative AI tools. NIST GenAI will allow the U.S. AI Safety Institute at NIST to issue challenge problems and pilot evaluations that help distinguish AI-generated content from human-produced content. The initiative's main goal is to promote information integrity and provide guidance on the ethical dimensions of content creation in the AI era.

NIST's announcement of these draft reports and the launch of NIST GenAI mark a proactive, development-oriented effort to address the AI-related challenges facing society while keeping innovation secure. By soliciting input from key stakeholders, including companies that develop or deploy AI technologies, NIST can shape its forthcoming guidance on AI safety and standards. Through active involvement in this process, stakeholders can help establish best practices and a common industry approach, ultimately leading to a safer and more trustworthy AI ecosystem.

