Artificial Intelligence

Earned Trust through AI System Assurance

The emergence of artificial intelligence (AI) has ushered in a new era of technology, with the potential to revolutionize industries, increase productivity, and improve the quality of life for people across the world.

AI can play a vital role in healthcare by enabling the early detection of diseases, providing personalized treatment options, and improving patient outcomes. In finance, AI can aid in detecting fraudulent activity, managing risk, and providing customized investment advice. In education, AI can personalize learning by assessing individual students' strengths and weaknesses and adapting curricula to meet their needs. And in agriculture, it can help farmers make data-driven decisions that optimize crop yields, reduce carbon footprints, and increase efficiency.

While AI offers many potential advantages, it also poses risks of harm, and it is essential that these be accounted for both before and after AI systems are deployed.

The automation capabilities of AI and machine learning have the potential to displace jobs and cause significant shifts in the labor market. AI systems can mirror and amplify existing biases and discrimination in society, leading to unfair and unjust outcomes. The vast quantities of data collected and processed by AI systems present high-profile targets for cyberattacks, data breaches, and misuse. AI systems can be used to manipulate individuals and undermine democratic processes. Moreover, insufficient transparency and accountability around AI systems may harm individuals, communities, and businesses. Finally, there are emerging risks that are not yet well understood.

It is important that those who develop and deploy AI systems are accountable for their design and performance. That’s why NTIA is embarking on a new initiative to gather public comment and inform policymakers and other stakeholders about what steps might help ensure these systems are safe, effective, responsible, and lawful.

The Department of Commerce’s National Telecommunications and Information Administration (NTIA) is the President’s principal advisor on information technology and telecommunications policy. In this role, the agency will help develop the policies necessary to verify that AI systems work as claimed, and without causing harm. Our initiative will help build an ecosystem of AI audits, assessments, certifications, and other policies to support AI system assurance and create earned trust.

President Biden has been clear that when it comes to AI, we must both support responsible innovation and ensure appropriate guardrails to protect Americans’ rights and safety. The White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights provides an important framework to guide the design, development, and deployment of AI and other automated systems. The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework serves as a voluntary tool that organizations can use to manage risks posed by AI systems.

 

Related content


NTIA Seeks Comments on Supporting U.S. Data Center Growth

September 04, 2024

WASHINGTON – The Department of Commerce’s National Telecommunications and Information Administration (NTIA) today launched an inquiry into how federal policy can support the growth of U.S. data centers to meet the coming demand from artificial intelligence (AI) and other emerging technologies.

NTIA Supports Open Models to Promote AI Innovation

July 30, 2024

WASHINGTON – Today, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) issued policy recommendations embracing openness in artificial intelligence (AI) while calling for active monitoring of risks in powerful AI models.

NTIA’s Report on Dual-Use Foundation Models with Widely Available Model Weights recommends the U.S. government develop new capabilities to monitor for potential risks, but refrain from immediately restricting the wide availability of open model weights in the largest AI systems.
