Skip to content

Commit

Permalink
Added new module on AI security
Browse files Browse the repository at this point in the history
  • Loading branch information
banreyms committed Apr 4, 2024
1 parent f36edf9 commit 8c63505
Show file tree
Hide file tree
Showing 5 changed files with 155 additions and 1 deletion.
62 changes: 62 additions & 0 deletions 8.1 AI security key concepts.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,62 @@
# AI security key concepts

## How does AI security differ from traditional cyber security?

Securing AI systems presents unique challenges compared to traditional cybersecurity, mainly due to the nature of AI’s learning capabilities and decision-making processes. Here are some key differences:

- **Data Integrity**: AI systems rely heavily on data for learning. [Ensuring the integrity of this data is crucial, as attackers can manipulate the data to influence AI behavior, a tactic known as data poisoning
- **Model Security**: The AI’s decision-making model itself can be a target. [Attackers may attempt to reverse-engineer the model or exploit its weaknesses to make incorrect or harmful decisions
- **Adversarial Attacks**: AI systems can be susceptible to adversarial attacks, where slight, often imperceptible alterations to input data can cause the AI to make errors or incorrect predictions
- **Infrastructure Security**: While traditional cybersecurity also focuses on protecting infrastructure, AI systems may have additional layers of complexity, such as cloud-based services or specialized hardware, that require specific security measures
- **Ethical Considerations**: The use of AI in security brings ethical considerations, such as privacy concerns and the potential for bias in decision-making, which must be addressed in the security strategy

Overall, securing AI systems requires a different approach that considers the unique aspects of AI technology, including the protection of data, models, and the AI’s learning process, while also addressing the ethical implications of AI deployment


AI security and traditional cybersecurity share many similarities, but they also have some distinct differences due to the unique characteristics and capabilities of artificial intelligence systems. Here's how they differ:

- **Complexity of Threats**: AI systems introduce new layers of complexity to cybersecurity. Traditional cybersecurity primarily deals with threats like malware, phishing attacks, and network intrusions. However, AI systems can be vulnerable to attacks such as adversarial attacks, data poisoning, and model evasion, which specifically target the machine learning algorithms themselves.

- **Attack Surface**: AI systems often have larger attack surfaces compared to traditional systems. This is because they not only rely on software but also on data and models. Attackers can target the training data, manipulate models, or exploit vulnerabilities in the algorithms themselves.

- **Adaptability of Threats**: AI systems can adapt and learn from their environment, which can make them more susceptible to adaptive and evolving threats. Traditional cybersecurity measures may not be sufficient to defend against attacks that constantly evolve based on the behavior of the AI system.

- **Interpretability and Explainability**: Understanding why an AI system made a particular decision is often more challenging compared to traditional software systems. This lack of interpretability and explainability can make it difficult to detect and mitigate attacks on AI systems effectively.




- **Data Privacy Concerns**: AI systems often rely on large amounts of data, which can introduce privacy risks if not properly handled. Traditional cybersecurity measures may not adequately address these data privacy concerns specific to AI systems.




- **Regulatory Compliance**: The regulatory landscape for AI security is still evolving, with specific regulations and standards emerging to address the unique challenges posed by AI systems. Traditional cybersecurity frameworks may need to be extended or adapted to ensure compliance with these new regulations.




- **Ethical Considerations**: AI security involves not only protecting systems from malicious attacks but also ensuring that AI systems are used in an ethical and responsible manner. This includes considerations such as fairness, transparency, and accountability, which may not be as prominent in traditional cybersecurity.


## How is AI the same as securing traditional IT systems?

Securing AI systems shares several fundamental principles with traditional cybersecurity:

- **Threat Protection**: Both AI and traditional systems need to be safeguarded against unauthorized access, data modification, and destruction, as well as other common threats.
- **Vulnerability Management**: Many vulnerabilities that affect traditional systems, such as software bugs or misconfigurations, can also impact AI systems.
- **Data Security**: The protection of processed data is crucial in both domains to prevent data breaches and ensure confidentiality.
- **Supply Chain Security**: Both types of systems are susceptible to supply chain attacks, where a compromised component can undermine the security of the entire system.

These similarities highlight that while AI systems introduce new security challenges, they also require the application of established cybersecurity practices to ensure robust protection. It’s a blend of leveraging traditional security wisdom while adapting to the unique aspects of AI technology.

## Further reading

- [Not with a Bug, But with a Sticker [Book] (oreilly.com)](https://www.oreilly.com/library/view/not-with-a/9781119883982/)

- [Intro to AI Security Part 1: AI Security 101 | by HarrietHacks | Medium](https://medium.com/@harrietfarlow/intro-to-ai-security-part-1-ai-security-101-b8662a9efe5)

- [Best practices for AI security risk management | Microsoft Security Blog](https://www.microsoft.com/en-us/security/blog/2021/12/09/best-practices-for-ai-security-risk-management/?WT.mc_id=academic-96948-sayoung)

- [OWASP AI Security and Privacy Guide | OWASP Foundation](https://owasp.org/www-project-ai-security-and-privacy-guide/)

30 changes: 30 additions & 0 deletions 8.2 AI security capabilities.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,30 @@
# AI security capabilities

## What tools and capabilities do we have to secure AI systems currently?

Currently, there are several tools and capabilities available to secure AI systems:

- **Counterfit**: An open-source automation tool for security testing of AI systems, designed to help organizations conduct AI security risk assessments and ensure the robustness of their algorithms.
- **Adversarial Machine Learning Tools**: These tools evaluate the robustness of machine learning models against adversarial attacks, helping to identify and mitigate vulnerabilities.
- **AI Security Toolkits**: There are open-source toolkits available that provide resources for securing AI systems, including libraries and frameworks for implementing security measures.
- **Collaborative Platforms**: Partnerships between companies and AI communities to develop AI-specific security scanners and other tools to secure the AI supply chain.

These tools and capabilities are part of a growing field dedicated to enhancing the security of AI systems against a variety of threats. They represent a combination of research, practical tools, and industry collaboration aimed at addressing the unique challenges posed by AI technologies.

## What about AI red teaming? How does that differ from traditional security red teaming?

AI red teaming differs from traditional security red teaming in several key aspects:

- **Focus on AI Systems**: AI red teaming specifically targets the unique vulnerabilities of AI systems, such as machine learning models and data pipelines, rather than traditional IT infrastructure.
- **Testing AI Behavior**: It involves testing how AI systems respond to unusual or unexpected inputs, which can reveal vulnerabilities that could be exploited by attackers.
- **Exploring AI Failures**: AI red teaming looks at both malicious and benign failures, considering a broader set of personas and potential system failures beyond just security breaches.
- **Prompt Injection and Content Generation**: AI red teaming also includes probing for failures like prompt injection, where attackers manipulate AI systems to produce harmful or ungrounded content.
- **Ethical and Responsible AI**: It’s part of ensuring responsible AI by design, making sure AI systems are robust against attempts to make them behave in unintended ways.

Overall, AI red teaming is an expanded practice that not only covers probing for security vulnerabilities but also includes testing for other types of system failures specific to AI technologies. It’s a crucial part of developing safer AI systems by understanding and mitigating novel risks associated with AI deployment.

## Further reading

- [Microsoft AI Red Team building future of safer AI | Microsoft Security Blog](https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/?WT.mc_id=academic-96948-sayoung)
- [Announcing Microsoft’s open automation framework to red team generative AI Systems | Microsoft Security Blog](https://www.microsoft.com/en-us/security/blog/2024/02/22/announcing-microsofts-open-automation-framework-to-red-team-generative-ai-systems/?WT.mc_id=academic-96948-sayoung)
- [AI Security Tools: The Open-Source Toolkit | Wiz](https://www.wiz.io/academy/ai-security-tools)
55 changes: 55 additions & 0 deletions 8.3 Responsible AI.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,55 @@
# Responsible AI

## What is responsible AI and how does it relate to AI security?

Responsible AI refers to the development and use of artificial intelligence in a way that is ethical, transparent, and aligns with societal values. It encompasses principles such as fairness, accountability, and robustness, ensuring that AI systems are designed and operated to benefit individuals, communities, and society as a whole.

The relationship between responsible AI and AI security is significant because:

- **Ethical Considerations**: Responsible AI involves ethical considerations that directly impact security, such as privacy and data protection. Ensuring that AI systems respect user privacy and secure personal data is a key aspect of responsible AI.
- **Robustness and Reliability**: AI systems must be robust against manipulation and attacks, which is a core principle of both responsible AI and AI security. This includes protecting against adversarial attacks and ensuring the integrity of AI decision-making processes.
- **Transparency and Explainability**: Part of responsible AI is making sure that AI systems are transparent and their decisions can be explained. This is crucial for security, as stakeholders need to understand how AI systems operate to trust their security measures.
- **Accountability**: AI systems should be accountable for their actions, which means there must be mechanisms in place to trace decisions and rectify any issues. This aligns with security practices that monitor and audit system activities to prevent and respond to breaches.

In essence, responsible AI and AI security are intertwined, with responsible AI practices enhancing the security of AI systems and vice versa. Implementing responsible AI principles helps create AI systems that are not only ethically sound but also more secure against potential threats.

## How can I ensure my AI system is both secure and ethical?

Ensuring that your AI system is both secure and ethical involves a multi-faceted approach that includes the following steps:

- **Adhere to Ethical Principles**: Follow established ethical guidelines which emphasize human, societal, and environmental wellbeing; fairness; privacy protection; reliability; transparency; contestability; and accountability.

- **Implement Robust Security Measures**: Use proactive security testing and AI trust, risk, security management programs to protect against threats and vulnerabilities.

- **Engage Diverse Stakeholders**: Involve a wide range of participants in the AI development process, including ethicists, social scientists, and representatives from affected communities to ensure diverse perspectives and values are considered.

- **Ensure Transparency and Explainability**: Make sure that the AI’s decision-making processes are transparent and can be explained, allowing for greater trust and easier identification of potential biases or errors.

- **Maintain Data Privacy**: Protect the privacy and authenticity of data through encryption and other data protection measures to respect users’ privacy rights.

- **Enable Human Oversight**: Implement mechanisms for human oversight to allow for the contestability of decisions made by AI systems and to ensure accountability.

- **Stay Informed on AI Safety**: Keep up-to-date with the latest research and discussions on AI safety to understand the evolving landscape of AI security and ethics.

- **Comply with Regulations**: Ensure that your AI system complies with all relevant laws and regulations, which may include data protection laws, anti-discrimination laws, and industry-specific guidelines.

## Can you give me some examples of a security issue caused by unethical use of AI?

Here are some examples of security issues that can arise from the unethical use of AI:

- **Biased Decision-Making**: AI systems can perpetuate and amplify existing biases if they are trained on biased data sets. For instance, if a search engine is trained on data that reflects societal stereotypes, it may display biased search results, which can lead to unfair treatment or discrimination.

- **AI in Judicial Systems**: The use of AI in legal decision-making can raise ethical concerns, especially if the AI’s decision-making process lacks transparency or is influenced by biased data. This could result in unjust legal outcomes and infringe on individuals’ rights.

- **Manipulation of AI Systems**: AI systems can be susceptible to adversarial attacks, where slight modifications to input data can cause incorrect outcomes. For example, autonomous vehicles could be misled to misinterpret traffic signs, leading to safety risks.

- **AI-Powered Surveillance**: The deployment of AI for surveillance purposes can lead to privacy violations, especially if used without proper consent or in ways that infringe on individual freedoms. This can be particularly problematic in authoritarian regimes that may use AI to monitor and suppress dissent.

These examples highlight the importance of ethical considerations in the development and deployment of AI systems to prevent security issues and protect individuals’ rights and privacy.

## Further reading

- [Microsoft Responsible AI Standard v2 General Requirements](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5cmFl?culture=en-us&country=us&WT.mc_id=academic-96948-sayoung)
- [Responsible AI (mit.edu)](https://sloanreview.mit.edu/big-ideas/responsible-ai/)
- [13 Principles for Using AI Responsibly (hbr.org)](https://hbr.org/2023/06/13-principles-for-using-ai-responsibly)

4 changes: 4 additions & 0 deletions 8.4 End of module quiz.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,4 @@
# End of module quiz


[**End of module quiz**](https://forms.office.com/Pages/DesignPageV2.aspx?prevorigin=Marketing&origin=NeoPortalPage&subpage=design&id=v4j5cvGGr0GRqy180BHbR1irKnVJZ_RBhccteqa39A9UQkdMNkdBMFVCWlZYMURINENWM1ZCT0FaUy4u)
5 changes: 4 additions & 1 deletion README.md
Original file line number Diff line number Diff line change
Expand Up @@ -64,7 +64,10 @@ Ultimately, you could consider taking the [Exam SC-900: Microsoft Security, Comp
| **7.1** | Data security fundamentals | [Data security key concepts](https://github.com/microsoft/Security-101/blob/main/7.1%20Data%20security%20key%20concepts.md) | Learn about data classification and retention and why this is important to an organization. |
| **7.2** | Data security fundamentals | [Data security capabilities](https://github.com/microsoft/Security-101/blob/main/7.2%20Data%20security%20capabilities.md) | Learn about data security tooling – DLP, inside risk management, data governance, etc. |
| **7.3** | [End of module quiz](https://github.com/microsoft/Security-101/blob/main/7.3%20End%20of%20module%20quiz.md) |

| **8.1** | AI security fundamentals | [AI security key concepts](https://github.com/microsoft/Security-101-beginners/blob/main/8.1%20AI%20security%20key%20concepts.md) | Learn about confidentiality, availability and integrity. Also authenticity and also nonrepudiation and privacy. |
| **8.2** | AI security fundamentals | [AI security capabilities](https://github.com/microsoft/Security-101-beginners/blob/main/8.2%20AI%20security%20capabilities.md) | Learn about AI security tooling and the controls that can be used to secure AI. |
| **8.3** | AI security fundamentals | [Responsible AI](https://github.com/microsoft/Security-101-beginners/blob/main/8.3%20Responsible%20AI.md) | Learn about what responsible AI is and AI specific harms that security professionals need to be aware of. |
| **8.4** | [End of module quiz](https://github.com/microsoft/Security-101/blob/main/8.4%20End%20of%20module%20quiz.md)
## 🎒 Other Courses

Our team produces other courses! Check out:
Expand Down

0 comments on commit 8c63505

Please sign in to comment.