Plan

The Plan section of the AI Toolkit supports responsible AI practices, provides insight into Microsoft's approach to responsible AI, and suggests ways that you can get your institution and policies AI-ready.

Understand principles of responsible AI

Considering responsible AI principles when implementing AI in education helps ensure these technologies are used safely and in ways that enhance educational opportunities for students while preparing them for the future. Responsible AI principles can help inform policies and usage guidelines adopted by districts, states, and ministries of education. They help ensure that AI is used in ways that are fair and transparent and that respect the privacy of students and staff. At the core of Microsoft's AI work are six key principles that define and guide responsible AI use: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

Graphic showing Microsoft’s responsible AI principles: Fairness, Reliability and safety, Privacy and security, Inclusiveness, Transparency, and Accountability.

Use these principles to develop, introduce, and refine an AI program that meets your instructional, technology, policy, and community needs.

Engage your community

Implementing AI requires thoughtful planning, clear communication, and collaboration with administrators, educators, students, parents, and community members. This section provides strategies for addressing concerns and building support for AI adoption.

Additionally, it offers practical advice and strategies for:

  • Building trust and support for AI-powered tools in education
  • Understanding and addressing your community's concerns
  • Matching tools to your goals and needs
  • Establishing a shared vision with your community

Understand and address your community's concerns

As you meet with community members, you'll encounter diverse concerns, interests, and needs. Listen actively, demonstrate your AI knowledge and leadership, and build support for your initiative. The AI Toolkit provides sample concerns with suggested responses and supporting resources. These are two examples:

Role: Education leaders
Concern: School leaders may have concerns about equity and accessibility when integrating AI. Schools want to ensure that AI tools are accessible to all students, including those with disabilities, and that the tools don't worsen existing inequalities.
Sample response: We'll evaluate all AI tools to ensure fair access for students from various socio-economic backgrounds and learning needs, as exemplified by institutions like the University of Texas.

Role: Educators
Concern: Based on past experiences, educators may feel that new programs and initiatives are introduced, supported for a short time, and then forgotten. Some teachers are hesitant to adopt technology unless they're comfortable with their own skills and can support any questions or issues their students may encounter.
Sample response: We're committed to responsible AI use with age-appropriate materials, conversation starters, and iterative policy adaptation. Refer to Microsoft Learn's Equip your students with AI and tech skills for today—and tomorrow and Empower educators to explore the potential of artificial intelligence modules for self-paced learning.

Practical steps for education leaders

One of the central challenges in adopting generative AI in education is helping leaders translate guidelines and frameworks into actionable steps. The AI Toolkit addresses this by providing practical help. This example highlights what you can expect.

Need: Revise policies to address generative AI
Suggested actions: Leaders can use Copilot to review policy documentation, like Acceptable Use Policies, to incorporate language about the use of generative AI.
Resources: Rethinking Acceptable Use Policies in the Age of AI, District Administration

Policy considerations

Establishing policies creates structure and guidelines for your faculty, staff, students, and community. The AI Toolkit helps you get started. Review the entire policy section and begin with these practical suggestions:

  • Start now. Your students and staff are likely using AI already and need guidance. Read this entire section of the AI Toolkit and then create initial policies.
  • Identify key areas of need and critical questions to guide your process.
  • Establish what needs a policy and what doesn't. Focus on the largest areas of impact.
  • Learn from peers and familiarize yourself with resources like the TeachAI toolkit, developed with support from Microsoft.
  • Plan to iterate as you go. Refine policies based on feedback and evolving AI capabilities.

Organizational policy considerations

The policy process includes critical stages like creation, revision, and communication. The AI Toolkit provides several resources that help you through this cycle and includes multiple policy examples like the following to get started.

  • Academic integrity
  • Data protection and privacy
  • Staff and faculty use
  • Classroom syllabi
  • Accessibility and Universal Design for Learning (UDL)

Each of these examples includes guiding questions and sample policies from schools and higher education institutions that can serve as models.

Use Copilot to update a policy

Generative AI tools like Copilot can serve as helpful partners when updating existing policies. The AI Toolkit walks through one way that your team could use Copilot to update an academic integrity policy in just a few steps. Each step includes a sample policy followed by a brief analysis of its effectiveness.

Method: Initial policy written by humans
Policy: Presenting another person's work as your own is an act of dishonesty. This behavior undermines your integrity and contradicts the principles upheld by [our institution]. We support the belief that academic success is contingent upon the dedication you invest in your studies.
Analysis: This policy addresses only human-authored texts. Given the many ways that students can use generative AI tools, clear guidance on responsible AI use is essential to maintain academic integrity and prevent plagiarism.

Method: Revised by humans
Policy: Presenting another person's work or content created by a generative AI tool as your own is an act of dishonesty. This behavior undermines your integrity and contradicts the principles upheld by [our school]. We support the belief that academic success is contingent upon the dedication you invest in your studies. We expect you'll approach your assignments honestly, as your work reflects your capabilities.
Analysis: This policy covers generative AI by prohibiting students from presenting AI-generated content as their own, but it doesn't describe appropriate uses for AI. We recommend setting guidelines for other uses of generative AI, like revising work, seeking formative feedback, and using AI as a brainstorming partner.

Your leadership teams can use Copilot to assess existing policies, review biases, and generate simplified versions. Try these approaches with your policies at copilot.microsoft.com.

Strengthen cybersecurity and data governance

Education and technology leaders prioritize data protection and cyberattack prevention to ensure learning environments are safe, secure, and effective. Bad actors and cybercriminals target data-rich organizations like schools, universities, and ministries of education, as shown by increased attacks and new social engineering threats.

Break down data silos

Breaking down data silos is essential for maximizing AI's potential in education. Siloed data limits collaboration and the effectiveness of AI-driven solutions. A unified data strategy improves accessibility and decision-making, giving AI systems the high-quality, diverse data they need.

The toolkit provides guidance on understanding the importance of data quality and diversity. By connecting different data types into a unified system with security measures like encryption and role-based access controls, institutions can create the foundation needed for successful AI adoption.
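The role-based access controls mentioned above can be sketched in a few lines of code. This is a minimal illustration with hypothetical roles and data categories, not a Microsoft product API; real deployments would express these rules through an identity platform such as Microsoft Entra ID:

```python
# Minimal role-based access control sketch (hypothetical roles and
# data categories, for illustration only): each role maps to the set
# of data categories it may read.
ROLE_PERMISSIONS = {
    "teacher": {"grades", "attendance"},
    "counselor": {"grades", "attendance", "wellbeing"},
    "it_admin": {"audit_logs"},
}

def can_read(role: str, dataset: str) -> bool:
    """Return True if the given role is allowed to read the dataset."""
    return dataset in ROLE_PERMISSIONS.get(role, set())

print(can_read("teacher", "grades"))     # True
print(can_read("teacher", "wellbeing"))  # False: only counselors may read this
```

The key design point is that permissions are defined centrally per role rather than per user, so access rules stay auditable as the unified data system grows.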

Enhance cybersecurity

Governments are calling for increased cybersecurity protection and closer examination of security and privacy in AI systems. In response, school districts are adopting safe AI policies with support from Microsoft and educational organizations. Microsoft works closely with institutions to deploy copilots with enterprise-grade security.

The Plan section also helps you safely and securely implement generative AI tools. You'll learn how Microsoft's AI systems and A3/A5 Microsoft 365 Education plans provide security tools that give you control and protection in managing AI in your school's infrastructure. Here are some considerations from the AI Toolkit.

  • Understand the importance of a responsible AI framework
  • Identify outcomes and data sources for AI systems
  • Establish data governance, roles, and responsibilities
  • Determine data privacy procedures and safeguards
  • Develop an incident response plan to address issues that arise

Each consideration has an overview, guiding questions, resources to follow, and suggested steps that your team can take to ensure your infrastructure and IT team can support a successful AI program.

Security tools for AI implementation

Microsoft 365 Education A3 and A5 plans include applications that help monitor AI activities and data flow, supporting secure AI deployment. The toolkit highlights key tools your institution can use:

  • Microsoft Defender for Cloud: Monitor AI system usage across cloud, multicloud, or hybrid infrastructures. Understand associated risks and approve or block access by browsing a catalog of 400+ generative AI applications.
  • Microsoft Purview: Detect data security risks in Copilot through Purview's AI hub. The AI hub aggregates usage statistics and applies risk levels to over 100 of the most common AI applications. Purview also uses sensitivity label citation and inheritance for additional security.
  • Microsoft Purview eDiscovery: Identify, preserve, and collect relevant AI data and interactions for litigation, investigations, audits, and inquiries.
  • Microsoft Entra ID: Manage access to Microsoft Copilot tools and underlying data with secure authentication procedures and risk-based adaptive policies.
  • Intune for Education: Apply security, configuration, and compliance policies to devices so that school-issued endpoints have baseline protection when working with AI systems.

These tools work together to create a comprehensive security framework that protects sensitive data while enabling your institution to use AI effectively.

Data governance

Creating a strong security posture includes a well-defined data governance framework. Data governance and security are fundamentally intertwined, each reinforcing the other to safeguard the confidentiality, availability, and integrity of data. By combining effective data governance with robust security measures, your organization can defend against a wide range of cyber threats, ensuring that your data is both well-managed and highly secure. Here's a sample of cybersecurity needs to consider:

  • Cloud data consolidation
  • Data governance and privacy needs
  • Data governance in AI
  • Data privacy considerations in AI-driven education
    • Student, educator, and faculty data privacy
    • Compliance

Illustration of IT admin using Microsoft’s simplified platform to ensure data governance and compliance.

As you explore the data governance tenets, consider who your stakeholders are, identify the decision makers, and delegate responsibilities to specific technology team members to help ensure that your school data is safe and follows established policies. Share these practical tips with your leadership team.

  • Ensure the data you collect and use is the minimum needed for the task. The less data collected, the lower the risk of a harmful data breach.
  • Where possible, anonymize student data to protect student identities. This could involve removing personally identifiable information (PII) or replacing it with pseudonyms.
  • Have a plan in place for responding to data breaches. This should include steps for identifying and containing the breach, notifying affected individuals, and preventing future breaches.
  • Consider how your team could use Copilot to help establish and maintain your data governance. The AI Toolkit includes a sample prompt as a starting point.
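As a concrete illustration of the anonymization tip above, here's a minimal Python sketch that strips direct identifiers from a student record and replaces the student ID with a stable pseudonym. The field names and the keyed-hash approach are illustrative assumptions, not part of the AI Toolkit:

```python
import hashlib
import hmac

# Hypothetical secret key; in practice, store this in a secrets manager,
# never in source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(student_id: str) -> str:
    """Replace a student identifier with a stable, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()[:12]

def strip_pii(record: dict) -> dict:
    """Drop direct identifiers and pseudonymize the student ID.

    The field names here ("name", "email", "address") are assumed for
    illustration; adapt them to your own data schema.
    """
    pii_fields = {"name", "email", "address"}
    cleaned = {k: v for k, v in record.items() if k not in pii_fields}
    cleaned["student_id"] = pseudonymize(str(record["student_id"]))
    return cleaned

record = {"student_id": "12345", "name": "Ada L.", "email": "ada@example.edu", "grade": "A"}
print(strip_pii(record))  # PII fields removed; student_id replaced with a pseudonym
```

Because the pseudonym is derived with a keyed hash, the same student ID always maps to the same pseudonym (so records can still be linked for analysis), but the original ID can't be recovered without the key.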