
The DoE AI Directive – A Must-Read for Product Leaders

Published on: September 20, 2024 | Updated on: September 23, 2024 | Reading Time: 7 mins
Authored By:

Dipesh Jain

VP - Sales & Marketing

As Artificial Intelligence (AI) continues to evolve, its integration into educational products and services is becoming increasingly prevalent. The US Department of Education (DoE) not only acknowledges this trend but actively supports it, as shown by its 2023 report, Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, which outlined the risks and opportunities associated with incorporating AI in education.

Building on insights from that report, the DoE released a guide this year titled Designing for Education with Artificial Intelligence: An Essential Guide for Developers. The guide is designed to assist product leads and their teams, including innovators, designers, developers, customer-facing staff, and legal experts, in prioritizing safety, security, and trust while developing AI tools for education. It begins by clearly defining fundamental concepts like AI and edtech before addressing more advanced topics such as the development of large language models (LLMs).


Defining “Artificial Intelligence” and “EdTech”

According to the DoE guide, Artificial Intelligence (AI) refers to machine-based systems capable of making predictions, recommendations, or decisions that affect real or virtual environments based on human-defined objectives. These systems use both machine and human inputs to perceive environments, abstract those perceptions into models through automated analysis, and use the models to generate options for information or action.
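
To make that definition concrete, here is a minimal sketch (all names hypothetical) of the loop the guide describes: a system perceives its environment, abstracts those perceptions into a model, and uses the model to generate an option for action.

```python
# A minimal, hypothetical sketch of the DoE's definition: a machine-based
# system that perceives an environment (student quiz scores), abstracts
# those perceptions into a model (a running mastery estimate), and uses
# the model to recommend an action. All names here are illustrative.

from dataclasses import dataclass, field

@dataclass
class MasteryModel:
    """Abstracts raw scores into a single mastery estimate (0.0-1.0)."""
    scores: list[float] = field(default_factory=list)

    def perceive(self, score: float) -> None:
        self.scores.append(score)

    def estimate(self) -> float:
        # Simple average; a real system would use a learned model.
        return sum(self.scores) / len(self.scores) if self.scores else 0.0

def recommend(model: MasteryModel) -> str:
    """Generates an option for action from the model's output."""
    mastery = model.estimate()
    if mastery < 0.6:
        return "assign review exercises"
    return "advance to the next unit"

model = MasteryModel()
for score in (0.4, 0.5, 0.7):  # observed quiz results
    model.perceive(score)
print(recommend(model))  # -> "assign review exercises" (mean is about 0.53)
```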

Based on the same guidelines, edtech encompasses technologies specifically designed for educational purposes as well as general technologies commonly used within educational settings.

 

5 Recommendations for Designing for Education with AI

As an edtech company, you are responsible for developing strategic partnerships with public officials to increase student and teacher access to high-quality instructional materials and assessments. The DoE's guidelines cover everything you need to know, from using AI in designing for teaching and learning to applying an AI risk management framework.

1. Designing for Teaching and Learning

Designing for education starts with understanding the specific values and needs of educational environments. Developers should focus on key educational priorities such as enhancing reading, science, math, and computer science education. They should center the human element by integrating feedback from educators and students throughout the product development process. This ensures that student needs are comprehensively addressed and products are tailored to support effective teaching and learning.

AI holds significant potential to improve academic outcomes; it can also streamline school operations and reduce the costs associated with customizing learning resources. Developers should aim to use AI to meet these diverse educational needs while ensuring that their products promote equity for all students.

To achieve this, providers must deepen their understanding of historical disparities in learning opportunities and actively engage with educational communities throughout the product life cycle. Collaboration with ethics experts is crucial to address ethical concerns, while involving those directly impacted by design choices helps maintain relevance and effectiveness.

Additionally, developers should be mindful of AI's limitations, avoid over-reliance on technology, and consider how AI systems influence instructional decisions in order to build trustworthy and equitable educational tools.

2. Providing Evidence for Rationale and Impact

Proving the effectiveness of edtech products is crucial for deciding which ones to adopt, especially when aiming to improve student outcomes. The Elementary and Secondary Education Act (ESEA) and educational leaders stress the importance of demonstrating that products genuinely enhance student performance. The Every Student Succeeds Act (ESSA), which updated the ESEA in 2015, encourages schools to choose products backed by solid research or a strong rationale for effectiveness.

To meet these requirements, providers should understand how potential customers use evidence to make decisions and should clearly define what success looks like, going beyond basic procurement needs. Partnering with researchers early on can help integrate current learning principles into the product. It's important to work with educators and users on field testing throughout the product's development and to involve those directly impacted in gathering and interpreting evidence.

Edtech providers should also collect data on safety, security, and trust. All findings should be documented transparently, updated regularly, and made publicly available to support informed choices and build trust.
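
As one illustration of defining success beyond procurement needs, the sketch below (with hypothetical pilot data) computes a standardized effect size, Cohen's d, comparing score gains between classrooms that used a product and those that did not.

```python
# A hypothetical sketch of one evidence artifact: a standardized effect
# size (Cohen's d) comparing assessment gains for pilot classrooms that
# used a product against comparison classrooms that did not.
# The score lists are illustrative, not real study data.

from statistics import mean, stdev

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

users     = [12.1, 9.8, 14.3, 11.0, 13.5]   # score gains, product users
non_users = [8.9, 10.2, 7.5, 9.1, 8.4]      # score gains, comparison group

print(f"Cohen's d = {cohens_d(users, non_users):.2f}")
```

An effect size alone is not ESSA-tier evidence; study design, sample size, and peer review still matter. But it is the kind of transparent, repeatable metric worth documenting and publishing.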


3. Advancing Equity and Protecting Civil Rights

Advancing equity and protecting civil rights is a vital commitment for both the department and the administration. Edtech providers need to be attentive to issues such as representation and bias in data sets, algorithmic discrimination, and accessibility for people with disabilities. Algorithmic discrimination can lead to unequal access to learning opportunities, resources, and outcomes. The National Institute of Standards and Technology (NIST) identifies three types of AI bias (systemic, computational, and human) that can occur even without discriminatory intent.

Civil rights laws, enforced by the Department's Office for Civil Rights, apply to educational settings regardless of AI involvement, and developers must ensure their products comply with these regulations. AI training data should be curated to reduce bias and to reflect diverse user needs. While AI has the potential to improve inclusion and accessibility, achieving digital equity requires addressing gaps in design, usage, and access.

To meet these goals, providers should integrate equity and civil rights into their organizational culture, from training data to UI/UX design choices. Setting up review processes and checklists ensures a broad view of how products perform across user groups. Building feedback loops with organizations and experts supports fair design. Staying informed about standards on racism and algorithmic discrimination is essential, and regular third-party reviews can help eliminate bias from databases, algorithms, and design elements to ensure fair user experiences for all.
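
One practical review-process artifact, sketched below with illustrative data, is a disaggregated evaluation that reports a model's accuracy separately for each student subgroup so performance gaps surface before release.

```python
# A hypothetical sketch of a bias-review checklist item: disaggregating a
# model's accuracy by student subgroup so performance gaps are visible.
# Group labels and predictions here are illustrative only.

from collections import defaultdict

def accuracy_by_group(records: list[dict]) -> dict[str, float]:
    """Returns per-subgroup accuracy for labeled evaluation records."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        if r["predicted"] == r["actual"]:
            correct[r["group"]] += 1
    return {g: correct[g] / total[g] for g in total}

evaluation = [
    {"group": "multilingual learners", "predicted": 1, "actual": 1},
    {"group": "multilingual learners", "predicted": 0, "actual": 1},
    {"group": "students with IEPs",    "predicted": 1, "actual": 1},
    {"group": "general population",    "predicted": 1, "actual": 1},
    {"group": "general population",    "predicted": 0, "actual": 0},
]

for group, acc in accuracy_by_group(evaluation).items():
    print(f"{group}: {acc:.0%}")  # flag any group far below the rest
```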

4. Ensuring Safety and Security

Privacy and data security are well-established in edtech, with clear guidelines and guardrails. The Executive Order on AI and other Administration guidance stress their importance. Educational decision-makers are specifying their data privacy and security needs, including concerns about civil liberties in the AI era. To engage responsibly, edtech providers must clearly outline how they will protect the safety and security of AI users.

Developers need to be familiar with federal laws on privacy and data security, such as FERPA, PPRA, COPPA, and CIPA. School technology leaders emphasize these aspects when managing educational technology. Besides privacy and cybersecurity, AI developers must address broader risks. The NIST AI Risk Management Framework offers a structured way to identify, prioritize, and manage these risks continuously.
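
As a minimal sketch of how such a framework can be operationalized, the hypothetical risk register below loosely mirrors the NIST AI RMF's four functions (Govern, Map, Measure, Manage); it is an illustrative structure, not an official NIST schema.

```python
# A hypothetical, minimal risk-register entry loosely organized around the
# NIST AI RMF's four functions (Govern, Map, Measure, Manage). This is an
# illustrative structure, not an official NIST artifact.

from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    risk: str     # Map: identified risk in the context of use
    measure: str  # Measure: how the risk is quantified or tested
    manage: str   # Manage: mitigation and monitoring plan
    owner: str    # Govern: accountable role

register = [
    AIRiskEntry(
        risk="Tutoring model gives inaccurate feedback to struggling readers",
        measure="Monthly accuracy audit, disaggregated by reading level",
        manage="Human-review fallback when model confidence is low",
        owner="Head of Learning Science",
    ),
    AIRiskEntry(
        risk="Student chat logs retained longer than district contract allows",
        measure="Automated retention scan of data stores",
        manage="30-day deletion job plus quarterly third-party audit",
        owner="Data Protection Officer",
    ),
]

for entry in register:
    print(f"[{entry.owner}] {entry.risk} -> {entry.manage}")
```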

Developers should provide clear, simple explanations of how they protect student data. They should enhance accountability through regular audits and feedback from users, especially vulnerable populations. Collaborating with other companies can help set shared standards for managing risks in AI educational products, covering both internal and supply chain risks. Developers must stay updated on public and regulatory perceptions of risk, respond to new concerns, and keep track of evolving AI laws at all levels, balancing state and federal policies.

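One such protection worth explaining in plain language is redacting direct identifiers from student text before it is logged or sent to any external model. The sketch below uses deliberately simple, hypothetical patterns that a real deployment would need to go well beyond.

```python
# A hypothetical sketch of one data-protection control: stripping direct
# identifiers from student free-text before it is logged or sent to an
# external model. Real deployments need far more robust PII detection;
# the regex patterns below are illustrative only.

import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US phone numbers
    (re.compile(r"\b\d{9}\b"), "[STUDENT_ID]"),                     # 9-digit ID (assumed format)
]

def redact(text: str) -> str:
    """Replaces matched identifiers with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

print(redact("Email me at jo.smith@school.org or call 555-123-4567."))
# -> "Email me at [EMAIL] or call [PHONE]."
```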

5. Promoting Transparency and Earning Trust

Promoting transparency and earning trust is a crucial goal that extends beyond just technical output. Trust is built through transparency and public commitments, cultivating mutual confidence between technology suppliers and users. Engaging collaboratively with developers, educators, and other stakeholders is essential for building this trust.

Transparency is key to establishing trust. According to the NIST AI Risk Management Framework, trustworthy systems rely on mutual confidence between creators of AI-enabled educational tools and their users. Enhancing AI literacy among educators, parents, students, and other stakeholders can build trust. Without this literacy, assurances about AI might not be convincing.

Developers should highlight their commitment to both responsibility and innovation in their marketing efforts. They should openly share their commitments and disclosures, and emphasize ongoing, two-way communication with educators during product development. Supporting AI literacy within the ecosystem is crucial. Developers should also consider publicly describing their products’ trustworthy system architecture, focusing on achievable traits like interpretability, even as more complex aspects like explainability are still being developed.
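
A common vehicle for that kind of public description is a model-card-style disclosure. The sketch below is a hypothetical minimal example; the field names and values are illustrative, not a standard schema.

```python
# A hypothetical, minimal "model card"-style disclosure for an AI-enabled
# educational feature, serialized for publication. Field names follow the
# general model-card idea; all values are illustrative only.

import json

model_card = {
    "feature": "Essay feedback assistant",
    "intended_use": "Formative writing feedback for grades 6-12",
    "out_of_scope": ["Summative grading", "High-stakes placement decisions"],
    "training_data_summary": "Licensed student essays, de-identified",
    "known_limitations": [
        "Lower feedback quality on non-narrative genres",
        "Explainability of individual suggestions is limited",
    ],
    "human_oversight": "Teachers review and may override all feedback",
    "evaluation": "Quarterly disaggregated accuracy audit, published",
}

print(json.dumps(model_card, indent=2))
```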

Earning public trust is crucial as new AI applications are introduced in education. A robust edtech ecosystem built on mutual trust among technology providers, evaluators, recommenders, and users is the way to move forward. The 2023 AI Report uses an e-bike analogy to illustrate this: just as cyclists control their direction and pace while benefiting from the e-bike’s assistance, educators and students should retain control while AI enhances their teaching and learning experiences. Developers must, therefore, ensure that AI-enabled educational systems prioritize safety, security, and trustworthiness, akin to how e-bike manufacturers must ensure rider safety and gain public trust.

 

Written By:

Dipesh Jain

VP - Sales & Marketing

Dipesh is an experienced revenue professional with a knack for Sales, Marketing, and Presales leadership. But he's more than just a title – he's the driving force behind growth, fueled by his commitment to putting customers first. His expertise isn't just in numbers; it's in building meaningful connections and solving real challenges across K-12. Whether it's product growth, learner and teacher engagement, or relationship management, he's your go-to person for making genuine connections and driving success.

FAQs

How can edtech companies balance AI innovation with regulatory compliance?

Balancing AI innovation with regulatory compliance requires a proactive approach. Start by establishing a cross-functional team that includes legal experts, developers, and educators. This team should stay updated on evolving AI regulations and interpret how they apply to your products. Implement a compliance-by-design approach, where regulatory requirements are integrated into the product development process from the outset. Regularly conduct internal audits and risk assessments to identify potential compliance issues early.

How can providers demonstrate the long-term impact of AI-enhanced products on student outcomes?

Demonstrating long-term impact requires a comprehensive approach to data collection and analysis. Design longitudinal studies that track student performance over several years, comparing outcomes between users and non-users of your AI-enhanced products. Partner with educational researchers and institutions to conduct rigorous, peer-reviewed studies on the effectiveness of your tools. Implement robust analytics within your products to capture detailed usage data and correlate it with academic performance metrics.

How should AI-powered edtech products be adapted for different regions and curricula?

Adapt to local curriculum standards by ensuring your AI can be trained on or easily configured for different educational frameworks. Consider cultural nuances in AI responses, accounting for diverse perspectives and avoiding culturally insensitive content. Ensure compliance with region-specific data protection laws, which may require adjustments to data collection and storage practices. Develop localization strategies that go beyond simple translation, incorporating local educational practices and values. Build partnerships with local educational experts to gain insights into specific market needs and challenges.

How can developers future-proof their products as AI technology evolves?

Design modular architectures that allow for easy updates and integration of new AI capabilities without overhauling the entire system. Stay informed on AI trends through continuous research and participation in industry conferences. Maintain flexibility in product roadmaps to incorporate emerging technologies, leaving room for pivots based on technological advancements.
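
One way to achieve that modularity, sketched below with hypothetical names, is to hide each AI capability behind a stable interface so a newer model can be swapped in without touching the rest of the product.

```python
# A hypothetical sketch of a modular design: product code depends on a
# stable interface (Tutor), so an AI capability can be swapped or upgraded
# without overhauling the system. Class and method names are illustrative.

from typing import Protocol

class Tutor(Protocol):
    def hint(self, problem: str) -> str: ...

class RuleBasedTutor:
    """Original capability: a canned hint, no model call."""
    def hint(self, problem: str) -> str:
        return "Re-read the problem and identify what is being asked."

class LLMTutor:
    """Newer capability behind the same interface (model call stubbed)."""
    def __init__(self, model_name: str) -> None:
        self.model_name = model_name
    def hint(self, problem: str) -> str:
        return f"[{self.model_name}] Try breaking '{problem}' into steps."

def practice_session(tutor: Tutor, problem: str) -> None:
    # Product logic never changes when the tutor implementation does.
    print(tutor.hint(problem))

practice_session(RuleBasedTutor(), "Solve 3x + 4 = 19")
practice_session(LLMTutor("hypothetical-model-v2"), "Solve 3x + 4 = 19")
```

Keeping the interface stable also keeps commitments to schools stable: districts evaluate product behavior while the underlying model can be upgraded behind the same contract.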
