FTC Issues New AI Safety Guidance for Educational Tools
- Nik Reeves-McLaren
- Sep 16, 2025
- 4 min read
The US Federal Trade Commission (FTC) has released comprehensive guidance requiring that AI systems, particularly those used in educational settings, incorporate robust safety measures, transparent risk disclosures, and effective parental controls. The guidance has immediate implications for universities procuring AI tools and platforms for research and teaching.
Key Requirements for AI Providers
The FTC guidance centres on ensuring that companies developing AI systems, especially chatbots and interactive tools, implement systematic approaches to user safety. Companies must now document how they evaluate and test for potential harms, particularly those affecting minors, including exposure to self-harm content and predatory interactions.
Mandatory Safety Measures:
Comprehensive harm assessment: Regular testing for content that could promote self-harm, eating disorders, or dangerous behaviours
Predatory content detection: Systems to identify and prevent grooming or exploitation attempts
Transparent risk disclosure: Clear communication about AI limitations and potential risks
Effective content moderation: Human oversight of AI-generated responses in educational contexts (a minimal escalation pattern is sketched after this list)
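
To make the human-oversight requirement concrete, here is a minimal sketch in Python of how an educational chatbot might route AI-generated responses through a harm check, with escalation to a staff moderator. The phrase lists and category names are illustrative assumptions, not anything specified by the FTC; a production system would use a trained safety classifier rather than keyword matching.

```python
# Minimal sketch of human-in-the-loop moderation for an educational chatbot.
# The phrase lists below are placeholder assumptions; a production system
# would use a trained safety classifier rather than keyword matching.

HARM_CATEGORIES = {
    "self_harm": ["hurt myself", "end my life"],
    "eating_disorder": ["stop eating", "purge"],
    "grooming": ["keep this secret", "don't tell your parents"],
}

def flag_response(text: str) -> list[str]:
    """Return the harm categories a response appears to touch."""
    lowered = text.lower()
    return [
        category
        for category, phrases in HARM_CATEGORIES.items()
        if any(phrase in lowered for phrase in phrases)
    ]

def deliver(response: str, escalate) -> str:
    """Release a response to the student, or hold it for human review."""
    flags = flag_response(response)
    if flags:
        escalate(response, flags)  # hand off to a human moderator
        return "This response is being reviewed by a member of staff."
    return response

# Example: a flagged response is held rather than shown to the student.
print(deliver(
    "Remember to keep this secret from your parents.",
    escalate=lambda text, flags: print(f"Escalated {flags}: {text!r}"),
))
```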
Impact on University AI Procurement
Although the FTC's jurisdiction is limited to the US, UK universities are likely to face indirect pressure to meet these standards. Many AI tools used in British higher education are developed by US companies or operate internationally, making FTC compliance a practical requirement for market access.
Procurement Implications:
Due diligence requirements: Universities may need to verify that AI vendors meet FTC safety standards (a checklist sketch follows this list)
Risk assessment documentation: Institutions should request evidence of harm testing and mitigation strategies
Student protection policies: Enhanced responsibility for ensuring AI tools used on campus meet child safety requirements
Vendor transparency: Clearer documentation requirements for AI tool capabilities and limitations
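
As a sketch of what that due diligence might look like in practice, the snippet below models a vendor safety dossier as a simple checklist and flags gaps before procurement sign-off. The field names are assumptions about what an institution might request; the FTC guidance does not prescribe this format.

```python
from dataclasses import dataclass, fields

# Hypothetical dossier of safety evidence a university might request from
# an AI vendor; the field names are illustrative, not FTC-mandated.
@dataclass
class VendorSafetyDossier:
    harm_testing_report: bool = False         # self-harm/eating-disorder testing evidence
    predatory_content_controls: bool = False  # grooming/exploitation detection measures
    risk_disclosure_statement: bool = False   # documented limitations and risks
    moderation_process: bool = False          # human oversight of generated content

def missing_evidence(dossier: VendorSafetyDossier) -> list[str]:
    """Return the checklist items the vendor has not yet evidenced."""
    return [f.name for f in fields(dossier) if not getattr(dossier, f.name)]

# Example: a vendor that has supplied harm-testing evidence but nothing else.
dossier = VendorSafetyDossier(harm_testing_report=True)
if gaps := missing_evidence(dossier):
    print("Hold procurement; request:", ", ".join(gaps))
```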
Specific Concerns for Educational AI
The guidance specifically addresses chatbots and interactive AI systems, which are increasingly common in educational settings. Universities using AI for student support, research assistance, or administrative functions must consider how these tools might inadvertently expose users to harmful content or experiences.
Areas of particular concern:
Mental health support bots: AI systems providing wellbeing advice must include appropriate safeguards and human escalation paths
Research assistance tools: AI that helps with literature reviews or data analysis should clearly communicate limitations to prevent over-reliance
Student-facing applications: Any AI system accessible to students requires enhanced safety measures and transparent risk communication (illustrated in the sketch below)
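
One lightweight way to deliver transparent risk communication in student-facing applications is to attach a standing limitations notice to every AI-generated answer. The wrapper and notice text below are a sketch of the idea, with wording invented for illustration, not prescribed by any regulator.

```python
# Sketch: prepend a standing limitations notice to student-facing AI output.
# The notice wording is invented for illustration; an institution would
# draft its own text and review it with legal and wellbeing teams.

LIMITATIONS_NOTICE = (
    "AI-generated content: may contain errors or bias. Verify claims against "
    "primary sources, and contact staff for wellbeing support."
)

def disclose(ai_answer: str) -> str:
    """Wrap an AI answer in the institution's standing risk disclosure."""
    return f"[{LIMITATIONS_NOTICE}]\n\n{ai_answer}"

print(disclose("Based on the sources provided, the main cause appears to be X."))
```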
Research Tool Implications
Many research-focused AI tools will need to demonstrate compliance with FTC guidance, particularly those that:
Process personal or sensitive research data
Interact with research participants
Provide analysis that could influence research conclusions
Are accessible to student researchers
For Research Groups: Universities should review existing AI tool usage to ensure vendors provide adequate safety documentation and risk assessments. This includes tools for literature review, data analysis, and research writing assistance.
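
A minimal sketch of such a review, assuming the institution keeps a register of AI tools in use (hard-coded here, with invented tool names, in place of a real inventory system): flag every entry with no vendor safety documentation on file.

```python
# Sketch of an AI tool usage review: flag tools in the institutional
# register that lack vendor safety documentation. The register and tool
# names below are invented stand-ins for a real inventory system.

tool_register = [
    {"name": "LitReviewBot", "use": "literature review", "safety_docs_on_file": True},
    {"name": "DataSenseAI",  "use": "data analysis",     "safety_docs_on_file": False},
    {"name": "DraftAssist",  "use": "research writing",  "safety_docs_on_file": False},
]

for tool in tool_register:
    if not tool["safety_docs_on_file"]:
        print(f"Follow up with vendor of {tool['name']} ({tool['use']}): "
              "request harm testing and risk assessment documentation")
```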
International Regulatory Alignment
While the FTC guidance applies specifically to US markets, it reflects a broader international trend towards AI safety regulation. The European Union's AI Act and the UK government's AI principles share similar concerns about transparency and harm prevention.
Global Implications:
Regulatory convergence: Safety requirements are becoming more standardised across jurisdictions
Vendor compliance costs: AI companies face pressure to implement safety measures globally rather than build market-specific solutions
Institutional policies: Universities worldwide may adopt FTC-aligned standards as best practice
Practical Steps for Universities
Immediate Actions:
Audit current AI tools: Review existing platforms for compliance with safety and transparency requirements
Vendor communication: Request documentation of harm testing and safety measures from AI tool providers
Policy updates: Ensure institutional AI policies address student protection and risk disclosure requirements
Staff training: Educate faculty and staff about AI tool limitations and appropriate use cases
Long-term Planning:
Procurement frameworks: Develop standardised safety requirements for future AI tool acquisitions
Risk management: Implement systematic approaches to evaluating AI tool safety before deployment (one possible scoring gate is sketched after this list)
Student education: Provide clear guidance about AI tool capabilities and limitations
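
As one possible systematic approach, the sketch below scores a proposed tool against a handful of weighted risk factors and withholds approval above a threshold. The factors, weights, and threshold are all illustrative assumptions; a real framework would set these through institutional policy.

```python
# Sketch of a pre-deployment risk gate: weighted risk factors and a
# pass/fail threshold. Factors, weights, and threshold are illustrative
# assumptions to be replaced by institutional policy.

RISK_WEIGHTS = {
    "student_facing": 3,          # accessible to students, possibly minors
    "handles_personal_data": 3,   # processes personal or sensitive data
    "gives_wellbeing_advice": 4,  # mental health or wellbeing guidance
    "no_human_oversight": 2,      # responses reach users unreviewed
}

def risk_score(profile: dict[str, bool]) -> int:
    """Sum the weights of every risk factor the tool exhibits."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if profile.get(factor))

def approved_for_deployment(profile: dict[str, bool], threshold: int = 5) -> bool:
    """Approve only tools scoring below the institutional risk threshold."""
    return risk_score(profile) < threshold

# Example: a student-facing wellbeing bot scores 7 and is held for review.
profile = {"student_facing": True, "gives_wellbeing_advice": True}
print(risk_score(profile), approved_for_deployment(profile))  # 7 False
```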
Implications for Student Research
The guidance emphasises protecting minors, but universities should extend similar protections to all students using AI research tools. This includes:
Clear communication about AI limitations and potential biases in research contexts
Appropriate supervision when students use AI for data collection or analysis
Risk assessment for AI tools that might influence research outcomes or academic decision-making
Vendor Response and Market Changes
AI companies are likely to respond by enhancing safety documentation and implementing more robust content moderation systems. Universities may see:
Improved transparency about AI training data and potential biases
Enhanced safety features in educational AI platforms
Clearer documentation of appropriate use cases and limitations
Better integration with institutional oversight systems
Recommendations for Academic Leaders
Policy Development: Update institutional AI policies to reflect safety and transparency requirements comparable to FTC guidance.
Vendor Relations: Establish clear expectations for AI tool safety documentation and ongoing monitoring.
Risk Management: Implement systematic approaches to evaluating AI tool safety before campus deployment.
Student Protection: Ensure AI tools accessible to students include appropriate safeguards and clear limitation disclosures.
The FTC guidance represents a significant step towards systematic AI safety regulation in educational contexts. While compliance costs may increase, the framework provides universities with clearer standards for evaluating and implementing AI tools responsibly.
Academic institutions that proactively align with these safety principles will be better positioned to leverage AI capabilities whilst protecting their communities from potential harms.