Responsible AI in Teaching & Learning:
A Comprehensive Guide
Department of Biological Systems Engineering, UNL
Empowering educators to integrate AI ethically and responsibly
while preserving academic integrity and fostering student success.
Purpose & Core Principles
Why This Guide Matters
Artificial intelligence tools present both a significant opportunity and a serious challenge for education. They can enhance creativity, learning efficiency, and innovative teaching approaches, but they also raise complex questions about academic integrity, transparency, and equitable access.
This comprehensive guide is designed to support faculty in thoughtfully integrating AI technologies into their courses in ways that are both ethical and pedagogically sound. By following these principles, educators can help students develop critical AI literacy skills while upholding fundamental academic values.
Transparency
Clear communication about AI policies and expectations
Integrity
Maintaining academic honesty with AI assistance
Attribution
Proper documentation of AI contributions
Equity
Ensuring fair access to AI resources
Critical Thinking
Evaluating AI outputs thoughtfully
Privacy
Protecting student data and information
Transparency & Explicit Communication
Establishing clear expectations about AI use is fundamental to creating a supportive learning environment. Faculty should include a dedicated "AI Use Policy" in their syllabus that explicitly outlines permitted and prohibited AI applications.
Restrictive Approach
Best for courses focused on foundational skill development where manual practice is essential to learning outcomes.
Example: "AI tools may not be used for writing assignments in this introductory composition course, as developing your own writing process is a core learning objective."
Moderate Approach
Allows AI use with proper disclosure and substantial personal contribution, balancing innovation with accountability.
Example: "AI tools may be used for brainstorming and editing assistance, provided all use is disclosed and the final work demonstrates significant personal analysis."
Permissive Approach
Appropriate for advanced courses where critically evaluating AI outputs is itself a learning objective.
Example: "Students are encouraged to experiment with AI tools throughout the research process, with the expectation that all outputs will be critically evaluated and refined."

Example Syllabus Statement
"Students may use AI tools for brainstorming, grammar checking, and initial research and coding assistance, provided that: (1) all AI assistance is disclosed in an AI Use Statement, (2) the work demonstrates substantial personal contribution and original analysis, (3) all content is fact-checked and revised, and (4) the use of AI does not bypass essential learning objectives. Submitting AI-generated work without personal contribution constitutes academic misconduct. If you are uncertain about appropriate AI use, please consult with me."
Academic Integrity & Accountability
In the age of AI, academic integrity requires a renewed focus on personal accountability and transparent disclosure. Students must understand that while AI can be a powerful learning aid, they remain responsible for the accuracy, originality, and intellectual contribution of their work.
Key Accountability Principles
  • Students must maintain substantial personal contribution in all submitted work
  • AI should enhance, not replace, critical thinking and analysis
  • All AI-generated content must be critically evaluated and fact-checked
  • Students should be able to explain and defend all aspects of their work, regardless of AI assistance
  • Transparency about AI use builds trust and reinforces learning objectives
AI Use Statement Requirements
When AI use is permitted, require students to include an AI Use Statement with their assignments that details:
  1. Which AI tools were used (specific platforms and versions)
  2. When the tools were used (dates of access)
  3. How the tools were used (specific purposes and applications)
  4. What prompts or inputs were provided
  5. How the AI outputs were evaluated, modified, and incorporated

Example Academic Integrity Statement: "Violations may result in assignment revision, course failure, or formal academic integrity proceedings, depending on severity."
Example AI Use Statement: "I used ChatGPT-5 on October 15, 2025, to explore potential thesis statements for this essay. I evaluated the suggestions against course readings, selected ideas that aligned with my argument, and developed them into my own thesis. All research and writing in this paper represents my original work."
Citation & Attribution
Proper documentation of AI contributions is essential for academic integrity and helps students develop professional citation practices. Faculty should provide clear guidance on how to cite AI tools according to their discipline's preferred citation format.
Identify AI Use
Document which AI tools were used, including specific versions and access dates
Describe Purpose
Explain how the AI was used (brainstorming, editing, code generation, etc.)
Format Properly
Follow discipline-specific citation guidelines (APA, MLA, Chicago, etc.)
Integrate Citations
Include both in-text references and full citations in bibliography
Citation Examples
APA Format
OpenAI. (2025). ChatGPT (Version 5) [Large language model]. Retrieved August 12, 2025, from https://chat.openai.com Used for: exploring potential thesis statements.
MLA Format
"ChatGPT." Version 5, OpenAI, 12 Aug. 2025, chat.openai.com. Accessed for thesis development.
Chicago Style
OpenAI. "ChatGPT (Version 5)." Large language model. Accessed August 12, 2025. https://chat.openai.com. Used for initial research exploration.
Best Practices for Attribution
  • Be specific about which parts of the work involved AI assistance
  • Include the exact prompts used when relevant to understanding the output
  • Distinguish between AI-generated content that was used verbatim and content that was substantially modified
  • Acknowledge AI use even when it didn't directly contribute to the final product but influenced your thinking
  • When in doubt, err on the side of more disclosure rather than less
  • Remember that proper attribution demonstrates academic integrity and digital literacy

Example Citation Requirements Statement: "All AI tools must be properly cited according to the course citation style."
Equity & Accessibility
Ensuring equitable access to AI tools is crucial for creating an inclusive learning environment. Faculty must proactively address potential barriers to access and provide appropriate alternatives for students with limited resources or specific accessibility needs.
Access Challenges
  • Limited internet connectivity in rural or underserved areas
  • Financial constraints affecting device availability
  • Technical literacy variations among student populations
  • Accessibility barriers for students with disabilities
  • Privacy concerns related to personal data sharing
Proactive Solutions
  • Prioritize university-approved AI tools with accessibility features
  • Provide on-campus access options through computer labs
  • Create alternative assignment pathways that don't require AI
  • Offer technical support and training resources
  • Use anonymous surveys to identify potential barriers
Inclusive Design
  • Design assignments that can be completed with or without AI
  • Provide clear instructions for various access scenarios
  • Ensure AI literacy instruction is accessible to all students
  • Consider cultural contexts that may affect AI interaction
  • Regularly evaluate and address emerging equity issues
Faculty should recognize that AI tools may perpetuate existing inequities if access issues aren't addressed thoughtfully. By planning for accessibility from the outset and providing flexible options, instructors can ensure that all students benefit from AI integration regardless of their personal circumstances or technical resources.
"Equity in AI education isn't just about providing access; it's about ensuring that all students can meaningfully participate in and benefit from AI-enhanced learning experiences, regardless of their background or resources."
Critical Engagement & AI Literacy
Developing students' ability to critically evaluate AI outputs is essential for responsible AI use. Faculty should integrate AI literacy instruction that helps students understand the limitations, biases, and ethical implications of AI technologies.
Key AI Literacy Skills
Prompt Engineering
Crafting effective queries to elicit useful AI responses while understanding how prompt phrasing affects outputs
Output Evaluation
Critically assessing AI-generated content for accuracy, relevance, bias, and completeness
Bias Recognition
Identifying and addressing algorithmic biases that may perpetuate stereotypes or exclude diverse perspectives
Ethical Reasoning
Considering the ethical implications of AI use in various contexts and making responsible decisions
Guiding Questions for Critical Evaluation
  • Assumptions: What assumptions does the AI appear to make in its response?
  • Gaps: Are there notable gaps in coverage or representation in the AI output?
  • Refinement: How could the prompt be modified to generate a more accurate or comprehensive output?
  • Ethics: What ethical concerns arise from using AI in this specific context?
  • Sources: How can the information provided by the AI be verified against reliable sources?
  • Bias: Does the AI output reflect particular cultural, political, or social biases?
  • Limitations: What are the boundaries of the AI's knowledge or capabilities in this domain?

Classroom Activity Idea
Have students compare AI responses to the same prompt across different platforms or with different parameter settings. Ask them to analyze variations in the outputs and discuss what these differences reveal about how AI systems work and their inherent limitations.
By integrating critical AI literacy into course activities, faculty can help students develop the discernment needed to use AI tools effectively while maintaining intellectual independence. These skills will serve students well beyond the classroom as AI continues to transform professional and civic life.
Privacy & Data Protection
Protecting student privacy when using AI tools requires thoughtful consideration of data security, consent, and confidentiality. Faculty should prioritize university-approved AI platforms with transparent data policies and avoid requiring students to input personal information into external AI systems.
Example Privacy Protection Statement
"Avoid inputting personal, sensitive, or confidential information into AI systems. Be aware that some AI platforms may store and analyze your inputs."
Privacy Considerations
  • Data Retention: Many AI platforms store user inputs and outputs for model training and improvement
  • Intellectual Property: Unclear ownership of AI-generated content may create copyright concerns
  • Confidentiality: Sensitive course content shared with AI may become accessible to third parties
  • Student Consent: Students should understand and consent to how their data will be used
  • Institutional Policies: Faculty must adhere to university guidelines on data protection
Best Practices
  • Use university-sanctioned AI tools with established privacy agreements when available
  • Provide clear guidelines about what types of information should never be shared with AI systems
  • Offer alternative assignment options that don't require external AI tools
  • Educate students about privacy implications of different AI platforms
  • Consider using anonymized or fictional data for AI-based assignments
  • Stay informed about evolving privacy regulations affecting educational technology
Faculty should regularly review the terms of service and privacy policies of AI tools they recommend or require for coursework. When possible, provide students with guidance on how to use these tools while minimizing privacy risks, such as using institutional accounts rather than personal ones or working with synthetic rather than real data.

Remember that FERPA (Family Educational Rights and Privacy Act) protections apply to student educational records, which may include certain interactions with AI tools in educational contexts. Consult with your institution's privacy office if you have questions about specific applications.
Strategic Implementation
Aligning AI with Learning Goals
AI tools should enhance, not replace, essential learning objectives. Faculty must carefully consider how AI integration supports or potentially undermines the core skills and knowledge students need to develop in their courses.
Identify Core Skills
Determine which skills must be developed through direct practice versus those that can be augmented by AI
Design Appropriate Assessments
Create assignments that measure student understanding even when AI assistance is permitted
Balance AI and Human Work
Structure activities to leverage AI strengths while preserving opportunities for authentic learning
When designing assessments, consider whether the focus should be on the final product (which might be AI-enhanced) or on the process of creating it (which demonstrates student learning). In many cases, a combination approach works best, with some elements requiring independent work and others allowing AI collaboration.
Continuous Review & Adaptation
The rapidly evolving nature of AI technologies requires regular policy updates and pedagogical adjustments. Faculty should approach AI integration as an iterative process informed by student feedback, technological developments, and emerging best practices.
  • Regular Policy Reviews: Update AI guidelines at least once per semester to address new tools and capabilities
  • Student Feedback: Collect input on how AI policies are affecting the learning experience
  • Peer Collaboration: Share experiences and strategies with colleagues across disciplines
  • Professional Development: Participate in workshops and training on AI in education
  • Experimentation: Test new approaches in low-stakes assignments before full implementation

Consider conducting mid-semester check-ins with students about AI use in your course. Ask: "How is the current AI policy supporting or hindering your learning? What adjustments would help you better achieve the course objectives?"
By maintaining a flexible, learning-centered approach to AI integration, faculty can navigate the challenges of this technological transition while preserving educational quality and academic integrity. Remember that the goal is not to resist technological change but to harness it thoughtfully in service of authentic learning and student development.
Resources & Further Information
Explore these institutional resources to deepen your understanding of responsible AI integration in teaching and learning. These curated materials provide additional guidance, examples, and support for developing effective AI policies and pedagogical approaches.
UNL Center for Transformative Teaching
Offers consultations, workshops, and customized support for faculty integrating AI into their courses.
UNL Libraries
Provides comprehensive guides on citing AI and other electronic sources across different citation styles.
Additional Institutional Models
Ohio State University
Comprehensive framework for crafting student AI use policies with examples across disciplines.
University of Michigan
Detailed guidance on course policies with sample syllabus language and implementation strategies.
University of Wisconsin-Madison
Guiding principles for generative AI in teaching with practical applications and case studies.
Remember that responsible AI integration is an evolving practice. These resources provide a foundation, but the most effective approaches will be those that you adapt thoughtfully to your specific disciplinary context, student needs, and learning objectives.
Meet the Responsible AI Team
Mark Stone
Professor
Department Head
Derek Heeren
Professor
Associate Head for Academic Programs
Jennifer Keshwani
Associate Professor
Associate Head for Extension & Engagement
Santosh Pitla
Professor
Associate Head for Research & Innovation
AI Use Disclosure & Version History

v1.1 (August 19, 2025) - Current Version
The 'Meet the Responsible AI Team' section was added.
Claude (Anthropic) was used for copy editing. All substantive content remains unchanged.
Updated by: Asa Stone
v1.0 (August 14, 2025) - Initial Release
The framework was curated and this web-based resource was created.
No AI assistance was used for this version.
Created by: Santosh Pitla
v0.0 (August 12, 2025) - Framework Development
Initial content was synthesized and the framework was developed.
Claude (Anthropic) was used to synthesize best practices from peer institutions. All AI-generated content was critically evaluated, substantially revised, and validated against current higher education standards. The final document reflects significant human expertise, judgment, and original contribution beyond the AI assistance.
Developed by: Asa Stone
Note on Development Process: AI assistance was utilized to efficiently and effectively compile and synthesize emerging best practices from multiple institutions, allowing the author to focus on contextualizing and adapting these practices for UNL BSE's specific needs and values.