Cultivating Critical AI Literacy and Trustworthy Learning Practices in Online and Blended Courses
Introduction
As artificial intelligence tools become increasingly embedded in education, students and educators are no longer just users of technology; they are participants in systems that shape knowledge production, decision-making, authorship, and truth. In this context, developing critical AI literacy is not optional but essential. Learners must understand not only how AI systems function, but also the social, ethical, and epistemological implications of relying on them in academic spaces. In online and blended learning environments, where technology already mediates interaction, assessment, and content delivery, the presence of AI introduces new questions about credibility, authorship, bias, transparency, and trust. Cultivating critical AI literacy, therefore, becomes fundamental to sustaining meaningful learning, protecting academic integrity, and maintaining trust between learners, educators, and institutions. This reflection explores how pedagogically responsible AI integration can be supported through intentional design, strong social presence, and community trust.
Understanding AI Beyond the Tool: From Usage to Literacy
Many students approach AI as a productivity shortcut: drafting, summarizing, generating, or translating content. However, critical AI literacy requires moving beyond functional use to informed awareness and evaluation. Siau and Wang (2020) emphasize that artificial intelligence does not simply execute commands; it interprets patterns from data shaped by human decisions, historical biases, and power structures. Using AI uncritically therefore risks reinforcing misinformation, inequality, and epistemic injustice.
In online learning spaces where students increasingly rely on AI to support writing, coding, research, and even reflection, educators must guide learners to ask deeper questions, such as:
- Who created this system, and what data trained it?
- Whose perspectives are missing or underrepresented?
- What assumptions does the algorithm reproduce?
- How might this output mislead, manipulate, or oversimplify knowledge?
Through explicit instruction and classroom dialogue, students can begin to see AI not as an omniscient source, but as a constructed, fallible, and contestable tool (Siau & Wang, 2020).
Developing Critical AI Literacy
Critical AI literacy extends beyond the ability to use AI tools; it involves understanding how these systems function, questioning their outputs, and recognizing their limitations. Students must learn to evaluate AI-generated content, identify bias, verify claims, and understand that AI systems are trained on historical data that often reflects social inequalities and epistemological biases (Siau & Wang, 2020).
Rather than positioning AI as an unquestionable authority, instructors should reframe it as a probabilistic support system that requires human judgment. They can foster this mindset by designing tasks that require students to compare AI outputs with peer-reviewed sources, justify why certain outputs are trustworthy or problematic, and reflect on how prompts shape the responses generated (a small demonstration follows the list below). These practices support metacognitive awareness and reinforce the idea that critical thinking, not convenience, remains at the heart of learning (Siau & Wang, 2020).
This directly addresses the need for students to:
- Evaluate outputs rather than accept them as fact
- Identify bias and cultural framing in AI responses
- Fact-check information with credible sources
- Understand how training data shapes responses
- Recognize AI’s lack of contextual and ethical reasoning
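To make the prompt-shaping point concrete, students can send the same question to a model under different framings and compare the answers side by side. The sketch below is a minimal illustration, assuming the OpenAI Python client is installed and an API key is configured; the model name, question, and framings are placeholders, and any chat-capable model would serve equally well.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "What caused the 2008 financial crisis?"
FRAMINGS = [
    "Answer in one neutral paragraph: ",
    "Answer as a free-market economist: ",
    "Answer as a critic of deregulation: ",
]

# The same question, asked under different framings, yields noticeably
# different emphases -- raw material for classroom comparison.
for framing in FRAMINGS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": framing + QUESTION}],
    )
    print(f"--- {framing}")
    print(response.choices[0].message.content)
    print()
```

Printing the three answers next to one another gives students a concrete artifact for discussing whose perspective each framing foregrounds and what each one leaves out.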
AI, Trust, and Epistemic Responsibility
Trust is a cornerstone of successful online learning communities. Learners must trust the content, the instructor, the platform, and each other. However, when AI begins to generate answers, feedback, translations, and even grading suggestions, this trust can become destabilized. Students may either over-trust AI (seeing it as neutral and authoritative) or distrust the entire learning ecosystem.
Siau and Wang (2020) argue that this tension makes it essential to frame AI as an assistant rather than an authority. Cultivating critical AI literacy helps students understand that:
- AI outputs are probabilistic rather than factual
- AI does not possess consciousness, intention, or moral judgment
- AI reflects dominant patterns, not universal truths
- Human responsibility is never removed from decision-making
By reinforcing human agency, instructors transform AI from a source of dependence into a site of critical inquiry, encouraging students to verify, challenge, compare, and expand upon AI-generated content rather than submitting to it (Siau & Wang, 2020). This practice not only protects academic integrity but also strengthens students’ sense of epistemic responsibility, their accountability as knowledge producers rather than passive consumers.
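The claim that AI outputs are probabilistic rather than factual can be demonstrated in class with a toy sampling exercise. The sketch below (assuming numpy; the tokens and logits are invented purely for illustration) mimics how a language model draws its next word from a probability distribution rather than looking up a fact:

```python
import numpy as np

# Invented next-token candidates for the prompt
# "The capital of Australia is ..." with made-up scores (logits).
tokens = ["Canberra", "Sydney", "Melbourne", "Auckland"]
logits = np.array([2.2, 1.6, 0.4, -1.0])

# Softmax converts the scores into a probability distribution.
probs = np.exp(logits) / np.exp(logits).sum()

# Repeated sampling usually yields "Canberra" but sometimes a
# confident-sounding wrong answer -- the model samples, it does not know.
rng = np.random.default_rng(seed=7)
for _ in range(5):
    print(rng.choice(tokens, p=probs))
```

Seeing a fluent but wrong answer emerge from the same distribution as the right one makes the case for verification far more vividly than any abstract warning.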
Designing for Ethical and Transparent AI Practices
In blended and online courses, critical AI literacy must be supported at the level of instructional design and not left to individual curiosity. According to Siau and Wang (2020), institutions and educators have an ethical obligation to ensure that AI integration is transparent, fair, and human-centered.
This can be encouraged through:
- Open discussion of when and how AI may be used in coursework
- Clear policies distinguishing support from misconduct
- Reflective tasks asking students to analyze AI’s strengths and weaknesses
- Comparative work between human and AI-generated outputs
- Critical reflection on bias, hallucination, and misinformation
Instead of banning AI or fully embracing it without limits, this approach teaches discernment. Students learn to see AI as an object of study itself—one that reveals broader questions about power, surveillance, authorship, and control in digital society (Siau & Wang, 2020).
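One of the comparative tasks suggested above can be scaffolded with very little tooling. As a minimal sketch (the filenames are hypothetical), Python's standard difflib makes the differences between a human-written and an AI-generated summary of the same source visible for discussion:

```python
import difflib
from pathlib import Path

# Hypothetical files: a student's summary and an AI-generated summary
# of the same source text.
human = Path("human_summary.txt").read_text().splitlines()
ai = Path("ai_summary.txt").read_text().splitlines()

# A unified diff exposes additions, omissions, and rewordings,
# giving the class a concrete artifact for critical comparison.
for line in difflib.unified_diff(human, ai, fromfile="human", tofile="ai", lineterm=""):
    print(line)
```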
Social Presence, Trust, and Responsible AI Use
Social presence is a critical mediator between technological use and ethical behavior. When learners feel seen, valued, and emotionally connected within an online community, they are more likely to engage responsibly and transparently (Garrison et al., 2000; Swan & Shih, 2005). Trust reduces the perceived need to cheat and increases accountability to the group.
In communities where dialogue is encouraged and reflection is shared openly, students feel safer discussing their use of AI tools, concerns, and uncertainties. This promotes collective norm-building around responsible use rather than isolated decision-making driven by fear of punishment.
Instructors can support this by:
- Modeling transparent AI use themselves
- Encouraging open discussions about ethical dilemmas
- Normalizing mistakes as learning tools
- Establishing shared values at the group level
This aligns with research suggesting that community trust directly influences participation, honesty, and long-term engagement in collaborative environments (Richardson et al., 2017).
Blending Synchronous and Asynchronous Modalities
A thoughtful combination of synchronous and asynchronous activities can further support critical AI engagement. Synchronous sessions encourage real-time dialogue, group reflection, and ethical debates. Asynchronous forums allow for slower, more structured critical thinking and academic feedback.
For example:
- Students may use AI tools asynchronously, then critically discuss their findings in live sessions
- Real-time debates can address emerging AI ethical challenges
- Peer feedback can focus on how AI was (or wasn’t) effectively used
This blended model discourages passive consumption and promotes intentional, reflective engagement.
Connections to Other Themes
Relation to Lia – Group Learning and Collective Responsibility
While Lia emphasized collaboration in human groups, critical AI literacy expands the idea of collective responsibility into the digital realm. If group work builds empathy, tolerance, and shared meaning (Dron & Anderson, 2014), then AI literacy ensures that technology does not undermine that human connection but rather supports it transparently. In other words, ethical AI use must be embedded within the same principles of respect, dialogue, and collective accountability that define effective group learning. Lia’s application of Salmon’s Five-Stage Model (Salmon, 2000) can be strengthened by using AI as a scaffold at each stage (e.g., AI-supported introductions in Stage 2, AI-curated resources in Stage 3, and AI-facilitated debate prompts in Stage 4), provided that human facilitation and reflection remain central.
Relation to Andreas – Reflection, Identity, and Critical Distance
Andreas’ reflection on learning, self-awareness, and identity connects strongly to AI literacy. To use AI critically, learners must be aware of their own thinking, values, and intentions. When students reflect on why they used AI, what they accepted or rejected, and how it shaped their thinking, AI becomes a mirror revealing cognitive habits rather than a silent, invisible author. This metacognitive awareness strengthens autonomy instead of eroding it. Andreas’ comparison raises a critical point: while AI may generate more up-to-date, tool-rich course designs, it can easily miss deeper pedagogical sequencing, affective dimensions, and epistemological intent; AI is therefore best positioned as an enhancer or co-creator rather than a replacement for instructional expertise.
Relation to Mary – Enhancing Accessibility Through AI-Supported Course Revision
Mary’s reflection introduces a vital dimension: the use of AI to enhance inclusion and accessibility in online and blended learning environments. AI can support instructors in revising learning materials to align with WCAG guidelines, ensuring that content is accessible to learners with visual, auditory, cognitive, or physical impairments. This application reframes AI not as a threat to pedagogy, but as a tool for inclusion and equity. In low-connectivity contexts, AI-generated modular content (PDFs, transcripts, lighter file formats) can make education more sustainable and globally accessible. However, as with all AI uses, these adaptations require human validation to ensure cultural accuracy, precision, and appropriateness (Popescu et al., 2014).
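As a small illustration of the human-in-the-loop validation Mary's approach calls for, an instructor might combine AI-drafted alt text with a simple automated check that flags images still missing text alternatives, one of the basic WCAG requirements. The sketch below assumes the BeautifulSoup library is installed; the sample page fragment is hypothetical:

```python
from bs4 import BeautifulSoup

def flag_missing_alt(html: str) -> list[str]:
    """List the src of every <img> lacking a non-empty alt attribute
    (a basic check against WCAG's text-alternatives requirement)."""
    soup = BeautifulSoup(html, "html.parser")
    return [
        img.get("src", "(no src)")
        for img in soup.find_all("img")
        if not (img.get("alt") or "").strip()
    ]

# Hypothetical exported lesson fragment:
page = '<img src="graph.png"><img src="logo.png" alt="Course logo">'
print(flag_missing_alt(page))  # ['graph.png']
```

The flagged items then go to a human reviewer, keeping cultural accuracy and appropriateness under instructor control, exactly as the paragraph above stresses.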
Relation to Social Presence and Third Spaces
Online learning spaces operate as “third spaces,” i.e., hybrid environments where personal, academic, and technological identities intersect. In this space, AI becomes a powerful participant. Without critical awareness, it can distort authenticity and voice. With critical literacy, however, students reclaim ownership of meaning, ensuring that their online presence remains human, grounded, and accountable rather than algorithmically replaced.
Conclusion
Artificial intelligence is reshaping education, but its impact will depend on how consciously and critically it is integrated. As Siau and Wang (2020) stress, AI must not replace human thinking but rather provoke it. In online and blended learning contexts, cultivating critical AI literacy is essential to preserving trust, integrity, and human agency. By teaching students to interrogate AI, question its authority, and reflect on its influence, educators empower them not only as learners but as ethical participants in a technologically mediated world. Ultimately, the question is not whether AI should be used in education, but how it is used and, more importantly, who remains in control. Through critical AI literacy, the answer can (and should) remain: the human learner.
References
Dron, J., & Anderson, T. (2014). Teaching crowds: Learning and social media. Athabasca University Press.
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2–3), 87–105.
Popescu, A., Fistis, G., & Borca, C. (2014). Behaviour attributes that nurture the sense of e-learning community perception. Procedia Technology, 16, 745–754.
Richardson, J. C., Maeda, Y., Lv, J., & Caskurlu, S. (2017). Social presence in relation to students’ satisfaction and learning in the online environment: A meta-analysis. Computers in Human Behavior, 71, 402–417.
Salmon, G. (2000). E-moderating: The key to teaching and learning online. Kogan Page.
Siau, K., & Wang, W. (2020). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 33(2), 47–53.
Swan, K., & Shih, L.-F. (2005). On the nature and development of social presence in online course discussions. Journal of Asynchronous Learning Networks, 9(3), 115–136.