For Week Four’s topic on design for online and blended learning, I was tasked with leading the discussion on generative artificial intelligence (AI) in the context of education, in which we covered the applications and implications of generative AI for teaching and learning. I was quite familiar with this topic, as I had previously conducted a preliminary study on how students perceived ChatGPT in terms of their opinions, use, and its impact, so I decided to look into how we could rethink learning in the classroom in the context of AI.

Traditionally, one would learn a technological tool by learning the steps to operate the software, experimenting with its functions using dummy data (if applicable), and then using it to solve problems. It is often through this “rinse and repeat” experience that we gain confidence and mastery of the tool. However, with the advent of AI and its rapid pace of development, existing tools are constantly updated and new tools constantly appear. This means that by the time one has learnt how to use a given tool, a new tool may have replaced it, or it may have evolved so much that the knowledge already acquired is no longer sufficient to use it effectively.

Because of this, there is a need to reconceptualise learning. Dai et al. (2023) call for a shift from “technology-based learning to learning with technology” (p. 87), in which we should train students to learn how to learn with AI, as opposed to merely learning how to use AI tools. Students who learn how to learn with AI would then be future-ready when they need to use new or updated AI tools in the workplace.

When it comes to the actual use of AI tools, students need to exercise their own judgement when dealing with both the input and its output. The output quality of an AI tool depends heavily on the input, which takes the form of prompts. Essentially, prompting is informed by enquiry: asking specific, focussed questions in the right way to elicit the desired output. When it comes to creating prompts, I found Northern’s (2019) model for enquiry-based learning a good starting point for students to engage, explore, explain, elaborate, and evaluate with the material. On top of that, there are countless resources online on how to prompt well, supported by samples from online and published sources.
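As a small illustration, the five stages of enquiry-based learning could be turned into a prompt scaffold. This is only a sketch; the stage wording below is my own paraphrase, not language from Northern (2019):

```python
# Illustrative sketch: scaffolding prompts around the 5 E's of
# enquiry-based learning. The guiding questions are example wording.
FIVE_E_PROMPTS = {
    "engage": "In one sentence, why does {topic} matter to a beginner?",
    "explore": "List three open questions a student might ask about {topic}.",
    "explain": "Explain {topic} in plain language, defining any key terms.",
    "elaborate": "Give a worked example applying {topic} to a real scenario.",
    "evaluate": "What are the limits or common misconceptions of {topic}?",
}

def build_prompts(topic: str) -> list[str]:
    """Return one focussed prompt per enquiry stage for the given topic."""
    return [template.format(topic=topic) for template in FIVE_E_PROMPTS.values()]
```

A student could then feed each of the five prompts to an AI tool in turn, moving from engaging with a topic through to evaluating it.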

Even with well-crafted prompts, the output received may not be accurate, so training students to employ fact-checking strategies may help them achieve better information literacy. One way to do this is to apply Blakeslee’s (2004) model for evaluating digital sources, whereby students fact-check the output of AI tools in terms of its currency, relevance, authority, accuracy, and purpose.
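To make this concrete, the five CRAAP criteria could be encoded as a simple checklist that only passes when every criterion has been verified. The criteria follow Blakeslee (2004), but the guiding questions are my own paraphrase:

```python
# Illustrative sketch: a CRAAP-style checklist for fact-checking AI output.
# Criteria from Blakeslee (2004); the questions are example paraphrases.
CRAAP_QUESTIONS = {
    "currency": "Is the information up to date, or could it have changed?",
    "relevance": "Does it actually answer the question that was asked?",
    "authority": "Can the claim be traced to a credible source?",
    "accuracy": "Is it supported by evidence you can verify elsewhere?",
    "purpose": "Is it informative, or slanted to persuade or sell?",
}

def craap_check(answers: dict[str, bool]) -> bool:
    """Pass only if every CRAAP criterion has been checked and satisfied."""
    return all(answers.get(criterion, False) for criterion in CRAAP_QUESTIONS)
```

The strictness here is deliberate: an unanswered criterion counts as a failure, mirroring the idea that students should not accept AI output they have not actually checked.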

To recap, there is a need for us to rethink learning with AI, and to prepare students to be future-ready as they learn how to learn with AI. Students need to exercise judgement in prompting and in evaluating the output, in order to capitalise on these tools and raise the quality of their work.


  1. Blakeslee, S. (2004). The CRAAP test. LOEX Quarterly, 31(3), 6-7.
  2. Dai, Y., Liu, A., & Lim, C. P. (2023). Reconceptualizing ChatGPT and generative AI as a student-driven innovation in higher education. Procedia CIRP, 119, 84-90.
  3. Northern, S. (2019). The 5 E’s of inquiry-based learning. Knowledge Quest.


With generative AI, it’s time to rethink learning