Reflection 5: Final reflections

1. What are the most important things you have learned through your engagement in the ONL course? Why?

I have learned that I probably needed a kick in the butt. I have inherited my online courses from retired teachers and just copied and pasted them every new semester. I won't do that next year. When September 2026 comes, I will add some new forms of interaction to the courses.

2. How will your learning influence your practice?

I will make some changes in the online courses where I teach. I will include more tools for asynchronous collaboration in shared spaces like Miro and make it possible for participants to interact in a "richer" way. I will also try to hold some synchronous online meetings.

I will also try to accommodate participants labelled as "lurkers". I will try to arrange possibilities for them to come and go and pick up the knowledge they want, with respect for how they want to learn.

3. What are your thoughts about using technology to enhance learning/teaching in your own context?

I am very positive about using technology to enhance learning, although the technology must justify its use by actually contributing something. For me, often working in learning contexts where disabled people are participants, that something could be added accessibility or usability. Technology used in the right way can act as an egalitarian force: more people can participate, and more things can be done in a variety of ways. I also think that AI, used in the right way, can change how we learn and what is important to learn.

4. What are you going to do as a result of your involvement in ONL? Why?

To me, this question is essentially the same as question 2, so please take a look at my answer there.

5. What suggestions do you have (activities and/or in general) for the development of eLearning in your own teaching or context?

I have come to think a lot about why we organise learning that needs to end at a specific date. I know that the whole system of credits and so on depends on learning activities starting and stopping at specific dates. In formal settings that might be necessary, but in many other settings learning can go on without fixed dates by which you are supposed to be done with all tasks.

I also think that there should be more ongoing experimentation with new eLearning approaches. And perhaps we should also get rid of some old, rather bad online tools and use newer and better ones.

Reflection 4: Design for online and blended learning

Online learning creates a number of opportunities for disabled people to get an education. But when we design for online learning, we often tend to reproduce disadvantages that exist in the physical world, or we create new barriers. This is often out of ignorance. There are experiences from the COVID-19 pandemic that we should learn from, both good and bad. We did some work on that in a learning community of people with cognitive impairments. It resulted in recommendations for online collaboration:

Johansson, S., Jonsson, M., Gulliksen, J., & Gustavsson, C. (2025). User participation in co-design – Requirements for accessible online collaboration: An exploratory study. Behaviour & Information Technology, 3001, 1–16.

Being able to study from home and adjust your learning activities to the parts of the day when you perform at your best is a huge opening for many people. The asynchronicity is an important aspect. At the same time, dropout rates are higher in online learning. One really positive thing about the ONL course is the "blendedness" within the online setting. Meeting twice a week really created a bond, while we could organise our own learning activities in between meetings. So blended learning doesn't have to be a mix of online and physical activities. It can just as well be a blend of synchronous and asynchronous online work.

Reflection 3: Learning in communities – networked collaborative learning

When does something shift from just learning to collaborative learning? From the teacher's perspective we often try very hard to create collaboration, as we see higher value when people connect and start to do things together. But from a learner's perspective it is often not evident why collaboration is worth the extra work. If your goal is to pass a course to get your degree, the incentive for collaboration can be weak. It is an initial investment, and it might take some experience of trying to collaborate before it feels like something that makes sense to do. I think that is why it is easier to get this process going if a group has a higher goal than just learning something. If, for example, a group of people is formed around a mutual interest in societal change or in creating an innovation, connectedness and collaboration often arrive silently. They are simply there. And if the group finds that new knowledge is needed to achieve a goal, collective learning often takes place, or is at least easier to organise than in a pure learning setting.

If the above is true, then the recruitment of participants might need to be reconsidered. Today we have a number of people applying to take part in a course. We accept them and then try to "bolt on" collaboration without really knowing the people in the group. Then we find it hard to facilitate or scaffold a setting that fosters collaboration. Maybe we should prepare participants for collaboration before they apply. I don't really know, though, how that could be done.

Reflection 2: On openness

I think the shift towards open access to research articles paved the way for being more open in other parts of academia and in knowledge production as well. You can clearly see that open access articles are downloaded more, and also cited more, than when papers were locked behind paywalls. Perhaps learning materials can go the same way. As a principle, I think that would be fair. At least if you operate on public funding, it seems fair to give something back to the community, and in some communities open learning resources might be the only way to reach large parts of the population, or to reach rare and hidden parts of any population.

But I can see some problems. While my research papers are published after being peer reviewed by others, there is at the moment no similar quality check for publishing open learning material. I can just create something, license it and put it out there. It will be up to the learner to evaluate the quality. Or perhaps some kind of social evaluation can take place, like when someone posts a review of a restaurant or a hotel.

Another potential problem that I can see is that of maintenance. Perhaps I have had the funding to develop the learning material that I publish as a free resource, but some of it will probably grow old and become increasingly outdated. I probably should feel a moral responsibility to keep my material updated, but will I have the time (money) to do that? Should I let the material be totally free in the hope that someone else comes along and updates it? I am not that bothered about putting things that I do out there for free. I have a small free "course" on how disability organisations can use statistics to strengthen their advocacy work for better living conditions for people with disabilities. It is quite popular, but I don't know who uses it. I can only see in the visitor data that it is used. And it hasn't become outdated yet.

Some notes on literacy

We often discuss online participation in terms of digital literacy. By focusing on "literacy", online participation becomes an individual problem. If you are unable to participate in an online context, it is seen as if you have some kind of problem with your literacy. You may not be sufficiently native and fluent. You may not understand how to use the tools or understand the content that is available to you, struggling even to be in "visitor mode". By the way, I think we constantly shift between visitor and resident mode, so I don't like those terms being placed at opposite ends of a continuum. That relation is messier than something that lets itself be put on a line. I think.

If we focus on digital literacy, we also have to go back to the original meaning of "literacy" as the ability to read and write text. First, you need literacy to be able to have digital literacy (or "problem-solving skills in technology-rich environments", as the OECD puts it).

Poorly designed tools require high literacy

But the strong focus on literacy risks missing the question of how we design the tools that you need to master in order to be digitally literate. If these tools are poorly designed, if they are illogical, complicated and perhaps even exclude some people from using them at all (as is often the case for blind people or people with intellectual disabilities, to mention just a few), then they place much higher demands on literacy than if they were well designed, simple to use and accessible. Focusing on the flaws in the design of our tools also shifts the responsibility from the individual (if you can't do this, you need to increase your literacy) to the system.

By focusing on design, we shift the responsibility for problems related to digital participation from the individual to the surrounding society. Why do we provide tools that are poorly designed and exclude some people? If we designed better tools, maybe people with lower literacy levels could also participate? How many made a fast exit from the ONL course due to issues understanding the course web page? About half of the enrolled students in one of our online courses at Lund University don't make it through the first three weeks of the course. There are several reasons for that (the most common is that they got a place on another course). But how many left because of Moodle? We don't know. (I sort of want to kill myself every time I have to go into the admin and editing mode…)

With high demands on literacy there is a risk of excluding some learners from learning. With easier tools we could lower the "literacy bar". It could be interesting to figure out how large a proportion of a population can be regarded as having a high level of literacy and can presumably take on online learning with ease even when the tools are poorly designed.

The state of literacy in the OECD

The OECD study PIAAC – Problem Solving in Technology-Rich Environments – measures both problem-solving and basic computer literacy skills. It is conducted in a digital environment where various performance tasks measure participants' use of ICT applications. PIAAC targets the whole adult population; another survey, PISA, targets younger students.

The results from Sweden show that although the country is presented as being at the top, almost 70 percent of the population scores relatively low on adaptive problem solving (which is where digital literacy comes in).

A diagram showing the literacy situation in Sweden

Figure 1: OECD (2024), Do Adults Have the Skills They Need to Thrive in a Changing World?: Survey of Adult Skills 2023, OECD Skills Studies, OECD Publishing, Paris, https://doi.org/10.1787/b263dc5d-en.

To compare with some other countries and the OECD average, we can assume that the share of the population scoring at Level 2 or lower is higher in many countries, but probably a bit lower in Finland and Japan.

A diagram of OECD literacy levels in several countries

Figure 2: OECD (2024), Do Adults Have the Skills They Need to Thrive in a Changing World?: Survey of Adult Skills 2023, OECD Skills Studies, OECD Publishing, Paris, https://doi.org/10.1787/b263dc5d-en.

The adaptive problem-solving scale has five levels: Below Level 1 and Levels 1-4 (scoring 0-500 points).

Adaptive problem solving is defined as the ability to achieve one’s goals in a dynamic situation in which a method for solution is not immediately available, requiring engaging in cognitive and metacognitive processes to define the problem, search for information, and apply a solution in a variety of information environments and contexts. (Source: https://www.oecd.org/en/topics/digital-skills.html)

The assessment involves three overarching cognitive processes:

Definition: This involves selecting, organizing and integrating problem information into a mental model; retrieving relevant background information; and the ability to externalize the problem’s main features, with metacognitive processes including goal setting and monitoring problem comprehension.

Searching: This involves searching for operators in the environment (locating information about available actions that might solve the problem) and evaluating how well operators satisfy the problem constraints. (Source: https://www.oecd.org/en/topics/digital-skills.html)

Application: This is when the problem solver applies plans to solve a problem and executes the specified operators, with metacognitive processes involving monitoring progress, taking action if the problem changes or progress has stalled, and reflection.

I would argue that the platforms and tools we offer our learners often require a proficiency level at the upper end of Level 3, or at Level 4, on both literacy and problem solving. By setting the bar this high, we actually "design away" many potential learners who would have had the intellectual capacity to learn but cannot cope with our tools.

I am just trying to post

This post is just to test that everything is on a roll. So if you see this, I am.
