The key to understanding the world is the ability to learn; this lies at the core of human as well as non-human systems (such as artificial intelligence). Automatic and intelligent systems are now ubiquitous in everyday activities and can be seen as a form of algorithmic outsourcing, but also as new forms of collaborative learning that enable change. In the book Teaching Crowds, Dron and Anderson (2014) argue that crowds can form collective intelligence, so we learn not only from individuals but from collective behaviors. Massive numbers of people, software agents, and connecting technologies thus produce networks of learning. Further, humans must produce the huge amounts of big data that machine learning and artificial intelligence need to operate on (and learn from). The enormous efforts in AI development are underpinned by continuous access to new data, data often produced by human interaction. A prerequisite for digital systems to learn from us is thus that we teach them. Following this line of thinking, learning is not only a human activity but rather a set of human and non-human interactions performing material-discursive meanings and agencies.

However, machines (as well as humans) are biased. Human bias can spread to machines, for example through design choices. Machines learn by finding patterns in data, and that data can itself be biased, for example through interaction bias, latent bias, or selection bias. Both machine and human learning are thus based on, and produce, biased knowledge.
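To make selection bias concrete, here is a minimal, purely illustrative Python sketch (the data, the two groups, and the threshold rule are invented for this example, not drawn from any real system): a toy "model" learns a single pass/fail threshold from a sample that heavily over-represents one group, and the learned threshold then systematically disadvantages the under-represented group.

    import random

    random.seed(0)

    # Synthetic scores for two groups; group A dominates the sample.
    group_a = [random.gauss(0.7, 0.1) for _ in range(950)]
    group_b = [random.gauss(0.4, 0.1) for _ in range(50)]

    # "Training": the learned parameter is simply the sample mean,
    # used afterwards as a pass/fail threshold.
    sample = group_a + group_b
    threshold = sum(sample) / len(sample)

    # Because group A dominates the sample, the threshold reflects
    # group A's distribution, and almost all of group B falls below it.
    fail_rate_b = sum(score < threshold for score in group_b) / len(group_b)
    print(f"learned threshold: {threshold:.2f}")
    print(f"share of group B below threshold: {fail_rate_b:.0%}")

The "learning" here is deliberately trivial (a single averaged threshold), but the mechanism generalizes: any model that fits its parameters to a skewed sample will encode that skew in what it later treats as credible.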

Miranda Fricker (2007) coined the term ‘epistemic injustice’ to conceptualize unfairness in the processes of giving and receiving knowledge. Fricker divides the notion into two practices: testimonial injustice, in which a speaker’s testimony is given deflated credibility because of stereotypes about that speaker’s group, and hermeneutical injustice, in which members of stigmatized groups are denied access to the resources and concepts needed to understand or verbalize their experiences.

Therefore, I argue that the concept of epistemic injustice could, and should, be used to make bias visible, and to reduce it, in machine as well as human learning. In this framing, testimonial injustice highlights what it means to be misread or ignored as credible data (for example through bias by design, latent bias, or selection bias), while hermeneutical injustice highlights the need for resources to acquire the skills, competences, and concepts regarded as necessary in today’s digital world. It also points to the necessity of more diversity in our interaction with technology; such diversity would produce greater freedom to innovate, explore, renew, and refresh technology.

However, for this to be possible there is an urgent need to use the ‘proactionary principle’ (More, 2013) as a foundation for technological implementation in education. The proactionary principle was formulated as a counterpoint to the precautionary principle (World Commission on the Ethics of Scientific Knowledge and Technology, 2005). The precautionary principle stresses, in summary, that unpredictable and irreversible actions should generally be avoided. The proactionary principle, on the other hand, argues that most innovation throughout history has been unpredictable and irreversible, yet has generated great benefits. This principle therefore argues in favor of people’s freedom to innovate and experiment.

To conclude, we have to account for two things: (1) networked learning consists of humans, non-human actors, and more-than-human actors (for example crowds), and (2) people as well as machines can be biased. Therefore, I argue that, in order to secure fair human-machine knowledge production and the development of important skillsets, education should be at the forefront of using, developing, and experimenting with digital media technology, and in doing so should strive for epistemic justice.

Sources:

Dron, J., & Anderson, T. (2014). Teaching crowds: Learning and social media. Edmonton, AB: Athabasca University Press.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

More, M. (2013). The proactionary principle: Optimizing technological outcomes. In M. More & N. Vita-More (Eds.), The Transhumanist Reader. Hoboken, NJ: Wiley.

World Commission on the Ethics of Scientific Knowledge and Technology. (2005). The precautionary principle. Paris: UNESCO.

Topic 3: Network of what? Human and non-human collaborative learning communities