Digital learning matters, but how to measure it?

A recent paper I wrote, together with colleagues from the blended learning team in the Faculty of Biological Sciences at the University of Leeds, reports the first large-scale audit of digital learning provision in a higher education institution. We outline our methods and propose several extensions to suggest how others might use our approach to audit teaching practices in other settings. We conducted our audit across the Faculty and found significant disparity in the provision of digital learning, which the team will subsequently use to target areas for improvement.

A mammoth task

Children are now growing up surrounded by technology more than ever before. By the time they enter higher education (i.e. start university) they have usually been exposed to various forms of digital media and technology for well over a decade. As such, it’s important to include some aspect of digital learning in higher education curricula; psychologists and education researchers know that doing so improves learning in many cases. But we still don’t have a method to check when and how universities are deploying digital learning as part of their courses. That’s where we come in.

We conducted what we believe to be the first audit of digital learning resources across an entire University Faculty. This involved manually sorting through the digital space of all 183 modules taught as part of the undergraduate degrees within the Faculty. I spent upwards of 400 hours checking for digital learning resources and analysing the results. As well as searching for digital learning resources, we categorised them based on their perceived interactivity, since interactivity is important for promoting learning. We then wrote a small equation to determine the digital learning ‘score’ of a given module or course.

Digital learning score (patent pending)

Our simple formula for determining how interactive a given course is has just two parts. First, we counted the number of digital resources within each category. We then assigned each category an ‘interactivity score’ based on how much students had to interact with each type of resource, from a score of 1 for web links to static web pages (e.g. a Wikipedia page or a paper) to 5 for interactive online quizzes. The overall digital learning score was then simply the sum, over all categories, of the number of resources multiplied by their respective interactivity scores.
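To make the formula concrete, here is a minimal sketch in Python. The category names and the weights for the middle categories are illustrative assumptions, not the paper's exact taxonomy; only the 1 (static web links) and 5 (interactive quizzes) endpoints come from the description above.

```python
# Interactivity score per resource category (1 = least interactive, 5 = most).
# "video" and its weight of 3 are hypothetical examples; the endpoint
# categories and weights follow the description in the text.
INTERACTIVITY = {
    "static_link": 1,   # e.g. a link to a Wikipedia page or a paper
    "video": 3,         # assumed middle category for illustration
    "online_quiz": 5,   # interactive online quizzes
}

def digital_learning_score(resource_counts):
    """Sum over categories of (resource count) x (interactivity score)."""
    return sum(
        count * INTERACTIVITY[category]
        for category, count in resource_counts.items()
    )

# A module with 10 static links, 4 videos and 2 quizzes:
score = digital_learning_score({"static_link": 10, "video": 4, "online_quiz": 2})
print(score)  # 10*1 + 4*3 + 2*5 = 32
```

Because the score is a weighted sum, a module can reach the same score through many low-interactivity resources or a few highly interactive ones, which is why we also compared raw volume and interactivity separately in the paper.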

We tested this formula by comparing the volume, interactivity and digital learning score across modules, degree programmes, years and schools within the Faculty. We found that our formula did what we intended; that is, we were able to identify areas where digital resources were lacking and those where resource provision was especially high. This will allow the blended learning team to develop strategies in coming years to improve the parity of resource provision across the Faculty, which was a key aim of our flagship audit.

[Image: interactive displays] Kids are growing up in a digitally connected world, so our higher education system needs to keep up.

Possible extensions and future directions

We also suggested various extensions to our audit, including how to extend it to measure resource use or within-lecture resources, or to broaden it into a wider audit of blended learning. Because different learning techniques suit different students, the most effective approach is to provide a mixture of teaching styles, including digital learning, lectures, flipped classrooms and practicals, where possible. Although we focus on digital learning here, in some cases it may not be practical to include it, such as on remote field courses, which are already highly interactive by their very nature.

The main goal of our paper was to take the audit we conducted and lay out generalizable methods that can be applied to a range of situations at various scales. Ultimately, we hope that we’ve done the groundwork and thought through the complications of putting such an audit together, so that others don’t have to. We believe that if others can build on the framework we developed, these audits can be useful to higher education professionals as a tool for monitoring the deployment of digital learning technology. This would in turn aid the development of an engaging and interactive student experience.

To find out more, read our Online Learning article ‘A generalizable framework for multi-scale auditing of digital learning provision in higher education’. 

If you’re interested in hearing more about implementing these audits, please get in touch with Veronica Volz or another of my co-authors at the University of Leeds.

Paper reference:
Ross SRP-J, Volz V, Lancaster MK, Divan A. (2018). A generalizable framework for multi-scale auditing of digital learning provision in higher education. Online Learning 22(2) 246-270. DOI: 10.24059/olj.v22i2.1229
