This afternoon I’m here at the University of Edinburgh Business School in George Square to see Professor John Lee, of ESALA (Edinburgh College of Art) and the School of Informatics, give his inaugural lecture, Learning Vicariously with Rich Media.
This is a liveblog and so all the usual caveats apply about typos, errors, etc. Please do comment below if you have any corrections you’d like to add.
We are being introduced to the lecture, which celebrates the appointment of Professor John Lee to a Personal Chair of Digital Media. John works across both the School of Architecture and the School of Informatics and blends those areas of expertise effortlessly. His interests include design and cognition in design and learning, and he has devised a method for filming events and seminars so that students can view them again to reflect on and engage with the material – something which seems to have applications across the university. And now over to Professor John Lee.
So this might be a confusing title on various levels, so I will be coming back to it. I’ll start by talking about rich media, what they are and why we don’t make better use of them. Then I will be talking about vicarious learning, a topic that I and a number of other people have been working on for quite a while. And I’ll be talking about how this interacts and connects with, and can exploit, rich media and technology for learning. I’ll talk about the system mentioned in the introduction, YouTute. And then I will talk about some further possible applications and future directions.
Rich Media Resources
What do I mean by that? Mainly audio and video, really. Obviously there are many other types of media one might work with, but I’m really interested in media which may be available, especially on the internet – media we can engage with. Audio and video are key, but these media can be augmented by various other types of information: they may be augmented by links, say to each other, or to other forms of information or data. So annotation is a form of linking of media to other data. Tagging and cross-referencing can be included. Commentary and discussion can be included. And there is a wider variety of internet resources that they can be connected to.
So when I talk about rich media I mean video, perhaps audio, but with these sorts of associated materials. And really rich media are not very common. The potential exists for exploiting these far more than they are being used at present, particularly in education. I think it would be useful to develop the potential for using these in a far more substantial way. So I will not just be talking about my work in the past, but also about how we may use this sort of research to take these media forward in more fruitful ways.
Examples of rich media include YouTube, which includes lots of less well known functionality such as annotation. If we look at the video “hug the world”, for instance, we can see text and hotspots throughout the video, and those hotspots link to completely different videos. That is potentially quite a useful type of thing that could be exploited, for instance in education. But these features don’t seem to be widely used – you don’t see that many videos on YouTube that use this functionality, and even those using video in teaching tend not to use it.
The other thing we see now is the capture of university lectures on video – like this one. Most lectures in Informatics are captured, as are many others across the University, and tools like iTunesU collate collections of lecture videos. But it seems to me that these are not easy to use collaboratively or creatively. Looking at this video – it happens to be me lecturing – we can see indexes into the video, provided automatically by detecting when the image changes. You can jump to various points in the lecture, or to a point where something is discussed in the notes. There are various different things one could link to. So there is a useful layer on top of the lecture here – quite a useful layer of rich media. We can watch the video and move around it, but we can’t do much else with it – we can’t take pieces of that video and use them somewhere else, for instance. The technical barriers seem surmountable, but something is stopping people making more creative use of this sort of video.
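[A quick illustrative aside: the automatic indexing John describes – finding points where the lecture image changes sharply – can be sketched very simply. This is my own hedged sketch, not the actual system; frames are represented here as flat lists of grey levels, and the sampling rate and threshold are arbitrary assumptions.]

```python
# Sketch of slide-change indexing: compare successive frames and record an
# index point (a timestamp) whenever the average per-pixel difference jumps.
# Frame representation, fps and threshold are illustrative assumptions.

def mean_abs_diff(a, b):
    """Average per-pixel difference between two equal-length frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def index_points(frames, fps=1, threshold=30):
    """Return timestamps (in seconds) where the image changes sharply,
    e.g. when the lecturer moves to a new slide."""
    points = []
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            points.append(i / fps)
    return points
```

A real system would of course work on sampled video frames (and, as John suggests later, could also use the speech track), but the idea is the same: each detected change becomes a clickable index into the recording.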
So I think we need new models of how to do this. Reusing materials in elearning is hardly a new topic. The old models are based on teacher-led creation of “learning objects”. There are standardised systems for development and re-use, but they have never really become ubiquitous. You can find all sorts of collections of these, but people don’t really use them that much. I think that’s partly because they are not always that easy to use, but also because they tend to be built by teachers, around the particular ways that teachers think – and teachers quite often differ on these things. So you find that university teachers have a strong tendency not to reuse other university teachers’ materials.
But I think we need to move away from that model and towards a kind of learner-led recruitment of materials – from the web in particular. Learners can search for and discover materials, rework them, and share and collaborate around them. And this is where we can see improvements. Technology can assist with the recruitment process here: Google can help, the semantic web can help. But within rich media there are also opportunities opening up – such as the automated indexing of a lecture video. That’s quite a simple example, though, and there is more we can do: perhaps we can look at the speech in the video, at what’s happening in the video, at what’s on screen, to automatically interpret rich media.
So we can move away from passive consumption and use rich media to support activity, reflection and construction in inquiry-led processes, and to support the development of distributed collaborative learning through social networking. In particular, what learners need to be able to do is to get engaged with these materials in some way. And we need to enable this engagement for distributed learners – they may be distributed geographically, as distance learners, or they may be distributed in a more social sense. But these learners may be those that it can be difficult to get engaged in the learning process. And one of the things we’ve looked at in the past is engaging learners through vicarious learning…
So what is vicarious learning? It’s learning from exposure to the learning experiences of others – e.g. students learning by watching other students’ problem solving in class, as in master classes, design crits, etc. This is where students discuss a problem and there is an audience – that might be a small or a large audience. Even without active participation, vicarious learning arises from active listening and watching. It differs from observational learning in that it’s not solely about observing expert performance. So the audience doesn’t just watch passively – they may not be part of the dialogue, but they are an active part of the process. It’s somehow based on access to the learning process.
And vicarious learning is something that can perhaps be supported by technology – for instance a learner can watch a recording of another person engaging in a learning process.
We had a research project that aimed to attack this problem, some time ago now. This was work with several collaborators, funded by ESRC/EPSRC, with Terry Mayes, Jean McKendree, Richard Cox, Keith Stenning and Finbar Dineen. The project focused on capturing dialogue around specific problems – in such a dialogue the learning process is exposed. These dialogues would be rated for the quality of their learning content and matched to the specific needs of learners; this implies use in a controlled, perhaps “intelligent” environment, which was difficult. They were often based on “task-directed discussions” (TDDs) – which don’t always arise in tutorials.
The outcomes of this work highlighted things like the benefit of exposure to student–student discussion. We were able to compare these with tutor–student instruction: a tutor expounding material to students was much less valuable to them than a discussion in which the students’ perspective on the material was highlighted. If students discussed the material, one found out how they were processing and understanding it. And students found watching “strugglers” most useful of all, though an uncomfortable thing to view.
The vicarious learning was effective in promoting reflection and discussion, and there seemed to be both cognitive and social benefits – with impact especially from the modelling of dialogue skills and empathic identification with other students. There was less clear-cut evidence for “domain” learning, but there is potential importance for distance learning (and other contexts), and it was an aid to “enculturation” into a disciplinary framework – exposure to the language and culture of a discipline is harder to achieve in distance contexts on the whole. Cf. Laurillard, Schön etc.
Another collaborative project was VL-PATSy, funded by the ESRC TLRP, working with Richard Cox, Susan Rabold, Rosemary Varley, Julie Morris and Kirsty Hoben.
There was already a system called PATSy, used with trainee speech therapists for diagnostic practice and training. This might include video, medical history, test results etc., and these could be augmented with other resources. We augmented the materials with vicarious learning materials – TDD-based dialogues, and interventions based on a sophisticated student model. It was fairly effective, but it was complicated and expensive to construct – it was a major three-year project to set up, and to repurpose it would also have been complicated and expensive. So we thought it might be more fruitful to try another approach.
So this other approach was around the idea of recycling stuff: collating video recordings of tutorial groups, for instance, then making these videos available to other learners – either automatically or manually, by topic. Students could then customise things by selecting and annotating those videos. This is the idea which has become, with funding through the Principal’s eLearning Fund, “YouTute”. For this project we had Susan Rabold, Neil Mayo, Jon Oberlander and Stuart Anderson involved.
So the idea was to recycle tutorial activity, captured on video, with students themselves responsible for highlighting the useful aspects of it. We thought the effects might be enhanced if learners could edit the videos to pick out specific points, annotate the videos to highlight issues, and rework the content for deeper learning. Looking at a capture of the main prototype system – this includes various video clips, notes from the session, topic and tags etc. This is an old version from perhaps four years ago, but you can see we can view video of the whiteboard, we can see the tutorial room, and we can see a camera looking at the smartboard. The students can actually use this to relive the tutorials in some sense – either tutorials they were involved in, or ones they weren’t involved in but on a topic they are looking at.
One of our undergraduate students, Marcin Bot, suggested a redesign of the system – a more student-friendly interface, with bigger buttons, access to segments of video etc. But he didn’t feel this material was particularly easy to rework. This interface does help you access shared “tutes” (or create new ones) – these are the nuggets of tutorials on particular topics.
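[Another illustrative aside: a “tute” as described – a selected segment of a tutorial recording plus student-added tags and annotations – might be modelled something like the sketch below. All of these field names are my own assumptions for illustration, not YouTute’s actual schema.]

```python
# Hypothetical model of a "tute": a clip out of a tutorial recording,
# carrying the topic, tags and timed annotations students attach to it.
# Field names are illustrative assumptions, not the real system's schema.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    time: float          # seconds into the clip
    author: str
    text: str

@dataclass
class Tute:
    video_id: str        # which tutorial recording the clip comes from
    start: float         # clip boundaries, in seconds
    end: float
    topic: str
    tags: list = field(default_factory=list)
    annotations: list = field(default_factory=list)

    def duration(self):
        return self.end - self.start

    def annotate(self, time, author, text):
        """Attach a timed note from a student to this clip."""
        self.annotations.append(Annotation(time, author, text))
```

The point of a structure like this is that the clip stays a lightweight reference into the original recording, so the same tutorial video can be sliced, tagged and shared many times over without copying it.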
So this was quite a useful resource. We worked with it for several years whilst funded to do so, and some useful observations came out of that process. In particular, it was especially good for students who had failed the exam – they found it very useful for revising when away from Edinburgh. So I think this system could be taken forward and developed, and improved with individualised tute collections, a more developed social networking model, more interactive virtual editing of tutes, integration with recorded lectures, a more streamlined collection and back-end process, etc.
So, I just wanted to finish by going back to the idea of really rich media. It’s not really new – there is plenty of work on mashup and remix culture. There are things like iMovie and the Open Movie Editor. There is Diver, developed at Stanford, which is quite a nice idea: it allows you to focus in on parts of a video, on some elements of it, and share that. There was jumpcut.com, which used to allow you to edit video online; it was purchased by Yahoo! and subsequently disappeared. But recently a thing called Adobe Premiere Express lets you do something similar – an online video editor. That’s the sort of thing we should pay more attention to, though it is proprietary and probably expensive. [Note from Nicola here: YouTube does also include the facility to edit video online now.]
So in future I’d like to see these sorts of ideas integrated with learning systems – I have an EU proposal in train at the moment. I’d like to see improved automated annotation of video – there is recent and current work in Informatics on segmentation etc., using speech and/or image processing. And we could see this applied elsewhere, such as in schools – for instance work with Tom Kane of a local company called Prescience Communications. Recently I’ve been working with Tom’s project and playing with the idea of a sequence of clips which you can move through, capturing the feel of jumpcut.com. This ability to personalise media in different ways would be useful for school students in particular.
Q1 – Richard Coyne, ESALA) It seems to me there is an issue of professional presentation, or the lack of it. These days you have kids recording themselves in their bedrooms with great charisma and great thought given to the setting. Does a lack of professionalism in these environments matter?
A1) I don’t think it does. This work here is bringing external organisations into schools – in this case it was a discussion of the classification of films. It seems to me the important thing there is the quality of the discussion; the visual presentation isn’t necessarily a concern. We had a top-quality speaker here – these are the things that matter really. Exactly how the film appears isn’t necessarily the first consideration. If part of the discussion itself is…
Q1) You and I might think that, but what about the people this is directed at – the audiences?
A1) Well, we’ll find out. It seems to me that students and audiences are hugely accepting of varying quality of recordings. With the first content on iTunesU you’d have very mixed-quality filming, discussion of tutorial arrangements, etc.
Q2 – Simon Higgs, BPA) Play – we all know how important play is in early learning, but presumably it is also very important in vicarious learning, and I was wondering how this relates to rich media learning, and to virtual worlds and games as a rich media space for vicarious learning.
A2) Certainly many people have looked at games and learning, Serious Games etc. In the rich media context I’m not familiar with anything specific on this, but play certainly seems to be important in this area: mashing up materials is about learning and about play at the same time, and there would be an inevitable element of play that would be motivating in that activity. But how one seeks to exploit that more actively I’m not sure.
Q2) I think Richard’s comment about design is important there.
A2) Yes, you might be right. And one of the ways Marcin altered our design was perhaps to make it more playful.
Q3) I wanted to ask about incentives. Often things are most successful around examinations – we get huge downloads of materials near exams, YouTube is heavily used then, etc. What do you think you would need to incentivise wider use of these things? How do you reward students for co-producing materials that perhaps have high production values?
A3) One way of incentivising students is to give them some sort of credit for doing something; that was one possibility we considered at one point – that we could build some of these activities into their assessment. I would like to think that just the idea of sharing and co-creating these things with each other, that this would be a resource of their own made in their own way, would be motivating in itself. But I think it’s an unsolved problem in this area generally, how to get these online community activities to work properly without them fizzling out – people drop out of the activity. That is an interesting and unsolved problem in elearning, I think.
Q4) On that point, it seems to me that this is something we would develop top-down, that we’d need to develop special systems for this. Don’t you agree that this idea of students learning together and sharing artefacts is commonplace, and is happening almost in a bottom-up way, with people following each other on Twitter and so on?
A4) I agree to some extent, but it’s patchy. I think with better tools and better context we could help them to do that. Things like Twitter and specific Facebook groups exist, but they are isolated phenomena to some extent.
Q5) Some of your interfaces are very media-rich. How easy is it to divide attention when you have all that going on – notes, lecture video etc.? What’s important here?
A5) It’s a good question. In the interface I showed to the online lecture, often you don’t need the picture of the lecturer – that’s probably not what students will watch; mainly they will focus on the main screen, probably the smartboard. Sometimes you want to see what’s being pointed at. The audio and the main presentation screen are the main things.
Q5) And Diver, with its zoom-in?
A5) Yes, I’d like to build that in. None of these things are terribly hard to do – it’s not hard to code them. It’s interesting to me, in a way, that we don’t see more being done with them.
Q6) I was quite interested in your comment about students learning more from struggling students – what’s going on there? Is it that they make their process more obvious? Is it about comparing your own understanding?
A6) Some of all of those. You benefit from seeing correction, from comparing understanding. It seems to me there is something quite useful about the affective side of this – empathy draws students in to focus on the solution in a way that a tutor with a blackboard wouldn’t do.
Q7) It seems implicit in the name rich media that these are good – they’re rich! But I find I use video as little as possible – I’ll watch it only if there isn’t a paper that they’ve written. I’d love to think that students watch me lecturing on video as the best medium, but I’m not convinced that it’s any more valuable than looking the same thing up in a textbook.
A7) That’s fair – not all media need to be rich. There are a lot of situations in which various elearning tools are in use where a blackboard might be equally effective or better. I’m not saying these are the only or always the best way to do education, but they do have a value, particularly for distance learners – it seems to me that seeing video of an experience is useful for those who were not there in person. It’s an attempt not to totally supplant other types of approach, but to make the best use of them that we can.
Q8 – Bruce Currie, formerly of the Digital Design course) Enlightenment philosophers defaced images of God; the modern version of that might be the Taliban defacing images of Buddha, in response to which it has been suggested that laser projections of Buddha could be used to build peace. Do you see students using rich digital media to solve big world problems?
A8) I don’t know, I hope they may.