Feb 25 2016
Today we have our second eLearning@ed/LTW Showcase and Network event. I’m liveblogging so, as usual, corrections and updates are welcome. 
Jo Spiller is welcoming us along and introducing our first speaker…
Dr. Chris Harlow – “Using WordPress and Wikipedia in Undergraduate Medical & Honours Teaching: Creating outward facing OERs”
I’m just going to briefly tell you about some novel ways of teaching medical students and undergraduate biomedical students using WordPress and platforms like Wikipedia. So I will be talking about our use of WordPress websites in the MBChB curriculum. Then I’ll tell you about how we’ve used the same model in Reproductive Biology Honours. And then how we are using Wikipedia in Reproductive Biology courses.
We use WordPress websites in the MBChB curriculum during Year 2 student selected components. Students work in groups of 6 to 9 with a facilitator. They work with a provided WordPress template – the idea being that the focus is on the content rather than the look and feel. In the first semester the topics are chosen by the group’s facilitator. In semester two the topics and facilitators are selected by the students.
So, looking at example websites you can see that the students have created rich websites, with content and appendices. It’s all produced online, marked online and assessed online. And once that has happened the sites are made available on the web as open educational resources that anyone can explore and use here: http://studentblogs.med.ed.ac.uk/
The students don’t have any problem at all building these websites and they create these wonderful resources that others can use.
In terms of assessing these resources there is a 50% group mark on the website by an independent marker, a 25% group mark on the website from a facilitator, and (at the students’ request) a 25% individual mark on student performance and contribution which is also given by the facilitator.
In terms of how we have used this model with Reproductive Biology Honours it is a similar idea. We have 4-6 students per group. This work counts for 30% of their Semester 1 course “Reproductive Systems” marks, and assessment is along the same lines as the MBChB. Again, we can view examples here (e.g. “The Quest for Artificial Gametes”). Worth noting that there is a maximum word count of 6000 words (excluding Appendices).
So, now onto the Wikipedia idea. This was something which Mark Wetton encouraged me to do. Students are often told not to use or rely on Wikipedia but, speaking as a biomedical scientist, I use it all the time. You have to use it judiciously but it can be an invaluable tool for engaging with unfamiliar terminology or concepts.
The context for the Wikipedia work is that we have 29 Reproductive Biology Honours students (50% Biomedical Sciences, 50% intercalating medics), and they are split into groups of 4-5 students. We did this in Semester 1, week 1, as part of the core “Research Skills in Reproductive Biology”. And we benefited from expert staff including two Wikipedians in Residence (at different Scottish organisations), a librarian, and a learning, teaching and web colleague.
So the students had an introduction to Wikipedia, then some literature searching examples. We went on to groupwork sessions to find papers on particular topics, looking for differences in definitions, spellings, terminology. We discussed findings. This led onto groupwork where each group defined their own aspect to research. And from there they looked to create Wikipedia edits/pages.
The groups really valued trying out different library resources and search engines, and seeing the varying content that was returned by them.
The students then, in the following week, developed their Wikipedia editing skills so that they could combine their work into a new page for Neuroangiogenesis. Getting that online in an afternoon was incredibly exciting. And actually that page was high in the search rankings immediately. Looking at the traffic statistics that page seemed to be getting 3 hits per day – a lot more reads than the papers I’ve published!
So, we will run the exercise again with our new students. I’ve already identified some terms which are not already out there on Wikipedia. This time we’ll be looking to add to or improve High Grade Serous Carcinoma, and Fetal Programming. But we have further terms that need more work.
Q1) Did anyone edit the page after the students were finished?
A1) A number of small corrections and one query about whether a PhD thesis was a suitable reference – whether a primary or secondary reference. What needs to be done more than anything else is building more links into that page from other pages.
Q2) With the WordPress blogs you presumably want some QA as these are becoming OERs. What would happen if a project got, say, a low C?
A2) Happily that hasn’t happened yet. That would be down to the tutor I think… But I think people would be quite forgiving of undergraduate work, as which it is clearly presented.
Q3) Did you consider peer marking?
A3) An interesting question. Students are concerned that there are peers in their groups who do not contribute equally, or let peers carry them.
Comment) There is a tool called PeerAim where peer input weights the marks of students.
Q4) Do all of those blog projects have the same model? I’m sure I saw something on peer marking?
A4) There is peer feedback but not peer marking at present.
Dr. Anouk Lang – “Structuring Data in the Humanities Classroom: Mapping literary texts using open geodata”
I am a digital humanities scholar in the school of Languages and Linguistics. One of the courses I teach is digital humanities for literature, which is a lovely class and I’m going to talk about projects in that course.
The first MSc project the students looked at was to explore Robert Louis Stevenson’s The Dynamiter. Although we were mapping the text, the key aim was to understand who wrote what part of the text.
So the reason we use mapping in this course is because these are brilliant analytical students but they are not used to working with structured data, and this is an opportunity to do that. So, using CartoDB – a brilliant tool that will draw data from Google Sheets – they needed to identify locations in the text, but I also asked students to give texts an “emotion rating”. That is a rating of the intensity of emotion based on the work of Ian Gregory, a spatial historian who has worked with Lake District data on the emotional intensity of those texts.
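As a rough illustration of the kind of hand-built table CartoDB can ingest: the column names, coordinates and the emotion scale below are my own assumptions for the sketch, not the course's actual schema.

```python
import csv
import io

# Hypothetical hand-coded rows: place mentions from the text plus an
# "emotion rating". All values here are illustrative, not the real dataset.
rows = [
    {"place": "London", "lat": 51.5074, "lon": -0.1278, "emotion": 4},
    {"place": "Utah", "lat": 39.3200, "lon": -111.0937, "emotion": 2},
]

# CartoDB can pull data from a Google Sheet or a CSV upload; either way
# the underlying structure is a flat table like this one.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["place", "lat", "lon", "emotion"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```

The point of the exercise is less the tool than the discipline of forcing a literary text into columns and rows in the first place.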
So, the students build this database by hand. And then loaded into CartoDB you get all sorts of nice ways to visualise the data. So, looking at a map of London you can see where the story occurs. The Dynamiter is a very weird text with a central story in London but side stories about the planting of bombs, which is kind of played as comedy. The view I’m showing here is a heatmap. So for this text you can see the scope of the text. Robert Louis Stevenson was British, but his wife was American, and you see that this book brings in American references, including unexpected places like Utah.
So, within CartoDB you can try different ways to display your data. You can view a “Torque Map” that shows chronology of mentions – for this text, which is a short story, that isn’t the most helpful perhaps.
Now we do get issues of anachronism. OpenStreetMap – on which CartoDB is based – is a contemporary map, and the geography and locations on the map change over time. And so another open data source was hugely useful in this project. Over at the National Library of Scotland there is a wonderful maps librarian called Chris Fleet who has made huge numbers of historical maps available not only as scanned images but as map tiles through a Historical Open Maps API, so you can zoom into detailed historical maps. That means that when mapping a text from, say, the late 19th Century, it’s incredibly useful to view a contemporaneous map alongside the text.
You can view the Robert Louis Stevenson map here: http://edin.ac/20ooW0s.
So, moving to this year’s project… We have been looking at Jean Rhys. Rhys was a white Creole born in Dominica who lived mainly in Europe. She is a really located author, with place important to her work. For this project, rather than hand coding texts, I used the wonderful Edinburgh Geoparser (https://www.ltg.ed.ac.uk/software/geoparser/) – a tool I recommend, and a new version is imminent from Claire Grover and colleagues in LTG, Informatics.
So, the Geoparser goes through the text and picks out text that looks like places, then tells you which it thinks is the most likely location for that place – based on aspects like nearby words in the text etc. That produces XML, and Claire has created an XSLT stylesheet for me, so all the students have had to do is manually clean up that data. The Geoparser gives you a GeoNames reference that enables you to check latitude and longitude. Now this sort of data cleaning, the concept of gazetteers – these are bread and butter tools of the digital humanities. These are tools which are very unfamiliar to many of us working in the humanities. This is open, shared, and the opposite of the scholar secretly working alone in the library.
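A sketch of the kind of clean-up step this describes, using only Python's standard library. The XML below is a simplified hypothetical rendering, not the Geoparser's actual output schema, and the gazetteer ids are made up.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified stand-in for geoparsed output: real Edinburgh
# Geoparser XML is richer, but the idea is the same – each candidate place
# carries a gazetteer reference plus latitude and longitude.
xml_text = """
<places>
  <place name="Halifax" gazref="geonames:0000001" lat="53.72" long="-1.86"/>
  <place name="Paris" gazref="geonames:0000002" lat="48.85" long="2.35"/>
</places>
"""

root = ET.fromstring(xml_text)
records = [
    (p.get("name"), p.get("gazref"), float(p.get("lat")), float(p.get("long")))
    for p in root.iter("place")
]
# Students then eyeball records like these against the text, e.g. checking
# whether "Halifax" should really resolve to Halifax, Nova Scotia.
```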
We do websites in class to benefit from that publicness – and the meaning of public scholarship. When students are doing work in public they really rise to the challenge. They know it will connect to their real world identities. I insist students show their name, their information, their image because this is part of their digital scholarly identities. I want people who Google them to find this lovely site with its scholarship.
So, for our Jean Rhys work I will show you a mock-up preview of our data. One of the great things about visualising your data in these ways is that you can spot errors in your data. So, for instance, checking a point in Canada we see that the Geoparser has picked Halifax, Nova Scotia when the text indicates Halifax in England. When I raised this issue in class today the student got a wee bit embarrassed and made immediate changes… which again is a perk of working in public.
Next week my students will be trying out QGIS with Tom Armitage of EDINA, that’s a full on GIS system so that will be really exciting.
For me there are real pedagogical benefits of these tools. Students have to really think hard about structuring their data, which is really important. As humanists we have to put our data in our work into computational form. Taking this kind of class means they are more questioning of data, of what it means, of what accuracy is. They are critically engaged with data and they are prepared to collaborate in a gentle kind of way. They also get to think about place in a literary sense, in a way they haven’t before.
We like to think that we have it all figured out in terms of understanding place in literature. But when you put a text into a spreadsheet you really have to understand what is being said about place in a whole different way than a close reading. So, if you take a sentence like: “He found them a hotel in Rue Lamartine, near Gare du Nord, in Montmartre”. Is that one location or three? The Edinburgh Geoparser maps two points but not Rue Lamartine… so you have to use Google Maps for that… And is the accuracy correct? And you have to discuss whether those two map points are distorting. The discussion there is richer than any other discussion you would have around close reading. We are so confident about close readings… We assume it as a research method… This is a different way to close read… to shoehorn the text into a different structure.
So, I really like Michel de Certeau’s “Spatial Stories” in The Practice of Everyday Life (de Certeau 1984), where he talks about structured space and the ambiguous realities of use and engagement in that space. And that’s what that Rue Lamartine type example is all about.
Q1) What about looking at distance between points, how length of discussion varies in comparison to real distance?
A1) That’s an interesting thing. And that CartoDB Torque display is crude but exciting to me – a great way to explore that sort of question.
OER as Assessment – Stuart Nicol, LTW
I’m going to be talking about OER as Assessment from a student’s perspective. I study part time on the MSc in Digital Education and a few years ago I took a module called Digital Futures for Learning, a course co-created by participants and where assessment is built around developing an Open Educational Resource. The purpose is to “facilitate learning for the whole group”. This requires a pedagogical approach (to running the module) which is quite structured to enable that flexibility.
So, for this course, the assessment structure is 30% position paper (basis of content for the OER), then 40% of the mark for the OER (30% peer-assessed and tutor moderated / 10% self-assessed), and then the final 30% of the marks come from an analysis paper that reflects on the peer assessment. You could then resubmit the OER along with that paper reflecting on that process.
I took this module a few years ago, before the University’s adoption of an open educational resource policy, but I was really interested in this. So I ended up building a course on Open Accreditation, and Open Badges, using weebly: http://openaccreditation.weebly.com/.
This was really useful as a route to learn about Open Educational Resources generally but that artefact has also become part of my professional portfolio now. It’s a really different type of assignment and experience. And, looking at my stats from this site I can see it is still in use, still getting hits. And Hamish (Macleod) points to that course in his Game Based Learning module now. My contact information is on that site and I get tweets and feedback about the resource which is great. It is such a different experience to the traditional essay type idea. And, as a learning technologist, this was quite an authentic experience. The course structure and process felt like professional practice.
This type of process, and use of open assessment, is in use elsewhere. In Geosciences there are undergraduate students working with local schools and preparing open educational resources around that. There are other courses too. We support that with advice on copyright and licensing. There are also real opportunities for this in the SLICCs (Student Led Individually Created Courses). If you are considering going down this route then there is support at the University from the IS OER Service – we have a workshop at KB on 3rd March. We also have the new Open.Ed website about Open Educational Resources, which has information on workshops, guidance, and showcases of University work as well as blogs from practitioners. And we now have an approved OER policy for learning and teaching.
The new OER policy also relates to assessment: we are clear that OERs are created by both staff and students.
And finally, fresh from the ILW Editathon this week, we have Ewan MacAndrew, our new Wikimedian in residence, who will introduce us to Histropedia (Interactive timelines for Wikipedia: http://histropedia.com) and run through a practical introduction to Wikipedia editing.
Wikimedian in Residence – University of Edinburgh – Ewan MacAndrew
Ewan is starting by introducing us to “Listen to Wikipedia”, which turns live edits on Wikipedia right now into melodic music. And that site colour codes for logged in users, anonymous users, and clean up bots all making edits.
My new role, as Wikimedian in Residence, comes about from a collaboration between the University of Edinburgh and the Wikimedia Foundation. My role fits into their complementary missions, which fit around the broad vision of imagining a world where all knowledge is openly available. My role is to enhance teaching and the curriculum, but also to help highlight the rich heritage and culture around the university, and to help raise awareness of its commitment to open knowledge. But this isn’t a new collaboration – it is part of an ongoing relationship built through events and activities.
It’s also important to note that I am a Wikimedian in Residence, rather than a Wikipedian in Residence. Wikimedia is the charitable foundation behind Wikipedia, but they have a huge family of projects including Wikibooks, MediaWiki, Wikispecies, etc. That includes Wikidata, a database of all knowledge that humans and machines can read, which is completely language independent – the model Wikipedia is trying to work towards.
So, what is Wikipedia and how does it work? Well we have over 5 million articles, 38 million pages, over 800 million edits, and over 130k active users.
There has been past work by the University with Wikimedia. There was the Women, Science and Scottish editathon for ILW 2015, Chris Harlow already spoke about his work, there was an Ada Lovelace editathon from October 2015, Gavin Willshaw was part of #1Lib1Ref day for Wikipedia’s 15th Birthday in January 2016. Then last week we had the History of Medicine editathon for ILW 2016 which generated 4 new articles, improved 56 articles, uploaded over 500 images to Wikicommons. Those images, for instance, have huge impact as they are of University buildings and articles with images are far more likely to be clicked on and explored.
You can explore that recent editathon in a Storify I made of our work…

View the story “University of Edinburgh Innovative Learning Week 2016 – History of Medicine Wikipedia editathon” on Storify

We are now looking at new and upcoming events, our next editathon is for International Women’s Day. In terms of ideas for events we are considering:

  • Edinburgh Gothic week – cross curricular event with art, literature, film, architecture, history, music and crime
  • Robert Louis Stevenson Day
  • Scottish Enlightenment
  • Scottish photographers and Image-a-thons
  • Day of the Dead
  • Scotland in WWI Editathon – zeppelin raids, Craiglockhart, etc.
  • Translationathons…

Really open to any ideas here. Do take a look at reports and updates on the University of Edinburgh Wikimedian in Residence activities here: https://en.wikipedia.org/wiki/Wikipedia:University_of_Edinburgh

So, I’m going to now quickly run through the five pillars of Wikipedia, which are:

  • An encyclopedia – not a gossip column, or blog, etc. So we take an academic, rigorous approach to the articles we are putting in.
  2. Neutral point of view – trying to avoid “peacock terms”. Only saying things that are certain, backed up by reliable published sources.
  3. Free content that anyone can use, edit and distribute.
  4. Respect and civility – when I run sessions I ask people to note that they are new users so that others in the community treat you with kindness and respect.
  • No firm rules – for every firm rule there has to be flexibility to work with subjects that may be tricky, might not quite work. If you can argue the case, and that is accepted, there is the freedom to accept exceptions.

People can get bogged down in the detail of Wikipedia. Really the only rule is to “Be bold not reckless!”.

When we talk of Wikipedia and what a reliable source is, Wikipedia is based on reliable published sources with a reputation for fact-checking and accuracy. Academic and peer-reviewed scholarly material is often used (barring the no original research distinction). High quality mainstream publications too. Blogs are generally not seen as reliable, but sites like the BBC and CNN are. And you need several independent sources for a new article – generally we look for 250 words and 3 reliable sources for a new Wikipedia article.
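As a toy illustration of that rule of thumb: the function and its thresholds below are mine for the sketch, not anything Wikipedia itself enforces, and real review is human judgement rather than counting.

```python
# A deliberately simple check of the "250 words, 3 reliable sources"
# guideline mentioned above. It only counts words and sources; deciding
# whether a source is actually reliable is the hard, human part.
def meets_rule_of_thumb(draft_text, sources, min_words=250, min_sources=3):
    return len(draft_text.split()) >= min_words and len(sources) >= min_sources
```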

Ewan is now giving us a quick tour through enabling the new (fantastic!) visual editor, which you can do by editing your settings as a registered user. He’s also encouraging us to edit our own profile page (you can say hello to Ewan via his page here), formatting and linking our profiles to make them more relevant and useful. Ewan is also showing how to use Wikimedia Commons images in profiles and pages. 

So, before I finish I wanted to show you Histropedia, which allows you to create timelines from Wikipedia categories.

Ewan is now demonstrating how to create timelines, to edit them, to make changes. And showing how the timelines understand “important articles” – which is based on high visibility through linking to other pages. 

If you create a timeline you can save it either as a personal timeline, or as a public timeline for others to explore. The other thing to be aware of is that WikiData queries can be modified to search for more specialised topics – for instance looking at descendants of Robert the Bruce. Or even something as specific as female descendants of Robert the Bruce born in Denmark. That just uses Robert the Bruce and a WikiData term called “child of”, and from those two fields you can build a very specific timeline. Histropedia uses both categories and WikiData terms… so here it is using both of those.
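A sketch of the sort of Wikidata query underlying a "descendants of Robert the Bruce" view. P40 ("child") is a real Wikidata property, but the helper function and the QXXXXXX placeholder are mine – you would look up the actual item id on wikidata.org before running this against the query service.

```python
# Build a SPARQL query that follows the "child" property (wdt:P40)
# transitively from a starting item to all descendants. The "+" is a
# SPARQL property path operator meaning "one or more hops".
def descendants_query(item_id, child_property="wdt:P40"):
    return f"""
SELECT ?descendant ?descendantLabel WHERE {{
  wd:{item_id} {child_property}+ ?descendant .
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
"""

query = descendants_query("QXXXXXX")  # placeholder item id
```

Adding further constraints (sex, place of birth) is just a matter of adding more triples to the WHERE clause, which is what makes queries like "female descendants born in Denmark" possible.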


Q1) Does Wikidata draw on structured text in articles?

A1) It’s based on properties like “an instance of”… “place of education” or “created on” etc. That’s one of the limitations of Histropedia right now… It can’t differentiate between birth and death dates versus dates of reign. So it is limited to birth and death dates, foundation dates, etc.

Q2) How is Wikipedia “language independent”?

A2) Wikipedia is language dependent; Wikidata is language independent. So, no matter what tool uses Wikidata, it functions in every single language. Wikipedia doesn’t work that way – we have to transfer or translate texts between different language versions of Wikipedia. Wikidata uses a Q code that is neutral across all languages, which gets round that issue of language.

Q3) Are you holding any introductory events?

A3) Yes, trying to find best ways to do that. There are articles from last week’s editathon which we could work on.

And with that we are done – and off to support our colleague Jeremy Knox’s launch of his new book: Posthumanism and the Massive Open Online Course: Contaminating the Subject of Global Education.

Thanks to all our fantastic speakers, and our lovely organisers for this month’s event: Stuart Nicol and Jo Spiller.

Nov 07 2013

Today I am connected to one of a new series of JISC and ALT (Association for Learning Technology) Digital Literacy webinars: Multimodal Profusion in the Massive Open Online Course – Jeremy Knox, Sian Bayne.

I will be taking notes throughout the session and hopefully catching many of the questions etc. As usual this is a liveblog so my notes may include the odd error or typo – please let me have your thoughts or corrections in the comments below!  

:: Update: the recording for this session is now available here ::

According to Lesley Gourlay’s introduction these sessions are all being recorded and made available online via the ALT website. These webinars are based on forthcoming papers in Research in Learning Technology – a special issue on Scholarships and Literacies in the Digital Age, looking beyond practice and into greater overarching change. This will be out towards the end of the year.

Lesley is introducing Jeremy and Sian. Sian’s research interests are related to teaching and learning online, particularly around post humanism and multimodal academic literacies. Jeremy is working on a PhD on critical post humanism in open educational environments.

We are beginning with Sian: We will be building on work we have done in our E-Learning and Digital Cultures MOOC and looking at how we can theorise what we have encountered there.

The E-Learning and Digital Cultures MOOC has just begun its second run. It initially ran in early 2013 with around 27,000 students and is running again, launched this week, with around 19,000 students. And we have tried to see this as going beyond the classic MOOC lectures. Instead we have curated open educational resources, web essays, etc. alongside theoretical work and educational thinking. And we then encourage participants to blog their thoughts. We have discussion forums but we also encourage them to use Twitter (#edcmooc), to blog their experience… influenced more by the cMOOC design than by the conventional xMOOC design. And we saw before – and are seeing again – a real sense of community development. We see a very active Facebook group (4,500+), a G+ group (3,800+), etc.

Jeremy: For me one of the ways in which this sort of massive participation seemed to manifest was in the submission of final assignments to the EDCMOOC. We had over 1700 artefacts submitted. We asked students to create something that commented on one or all of the course themes, something creative designed to be experienced on the web. What was really interesting to me was the requirement to make the digital artefact public… we initially did that so that we could use peer assessment – using the peer assessment module – and in order for that to work, and to mirror the public open pedagogy we were trying to use. But as a result this digital creativity began to be collected and curated on the web. So in the image we see on the screen – a Padlet page of 330 artefacts – you get this profusion of digital creative work. That’s significant because not only is assessment usually hidden, it is also usually private. But this is really open and collaborative as an experience.

And that really led us to thinking about this as “sociomaterial”. This is emerging in some educational research (Fenwick, Edwards and Sawchuk 2011) and encompasses ANT, Complexity Theory, Cultural Historical Activity Theory and Spatial Theory. So we wanted to think about this as a way of perceiving relationships between humans (the social) and non-humans (the material). The relation is all important here, as this perspective is about disregarding form before the relation, instead seeing the relation between these things as the key focus. I like the idea from Karen Barad, who talks about “intra-action” rather than “inter-action” – a way of talking about those things without having to regard them as pure forms.

So why the sociomaterial? Well it counters what can be seen as an over-emphasis on human agency, particularly in digital literacy discourse. The idea that technology is just there to achieve educational goals – an approach that overlooks the role of technology and the change or influence it can have. And it also responds to the idea that online environments are “virtual” or somehow “immaterial” – we are moving to a place where the web is something real and tangible. And when we get to the idea of things being tangible we can see them as situated in educational events. And it offers an alternative way of understanding knowledge – what it is and how it comes about. This isn’t too philosophical but part of the day-to-day work of educators, and the sociomaterial has some profound insights here. And it allows us to acknowledge ways that software and algorithms co-produce digital work (rather than being simple “tools” for human use).

Sian: At this point we thought it might be useful to say what we mean by digital artefacts, those created with a sort of sociomaterial literacy. So I thought I would show a few examples. Firstly “Twitterchat by cikgubrian” on YouTube, which brought together an assemblage of impressions of the EDC MOOC. Next up, “My Scottish MOOC by Willa Ryerson” – another animation about the experience of the Scottish MOOC. Finally “Our #EDCMooc Experience: Class? Network? Something Else?”, a “Haiku Deck” using images and text comments. Now Jeremy will do a more detailed reading of some of these artefacts.

Jeremy: I want to provide a more detailed overview of how these might be looked at as sociomaterial objects. Firstly, “World Builder: a crowd-sourced tag heart” by John O’Neill. This was created with a tag cloud tool. What struck me was that this was submitted as a piece of work to be assessed for representing a theme of the course. It is put forward as a stable, contained piece of work. But I want to look at the processes that produced it… which question its source and finality. It’s a sociomaterial reading that enables us to do this. So this text was produced in the responses to a video used in the course called “World Builder”, about an idealised virtual world for someone apparently in a coma in hospital. The text is drawn from around 85 posts in a forum thread from about 75 identified participants, and it was this participant who took that text from the forum. A number of the responses addressed specific questions that we as a teaching team put forward, so our text informed that discussion as well. So the distributed elements were not just discursive: there were technological and algorithmic elements that shaped these texts. There are a number of automatic processes that take place on this text. Several interesting variables come into play here. The scale of font to relative frequency is adjustable. The tightness regulates how tightly the words fit into a shape. But there are also automatic algorithmic changes – like the removal of small words, the combining of tenses, sometimes plurals. These are encoded into the software. And there is the heart shape as well… which determines the location and proximity of words. So this seems to embody the symbolic and the material together. It is a hybrid object, a continuity of matter and culture. Social and material are not distinct.
And as significant as the contesting and blurring of origins, its stability and finality as an object is also under question… it was submitted as a Flickr image, also in a Wallwisher, also on the Tagxedo website. On the latter website each word is a hyperlink. That really blurs the status of the object as final for me.
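The automatic steps Jeremy describes – dropping small words, counting frequencies, scaling font size to relative frequency – can be sketched roughly like this. The stopword list and scaling formula are illustrative assumptions, not Tagxedo's actual algorithm.

```python
from collections import Counter

# A toy stopword list standing in for the tool's removal of small words.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it"}

def tag_cloud(text, min_size=10, max_size=48):
    """Map each surviving word to a font size proportional to its frequency."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    if not counts:
        return {}
    top = max(counts.values())
    return {w: min_size + (max_size - min_size) * c // top
            for w, c in counts.items()}

sizes = tag_cloud("the world builder builds a world of worlds")
```

Even this crude version shows the sociomaterial point: the stopword set, the shape, and the scaling are all authorial decisions encoded in software rather than made by the human contributor.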

And the second example is “E-Learning and Human 3.0” by Nick Hood, created with VideoScribe. It’s presentation software using text and an animated hand. Once again this presentation has come about from some really interesting and layered processes. So the user inputs text and positions it within a sort of whiteboard space. And selects from some existing images. And you choose a sort of “preferred limb” for writing. This represents an archetypal black box of digital creation. There is a tension between software accessibility and usability – this software is clearly both accessible and usable – and, on the other hand, a kind of openness and user agency. The user doesn’t have fantastic control. That tension is also about absence and presence… the hand gives a sense of presence, the spatial aspect of the classroom that draws on the idea of the whiteboard. But the surface layer conceals non-human agencies at play.

So firstly I wanted to touch on the image of the hand. This is a screen capture of the video options – the limb or writing implement you’d like to animate your presentation with. Most are arms, some are instruments, one is a foot. So you enact a teaching body different from the author – you are distributing the teaching body. And the hand is animated by software that precedes the author. The teaching body is performed by this really complex assemblage of bodies, codes, and texts. These are co-constitutive, not simply symbolic. The teacherly body is human and non-human at once.

The other thing is this straightforward way of simulating the classroom space. This was submitted via YouTube, where the video has algorithmically generated suggestions. And it will consider the viewer currently watching as well as other viewers of this video – and what they have looked at. This is a complex and ongoing algorithmic process of human interaction that persistently changes that page and that video. Elements are rearranged, reordered, constantly reproduced by humans and algorithms. Human, body, algorithm and non-human actor are all present and interacting.

Sian: so I guess we want to end with implications – what does this all mean? Jeremy picked on two of thousands of artefacts to think about how they fit into code, algorithms and agency. Some themes here:

Non-representationalism – seeing knowledge not as something re-produced or re-created outside of a situation (the human mind), but instead as within and part of an enacted relational process. Does the artefact convey the intentions of the author? It is about a more complex performance involving both the person and the algorithmic elements. A new way of understanding that.

Anti-anthropocentrism – the decentring of the human, or human author, as the authentic single author of a digital work. This idea of technology in our service is problematised… instead decentring the subject allows us to move beyond an instrumental view of technology and simplistic ideas of empowerment. It helps us engage critically. So for instance the tool used by Nick presents all limb options as white, forcing us to think critically about that. So we have fundamental issues to consider here.

Both artefacts are interesting, and we could have spoken about hundreds of examples. Our overarching point is to see digital literacy as something other than technical mastery, drawing instead on theoretical areas that decentre human intention.

Jeremy: So some conclusions to add to some of that. I find it interesting that in much digital literacy work you see this emphasis on skills training and future-proofing. The idea of training, especially in schools, to enable students to be competent citizens for the future. Interesting to consider that in the context of anxiety and fear in relation to technology. Perhaps this may be a response to the loss of stability and authority in digital space.

We see the digital artefacts of the EDCMOOCs as a demonstration of complex, contingent, specific and relational sociomaterial practices.

The resulting knowledge might be considered a collective enactment of human and non-human agencies. Context matters here.

And this perspective gives us a new way to look at digital literacies. We see technology as having a role that extends to the wider social, cultural and technological contingencies which shape work produced in educational contexts.


Q1) Are YouTube videos on any channels?

A1 – Sian) We can share a list of the videos included here. I can also send around some sites where MOOC students have tried to crowdsource and curate these.

Q2) Interesting interpretation: how close is your relational-sociomaterial stance to Siemens and Downes’ Connectivism?

A2 – Jeremy) Siemens and Downes are doing good work bringing the social constructivist view of MOOCs up to date. For me it’s about how technology is perceived. A lot of the connectivism work slips into an instrumentalist view of technology as there to form connections. Sociomaterial perspectives take a more nuanced view. Siemens has talked about “non-human devices” so there are some interesting crossovers. But the view of technology is where they don’t quite correlate.

A2 – Sian) Connectivism has made some great work and shifts in terms of pedagogical design, but yes, it is still anthropocentric, with less focus on the materiality of those networks. That is the slight difference for me from the sociomaterial approach we’ve taken here.

Q3) Why Collaborate rather than Google+ Hangouts?

A3 – Lesley) ALT’s preferred method due to numbers.

Q4 – Nick) Is there any aspect of your research that considers the teacher as assessor, and how aligned the teacher’s digital literacy has to be with the student’s digital literacy? Some students submit work that can be challenging to assess in terms of which parts are the student’s own work versus the choice of tool used, and being able to interpret what the student’s content is.

A4 – Sian) Such an important question. Partly about teachers’ knowledge and understanding. Partly about what the tool can do. But it also troubles the notion of assessment. And it troubles the frameworks of assessment in particular – those are grounded in textual history, but this is much more about interpretation, and the interpretation of the teacher. We are as much taxing our interpretation as the students’ skills. It questions intentionality.

A4 – Jeremy) A great question. The sociomaterial reading really questions whether we can assess the skill of the author apart from the skill of the algorithm. The YouTube recommendation algorithm… we don’t need to work out exactly what it’s doing, that’s not the point, but it’s about showing it as entangled and enmeshed; the algorithm isn’t a purely material form, and you can’t separate out the intention of the author. And that really troubles identifying and assessing achievements. Interpretation is an interesting way to move that forward.

Q5) What criteria do you use to assess the students’ artefacts or creations?

A5 – Jeremy) These were peer assessed. We defined some criteria within the course and asked students to peer assess each other’s work. Students submitted the URLs, and the software allocated each URL to three students for feedback and grading. We were really experimenting with peer assessment. We weren’t trying to impose a sociomaterial assessment; these readings are a response to that process.

A5 – Sian) We drew on experience of peer assessment from the MSc in eLearning. The criteria weren’t sociomaterial exactly. There is another aspect of form here: ideally we would respond in the same form as the submitted artefact.

Q6) Is the Edinburgh MOOC a cMOOC? And I’m not clear on the difference!

A6 – Jeremy) A cMOOC is a connectivist MOOC, the likes of Siemens, Downes and Cormier, who were experimenting with open content and assessment. They were the original courses called MOOCs. Later Coursera, EdX etc. created platforms called MOOCs, called xMOOCs to distinguish them from cMOOCs. So cMOOCs are more radical and distributed; xMOOCs are hosted centrally, usually by established universities, and high profile. I’m not sure we were either. I’m not convinced either is a valid way to talk about MOOCs. When xMOOCs first emerged, the first wave contained video lectures and quizzes, but actually things are moving on – Sian has been doing some work on this – but we weren’t really either. We wanted to combine an interest in experimentation with the Coursera platform.

A6 – Sian) Myself and Jen Ross have been doing some work for the UK HEA about MOOC pedagogies. No-one is really talking about xMOOCs or cMOOCs so much anymore. One message out of that work is that in the UK it is really only hybrid pedagogies.

Q8) In terms of digital literacy… perhaps the issue is that we are not sure what literacy means in any context.

A8 – Jeremy) Robin Goodfellow has done some great work on what we mean when we say “digital literacy”. We were taking a slightly different approach, rethinking the idea of the human at the centre. See Sue Thomas’ interesting work on the complexities of literacy, of transliteracies – the complexities and factors here. But again, for us that work still has the idea of the tool as something separate from the person using it.

A8 – Sian) I’d agree that literacy is an increasingly problematic term – Robin has done good work here, but we have terms like “emotional literacy” etc. Some real muddiness now for researchers.

Q9 – from me) In terms of critiquing digital literacies, how much of your critique of the instrumental approach is actually grounded in the pragmatic needs of policy makers, funders, etc.? Whilst skills-based approaches are problematic, they are actionable for those decision makers. How would more sociomaterial approaches be actionable in terms of policy, in terms of ensuring critically skilled students/individuals?

A9 – Sian) I think you are right, skills-based approaches can be addressed by policies, but they construct literacies as deficits, so it’s about rethinking literacy as capacities. To think again about how technology plays an active part in the way meaning is constructed. That is hard in terms of policies but it lets us move away from the idea of deficits and competencies…

A9 – Jeremy) Great question. It makes me think about the issues of literacies as a driver for MOOCs, efficiency gains etc. For me that question is great because it points to much wider institutional and political factors at play and the wider discourse around elearning.

Q10) Will you run the same course again?

A10 – Sian) We intend to offer it three times. We have made small changes this time and possibly again… but after that… well MOOCs are moving so quickly. I’m sure we’ll want to ride whatever waves are coming next…

A10 – Jeremy) There was a particular MOOC moment and I feel privileged to have been teaching in that moment. As a team we would be interested in working at the critical edge of what is happening, and I’m not sure MOOCs will be that in the near future. To add to what Sian said, we had a lot of feedback on the first MOOC. Around 60% of the first wave of students worked in education and we have used their feedback. We shall do that again. But we also like to surprise people, so we look forward to the third MOOC!

Q11) Seeing how different and personal those artefacts are for each learner, is it possible to define any sort of ‘common’ digital literacy, or would it be different for each person?

A11 – Jeremy) Yes, I think it really questions that idea… that distribution of agency and creativity. So many people were involved in creating that word cloud, including us as teachers. Of course the author plays a significant role in that particular coming together. But yeah, it definitely questions that.

A11 – Sian) I’d agree with that. That’s what’s exciting about these academic forms: they can’t be flattened like traditional academic forms. And it questions what we do when we assess academic work.

Q12 – Nick) I was just wondering about the different knowledge that participants arrive with… the issue of literacies and how they change, it moves all the time

A12 – Sian) It does really move, and it really questions assessable terms.

A12 – Jeremy) That relates to the earlier question. It is so situational. It is not really assessable against generalisable criteria. If we think about these as singularities, it is tricky to see how you might understand them, and how important the situation they come about through is.

Q13 – Lesley) I’m interested in what you’ve been talking about in terms of representation, assemblages and how they may be critiqued. The loss of some sort of shared code. When we think of masters or postgraduate level work, how do you engage critically with, say, that heart-shaped word cloud?

A13 – Jeremy) For me the sociomaterial reading is a way to be critical about what happened in order to understand how that artefact came about. It is about recognising the author and the decentring of that author… not a flattening out of considering what’s important, powerful and not represented, just a way to think about what is important and what is powerful in that coming together.

A13 – Sian) I think Lesley and others may be interested in the ESRC Seminar Series that Jeremy and I are involved in around code in educational practice.

And with that we draw to a close with thanks to the speakers and facilitators.

See also: