Jun 27 2016
 
This afternoon I’m at the eLearning@ed/LTW monthly Showcase and Network event, which this month focuses on Assessment and Feedback.
I am liveblogging these notes so, as usual, corrections and updates are welcomed. 
The wiki page for this event includes the agenda and will include any further notes etc.: https://www.wiki.ed.ac.uk/x/kc5uEg
Introduction and Updates, Robert Chmielewski (IS Learning, Teaching and Web)
Robert consults around the University on online assessment – and there is a lot of online assessment taking place. And this is an area that is supported by everybody. Students are interested in submitting and receiving feedback online, but we also have technologists who recognise the advantages of online assessment and feedback, and we have the University as a whole seeing the benefits around, e.g. clarity over meeting timelines for feedback. The last group here is the markers and they are more and more appreciative of the affordances of online assessment and feedback. So there are a lot of people who support this, but there are challenges too. So, today we have an event to share experiences across areas, across levels.
Before we kick off I wanted to welcome Celeste Houghton. Celeste: I am the new Head of Academic Development for Digital Education at the University, based at IAD, and I’m keen to meet people, to find out more about what is taking place. Do get in touch.
eSubmission and eFeedback in the College of Humanities and Social Science, Karen Howie (School of History, Classics & Archaeology)
This project started in February 2015, about 18 months ago. The College of Humanities and Social Sciences wants 100% electronic submission/feedback where “pedagogically appropriate” by the 2016/17 academic year. Although I’m saying electronic submission/feedback, the in-between marking part hasn’t been prescribed. The project board for this work includes myself, Robert and many others, any of whom you are welcome to contact with any questions.
So, why do this? Well there is a lot of student demand for various reasons – legibility of comments; printing costs; enabling remote submission. For staff the benefits are more debatable but they can include (as also reported by Jisc) increased efficiency and convenience. Benefits for the institution (again as reported by Jisc) include measuring feedback response rates, and efficiencies that free up time for student support.
Now some parts of CHSS are already doing this at the moment. Social and Political Studies are using an in-house system. Law are using GradeMark. And other schools have been running pilots, most of them with GradeMark, and these have been mostly successful. But we’ve had lots of interesting conversations around these technologies, around quality of assessment, and about the health and safety implications of staring at a screen more.
We have been developing a workflow and process for the college but we want this to be flexible to schools’ profiles – so we’ve adopted a modular approach that allows for handling of groups/tutors; declaration of own work; checking for non-submitters; marking sheets and rubrics; moderation, etc. And we are planning for the next year ahead, working closely with the Technology Enhanced Learning group in HSS. We are providing training – for markers it’s a mixture of in-School sessions with College input/support; for administrators it is delivered by learning technologists in the school or through discussions with IS LTW EDE. To support that process we have screencasts and documentation currently in development. PebblePad isn’t part of this process, but will be.
To build confidence in the system we’re facing some myth busting etc. For instance, anonymity vs pastoral care issues – a receipt dropbox has been created; and we have an agreement with EUSA that we can deanonymise if identification is not provided. And we have also been looking at various other regulations etc. to ensure we are complying and/or interpreting them correctly.
So, those pilots have been running. We’ve found that depending on your processes the administration can be complex. Students have voiced concerns around “generic” feedback. Students were anxious – very anxious in some cases. It is much quicker for markers to get started with marking, as soon as the deadline has passed. But there are challenges – including when networks go down; for instance there was an (unusual) DDoS attack during our pilots that impacted our timeline.
Feedback from students seems relatively good. 14 out of 36 felt quality of marking was better than on paper – but 10 said it was less good. 29 out of 36 said feedback was more legible. 10 felt they had received more feedback than normal, 11 less. 3 out of 36 would rather submit on paper, 31 would rather submit online. In our first pilot with first year students around 10% didn’t look at feedback for their essay, and 36% didn’t look at tutorial feedback. In our second pilot about 10% didn’t look at feedback on either assignment.
Markers reported finding the electronic marking easier, but some felt that the need to work on screen was challenging or less pleasant than marking on paper.
Q&A
Q1) The students who commented on less or more feedback than normal – what were they comparing to?
A1) To paper-based marking, which they would have had for other courses. So when we surveyed them they would have had some paper-based and some electronic feedback already.
Q2) A comment about handwriting and typing – I read a paper that said that on average people write around 4 times more words when typing than when hand writing. And in our practice we’ve found that too.
A2) It may also be student perceptions – looks like less but actually quite a lot of work. I was interested in students expectations that 8 days was a long time to turn around feedback.
Q2) I think that students need to understand how much care has been taken, and that that adds to how long these things take.
Q3) You pointed out that people were having some problems and concerns – like health and safety. You are hoping for 100% take up, and there is also that backdrop of the Turnitin updates… Are there future plans that will help us to move to 100%?
A3) The health and safety thing came up again and again… But it’s maybe to do with how we cluster assignments. In terms of Turnitin there are updates, but those emerge rather slowly – there is a bit more competition now, and some frustration across the UK, so it looks likely that there will be more positive developments.
Q4) That idea that you can’t release some feedback until it is all ready was interesting… For us in the Business School we ended up releasing feedback when there was a delay.
A4) In our situation we had some marks ready in a few days, others not due for two weeks. A few days would be fair, a few weeks would be problematic. It’s an expectation management issue.
Comment) There is also a risk that if marking is incomplete or only partially done it can cause students great distress…
Current assessment challenges, Dr. Neil Lent (Institute for Academic Development)
My focus is on assessment and feedback. Initially the expectation was that I’d be focused on how to do assessment and feedback “better”. And you can do that to an extent but… The main challenge we face is a cultural rather than a technical challenge. And I mean technical in the widest sense – technological, yes, but also technical in terms of process and approach. I also think we are talking about “cultures” rather than “culture” when we think about this.
So, why are we focussing on assessment and feedback? Well we have low NSS scores, low league table position and poor student experience reported around this area. Also issues of (un)timely feedback, low utility, and the idea that we are a research-led university and the balance of that with learning and teaching. Some of these areas are more myth than reality. I think as a university we now have an unambiguous focus on teaching and learning, but whether that has entirely permeated our organisational culture is perhaps arguable. When you have competing time demands it is hard to do things properly, and to find the space to actually design better assessment and feedback.
So how do we handle this? Well if we look at the “Implementation Staircase” (Reynolds and Saunders 1987) we can see that it comes from senior management, then to colleges, to schools, to programmes, to courses, to students. Now you could go down that staircase or you can go back up… And that requires us to think about our relationships with students. Is this model dialogic? Maybe we need another model?
Activity theory (Engestrom 1999) is a model for a group like a programme team, or course cohort, etc. So we have a subject here – it’s all about the individual in the context of an object, the community, mediating tool, rules and conventions, division of labour. This is a classic activity theory idea, with modern cultural aspects included. So for us the subject might be the marker, the object the assignment, the mediating tool something like the technological tools or processes, rules and conventions may include the commitment to return marks within 2 weeks, division of labour could include colleagues and sharing of marking, community could be students. It’s just a way to conceptualise this stuff.
A cultural resolution would see culture as practice and discourse. Review and reflection need to be an embedded and internalised way of life. We have multiple stakeholders here – not always the teacher or the marker. And we need a bit of risk taking – which can feel scary, and at odds with the need to perform at a high level, but risk taking is needed. And we need to share best practice and experience at events such as this.
So there are technical things we could do better, do right. But the challenge we face is more of a collective one. We need to create time and space for staff to genuinely reflect on their teaching practice, to interact with that culture. But you don’t change practice overnight. And we have to think about our relationship with our students, how we encourage and enable them to be part of the process, and to build up their own picture of what good/bad work looks like. And then the subject, object and culture will be closer together. Sometimes real change comes from giving examples of what works, inspiring through those examples etc. Technological tools can make life easier, if you have the time to spend understanding them and making them work for you.
Q&A
Q1) Not sure if it’s a question or comment or thought… But I’m wondering what we take from those NSS scores, and if that’s what we should work to or if we should think about assessment and feedback in a different kind of paradigm.
A1) When we think about processes we can kid ourselves that this is all linear, it’s cause and effect. It isn’t that simple… The other thing about concentrating on giving feedback on time, so they can make use of it. But when it comes to the NSS it commodifies feedback, which challenges the idea of feedback as dialogic. There are cultural challenges for this. And I think that’s where risk, and the potential for interesting surprises come in…
Q2) As a parent of a teenager I now wonder about personal resilience, to be able to look at things differently, especially when they don’t feel confident to move forwards. I feel that for staff and students a problem can arise and they panic, and want things resolved for them. I think we have to move past that by giving staff and students the resilience so that they can cope with change.
A2) My PhD was pretty much on that. I think some of this comes from the idea of relatively safe risk taking… That’s another kind of risk taking. As a sector we have to think that through. Giving marks for everything risks everything not feeling like a safe space.
Q3) Do we not need to make learning the focus?
A3) Schools and universities push that grades, outcomes really matter when actually we would say “no, the learning is what matters”, but that’s hard in the wider context in which the certificate in the hand is valued.
Comment) Maybe we need that distinction that Simon Riley talked about at this year’s eLearning@ed conference, of distinguishing between the task and the assignment. So you can fail the task but succeed that assignment (in that case referring to SLICCs and the idea that the task is the experience, the assignment is writing about it whether it went well or poorly).
Not captured in full here: a discussion around the nature of electronic submission, and students concern about failing at submitting their assignments or proof of learning… 
Assessment Literacy: technology as facilitator, Prof. Susan Rhind (Assistant Principal Assessment and Feedback)
I’m going to talk about assessment literacy, and about technology as a facilitator. I’m also going to talk about something I’m hoping you may be able to advise about.
So, what is assessment literacy? It is being talked about a lot in Higher Education at the moment. There is a book all about it (Price et al 2012) that talks about competencies and practices. For me what is most important is the idea of ensuring some practical aspects are in place: that students have an understanding of the nature, meaning and level of assessment standards, and that they have skills in self and peer assessment. The idea is to narrow the gap between students and teaching staff. Sadler (1989, 2010) and Boud and Molloy (2013) talk about students needing to understand the purpose and process of assessment. It means understanding assessment as a central part of curriculum design (Medland 2016; Gibbs and Dunbar-Goddet 2009). We need assessment and feedback at the core, at the heart of our learning and teaching.
We also have to understand assessment in the context of quality of teaching and quality of assessment and feedback. For me there is a pyramid of quality (with programme at bottom, individual at top, course in the middle). When we talk about good quality feedback we have to conceptualise it, as Neil talked about, as a dialogic process. So there is individual feedback… But there is also course design and programme design in terms of assessment and feedback. No matter how good a marker is in giving feedback, it is much more effective when the programme design supports good quality feedback. In this model technology can be a facilitator. For instance I wanted to plug Fiona Hale’s Edinburgh Learning Design Roadmap (ELDeR) workshops and processes. This sort of approach lets us build for longer term improvement in these areas.
Again, thinking about feedback and assessment quality, and things that courses can do, we have a table here that compares different types of assessment, the minimum pre-assessment activity to ensure students have assessment literacy, enhancement examples, a minimum requirement for feedback, and some exemplars for marking students’ work.
An example here would be work we’ve done at the Vet School around student use of Peerwise MCQs – here students pushed for use in 3rd year, and for revision at the end of the programme. By the way if you are interested in assessment literacy, or have experience to share, we now have a channel for Assessment and Feedback, and for Assessment Literacy on MediaHopper.
Coming back to those exemplars of students’ work… We run Learning to be an Examiner sessions which students can take part in, and which include the opportunity to mark exemplars of students’ work. That leads to conversations, and an exchange of opinions, to understand the reasons behind the marking. And I would add that anywhere we can bring students and teaching staff closer together only benefits us and our NSS scores. The themes coming out of this work were real empathy for staff, and the quelling of fears. Students also noted that as they took part, the better they understood the requirements, the less important feedback felt.
There have been some trials using ACJ (Adaptive Comparative Judgement), which is the idea that with enough samples of work you can use comparison to put work into an order or ranking. So you present staff with several assignments and they rank them. We ran this as an experiment as it provides a chance for students to see others’ work and compare it to their own. In a survey after the experiment students said they valued seeing others’ responses, and also understanding others’ approaches to comparison and marking.
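As a rough illustration of the idea behind comparative judgement, the sketch below is purely hypothetical: real ACJ tools (including whatever was used in these trials) adaptively choose which pairs to present and fit a statistical model such as Bradley–Terry, rather than simply counting wins. It only shows how a set of pairwise “which is better?” judgements can yield a ranking:

```python
# Minimal sketch: derive a ranking from pairwise comparative judgements
# by counting 'wins'. Real ACJ systems use adaptive pair selection and
# a probabilistic model; this is illustrative only.
from collections import defaultdict

def rank_by_comparisons(judgements):
    """judgements: list of (winner, loser) pairs from markers.
    Returns scripts ordered from most to least preferred."""
    wins = defaultdict(int)
    seen = set()
    for winner, loser in judgements:
        wins[winner] += 1
        seen.update((winner, loser))
    # Sort by number of pairwise wins; break ties alphabetically.
    return sorted(seen, key=lambda s: (-wins[s], s))

# Example: markers compare essays A, B and C pairwise.
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
print(rank_by_comparisons(judgements))  # ['A', 'B', 'C']
```

With enough judgements per script, even this crude win-count converges on a stable ordering; the statistical models used in practice additionally estimate how reliable that ordering is.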
So, my final point here is a call for help… As we think about what excites and encourages students I would like to find a Peerwise like system for free text type questions. Student feedback was good, but they wanted to do that for a lot more questions than just those we were able to set. So I would like to take Peerwise away from the MCQ context so that students could see and comment and engage with each others work. And I think that anything that brings students and staff closer together in their understanding is important.
Q&A
Q1) How do we approach this in a practical way? We’ve asked students to look at exemplar essays but we bump into problems doing so. It’s easy to persuade those who wrote good essays and have moved on to later years, but it’s hard to find students with poorer essays who are willing to share them.
A1) We were doing this with short questions, not long essays. Hazel Marzetti was encouraging sharing of essays and students were reluctant. I think there’s something around expectation management – creating the idea up front that work will be available for others… That one has to opt out rather than opt in. Or you can mock up essays, but you lose that edge of it being the real thing.
Q2) On the idea of exemplars… How do we feel about getting students to do a piece of work, and then sharing that with others on, say, the same topic? You could pick a more tangential topic, but that risks being less relevant – a good essay on the actual topic is properly authentic. But then there is the risk of students copying.
A2) I think that it’s about understanding risk and context. We don’t use the idea of “model answers” but instead “outline answers”. Some students do make that connection… But they are probably those with a high degree of assessment literacy who will do well anyway.
Q3) By showing good work, you can show a good range of pieces with similar scores. And when you show students exemplars you don’t just give out the work – you annotate it, point out what makes it good, the features that make it notable… A way to inspire students and help them develop assessment literacy when judging others’ work.
And with that our main presentations have drawn to a close with a thank you for all our lovely speakers and contributors.  We are concluding with an Open Discussion on technology in Assessment and Feedback.
Susan: Yeah, I’m quite a fan of mandatory activities but which do not carry a mark. But I’d seriously think about not assigning marks for all feedback activities… 
Comment: But the students can respond with “if it’s so important, why doesn’t this carry credit?”
Susan: Well you can make it count. For instance our vet students have to have a portfolio, and are expected to discuss that annually. That has been zero credits before (now 10 credits) but still mandatory. Having said that our students are not as focused on marking in that way.
Comment: I don’t want to be the “ah, but…” person here… But what if a student fails that mandatory non marked work? What’s the make-up task?
Susan: For us we are able to find a suitable bespoke negotiated exercise for the very few students this applies to…
Comment: What about equity?
Susan: I think removing the mark actually removes that baggage from the argument… Because the important thing here is doing the right tasks for the professional world. I think we should be discussing this more in the future.  
And with that Robert is drawing the event to a close. The next eLearning@ed/LTW monthly meet up is in July, on 27th July and will be focused on the programme for attaining the CMALT accreditation.  
