Oct 07 2016
 

PS-15: Divides (Chair: Christoph Lutz)

The Empowered Refugee: The Smartphone as a Tool of Resistance on the Journey to Europe – Katja Kaufmann

For those of you from other continents: a great number of refugees came to Europe last year, from Turkey, Syria, etc., travelling to Germany and Sweden, and Vienna – where I am from – was also a hub. Some of these refugees had smartphones and that was covered in the (right wing) press, which criticised this group’s ownership of devices, but it was not clear how many had smartphones or how they were being used – and that’s what I wanted to look at.

So we undertook interviews with refugees to see if they used them, and how they used them. We were researching empowerment by mobile phones, following Svensson and Wamala Larsson (2015) on the role of the mobile phone in transforming the capabilities of users. Also with reference to N. Kabeer (1999), A. Sen (1999), etc. on meanings of empowerment in these contexts. Smith, Spence and Rashid (2011) describe mobiles and their networks altering users’ capability sets, and phones increasing access to flows of information (Castells 2012).

So, I wanted to identify how smartphones were empowering refugees through: gaining an advantage in knowledge from the experiences of other refugees; sensory information; cross-checking information; and capabilities to oppose the actions of others.

In terms of an advantage in knowledge, refugees described gaining knowledge from previous refugees through reports, routes, maps, administrative processes, warnings, etc. This was through social networks and Facebook groups in particular. So, a male refugee (age 22) described learning which people smugglers could be trusted and which could not. And another (same age) felt that smartphones were essential to being able to get to Europe – because you find information, plan, check, etc.

So, there was retrospective knowledge here, but also engagement with others during their refugee experience and with those ahead on their journey. This was mainly in WhatsApp. So a male refugee (aged 24) described being in Macedonia and speaking to refugees in Serbia, finding out the situation. This was particularly important last year when approaches were changing and border access changed on an hour-by-hour basis.

In terms of Applying Sensory Abilities, this was particularly manifested in identifying their own GPS position – whilst crossing the Aegean or travelling through woods. Finding the road with their GPS, or identifying routes and maps. They also used GPS to find other refugees – friends, family members… Using location-based services was also very important as they could share data elsewhere – sending a GPS location to family members in Sweden for instance.

In terms of Cross-checking information and actions, refugees were able to track routes whilst in the hands of smugglers. A male Syrian refugee (aged 30) checked information every day whilst with people smugglers, to make sure that they were being taken in the right direction – he wanted to head west. But it wasn’t just routes; it was also cross-checking rumours, and checking weather conditions before entering a boat. A female Syrian refugee downloaded an app to check conditions and ensure her smuggler was honest and her trip would be safer.

In terms of opposing actions of others, this was about being capable of opposing the actions of others – orders of authorities, potential acts of (police) violence, risks, fraud attempts, etc. Also disobedience by knowledge – the Greek government gave orders about the borders, but smartphones allowed the sharing of annotated maps so that those orders could be disobeyed. And access to timely information – exchange rates for example – one refugee described negotiating the price of changing money down by Googling the rate. And opposition was also about a means to apply pressure – threatening with or publishing photos. A male refugee (aged 25) described holding up phones to threaten to document police violence, and that was impactful. Also some refugees took pictures of people smugglers as a form of personal protection and information exchange, particularly with publication of the images held as a threat in case of mistreatment.

So, in summary, the smartphone acted as a tool of empowerment for refugees on their journey…

Q&A

Q1) Did you have any examples of privacy concerns in your interviews, or was this a concern for later perhaps?

A1) Some mentioned this, some felt some apps and spaces are more scrutinised than others. There was concern that others may have been identified through Facebook – a feeling rather than proof. One said that she did not send her parents any pictures in case she was mistaken by the Syrian government for a fighter. But mostly privacy wasn’t an immediate concern, access to information was – and it was very successful.

Q2) I saw two women in the data here, were there gender differences?

A2) We tried to get more women but there were difficulties there. On the journey they were using smartphones in similar ways – but I did talk to them and they described differences in use before their journey and talked about picture taking and sharing, the hijab effect, etc.

Social media, participation, peer pressure, and the European refugee crisis: a force awakens? – Nils Gustafsson, Lund University, Sweden

My paper is about receiving/host nations. Sweden took in 160,000 refugees during the crisis in 2015. I wanted to look at this as it was a strange time to live through. A lot of people started coming in late summer and early autumn… Numbers were rising. At first the response was quite enthusiastic and welcoming in host populations in Germany, Austria and Sweden. But as it became more difficult to cope with larger groups of people, there were changes and organising to address the challenge.

And the organisation will remind you of Alexander (??) on the “logic of collective action” – where groups organise around shared ideas that can be joined, ideas, almost a brand, e.g. “refugees welcome”. And there were strange collaborations between government, NGOs, and then these ad hoc networks. But there was also a boom and bust aspect here… In Sweden there were statements about opening hearts, of not shutting borders… But people kept coming through autumn and winter… By December Denmark, Sweden, etc. did a 180-degree turn, closing borders. There were border controls between Denmark and Sweden for the first time in 60 years. And that shift had popular support. And I was intrigued by this. This work is all part of a longer 3-year project on young people in Sweden and their political engagement – how they choose to engage, how they respond to each other. We draw on Bennett & Segerberg (2013), social participation, social psychology, and the notion of “latent participation” – where people are waiting to engage so just need asking to mobilise.

So, this is work in progress and I don’t know where it will go… But I’ll share what I have so far. And I tried to focus on recruitment – I am interested in when young people are recruited into action by their peers. I am interested in peer pressure here – friends encouraging behaviours, particularly important given that we develop values as young people that have lasting impacts. But also information sharing through young people’s networks…

So, as part of the larger project, we have a survey, so we added some specific questions about the refugee crisis to that. So we asked, “you remember the refugee crisis, did you discuss it with your friends?” – 93.5% had, and this was not surprising as it is a major issue. When we asked if they had discussed it on social media it was around 33.3% – much lower, perhaps due to the controversy of the subject matter, but this number was also similar for the 16-25 year old age group.

We also asked whether they did “work” around the refugee crisis – volunteering or work for NGOs, traditional organisations. Around 13.8% had. We also asked about work with non-traditional organisations and 26% said that they had (in the 16-25 age group it was 29.6%), which seems high – but we have nothing to compare this to.

Colleagues and I looked at Facebook refugee groups in Sweden – those that were open – and I scraped these (n=67) and coded them as either groups set up by NGOs, churches, mosques and other traditional organisations, or as ad hoc networks… Looking across autumn and winter of 2015 the posts to these groups looked consistent for the traditional groups, but there was a major spike from the networks around the crisis.

We have also been conducting interviews in Malmö, with 16-19 and 19-25 year olds. They commented on media coverage, and the degree to which the media influences them, even with social media. Many commented on volunteering at the central station, receiving refugees. Some felt it was inspiring to share stories, but others talked about their peers doing it as part of peer pressure, and made critical comments about “bragging” in Facebook posts. Then, as the mood changed, the young people talked about going to the central station being less inviting, about fewer Facebook posts… about feeling that “maybe it’s ok then”. One of our participants was from a refugee background and…

Q&A

Q1) I think you should focus on where interest drops off – there is a real lack of research there. But on the discussion question, I wasn’t surprised that only 30% discussed the crisis there really.

A1) I wasn’t too surprised either here as people tend to be happier to let others engage in the discussion, and to stand back from posting on social media themselves on these sorts of issues.

Q2) I am from Finland, and we also helped in the crisis, but I am intrigued at the degree of public turnaround as it hasn’t shifted like that in Finland.

A2) Yeah, I don’t know… The middleground changed. Maybe something Swedish about it… But also perhaps to do with the numbers…

Q2) I wonder… There was already a strong anti-immigrant movement from 2008, I wonder if it didn’t shift in the same way.

A2) Yes, I think that probably is fair, but I think how the Finnish media treated the crisis would also have played a role here too.

An interrupted history of digital divides – Bianca Christin Reisdorf, Whisnu Triwibowo, Michael Nelson, William Dutton, Michigan State University, United States of America

I am going to switch gears a bit with some more theoretical work. We have been researching internet use and how it changes over time – from a period where there was very little knowledge of or use of the internet to the present day. And I’ll give some background then talk about survey data – but that is an issue in itself… I’ll be talking about quantitative survey data as it’s hard to find systematic collections of qualitative research instruments that I could use in my work.

So we have been asking about internet use for over 20 years… And right now I have data from Michigan, the UK, and the US… I have also just received further data from South Africa (this week!).

When we think about Digital Inequality, the idea of the digital divide emerged in the late 1990s – there was government interest, data collection, academic work. This was largely about the haves vs. have-nots; on vs. off. And we saw a move to digital inequalities (Hargittai) in the early 2000s… Then it went quiet, aside from work from Neil Selwyn in the UK, and from Helsper and Livingstone… But the discussion has moved onto skills…

Policy wise we have also seen a shift… Lots of policies around the digital divide up to around 2002, then a real pause as there was an assumption that the problems would be solved. Then, in the US at least, Obama refocused on that divide from 2009.

So, I have been looking at data from questionnaires from Michigan State of the State Survey (1997-2016); questionnaires from digital future survey in the US (2000, 2002, 2003, 2014); questionnaires from the Oxford Internet Surveys in the UK (2003, 2005, 2007, 2009, 2013); Hungarian World Internet Project (2009); South African World Internet Project (2012).

Across these data sets we have looked at the questionnaires and the frequency of use of particular questions on use, on lack of use, etc. When internet penetration was lower there was a lot of explanation in the questions, but we have shifted away from that – we now assume that people understand – and we’ve never returned to it. We’ve shifted to questions about devices, but we don’t ask much beyond that. We used to ask about the number of hours online… but that increasingly made less sense as the answer is essentially “all day”, so we do that less, shifting instead to how frequently people go online.

Now the State of the State Survey in Michigan is different from the other data here – all the others are World Internet Project surveys but SOSS is not looking at the same areas, as it is not necessarily run by internet researchers. In Hungary (2009 data) similar patterns of question use emerged, but with a particular focus on mobile use. But the South African questionnaire was very different – they ask how many people in the household are using the internet – we ask about the individual but not others in the house, or others coming to the house. South Africa had around 40% internet penetration (at least in 2012, which is when our data is from) – that is a very different context. There they ask about lack of access and use, and the reasons for that. We ask about use/non-use rather than reasons.

So there is this gap in the literature, and there is a need for quantitative and qualitative methods here. We also need to consider other factors, particularly technology itself being a moving target – in South Africa they ask about internet use and also Facebook, as people don’t always identify Facebook as internet use. Indeed so many devices are connected – maybe we need…

Q&A

Q1) I have a question about the questionnaires – do any ask about costs? I was in Peru where there is a lack of connectivity, but phones often offer free WhatsApp and free Pokémon Go.

A1) Only the South African one asks that… It’s a great question though…

Q2) You can get Pew questionnaires and also Ofcom questionnaires from their websites. And you can contact the World Internet Project directly… And there is an issue with people not knowing if they are on the internet or not – increasingly you ask a battery of questions… and then filter on that – e.g. if you use email you get counted as an internet user.

A2) I have done that… Trying to locate those questionnaires isn’t always proving that straightforward.

Q3) In terms of instruments – maybe there is a need to develop more nuanced questionnaires there.

A3) Yes.

Levelling the socio-economic playing field with the Internet? A case study in how (not) to help disadvantaged young people thrive online – Huw Crighton Davies, Rebecca Eynon, Sarah Wilkin, Oxford Internet Institute, United Kingdom

This is about a scheme called the “Home Access Scheme” and I’m going to talk about why we could not make it work. The origins here were in a city council initiative – they came to us. DCLG (2016) data showed 20-30% of the population were below the poverty line, and we knew around 7-8% locally had no internet access (known through survey responses). And the players here were researchers, local government, schools, and also an (unnamed) ISP.

The aim of the scheme was to raise attainment in GCSEs, to build confidence, and to improve employability skills. The schools had a responsibility to identify students in need, to procure laptops, memory sticks and software, and to provide regular, structured in-school pastoral skills and opportunities – not just in computing class. The ISP was to provide set up help, technical support, and free internet connections for 2 years.

This scheme has been running two years, so where are we? Well we’ve had successes: preventing arguments and conflict; helping with schoolwork and job hunting; saving money; and improving access to essential services – this is partly because cost cutting by local authorities has moved transactions online, like bidding for council housing, repeat prescriptions, etc. There was also some intergenerational bonding as families shared interests. Families commented on the success and opportunities.

We did 25 interviews, 84 one-to-one sessions in schools, 3 group workshops, 17 ethnographic visits, plus many more informal meet ups. So we have lots of data about these families, their context, their lives. But…

Only three families had consistent internet access throughout. Only 8 families are still in the programme. It fell apart… Why?

Some schools were so nervous about use that they filtered and locked down their laptops. One school used the scheme money to buy teacher laptops, and gave students old laptops instead. Technical support was a low priority. Lead teachers left/delegated/didn’t answer emails. Very narrow use of digital technology. No in-house skills training. Very little cross-curriculum integration. Lack of ICT classes after year 11. And no matter how often we asked for it we got no data from schools.

The ISP didn’t set up connections, didn’t support the families, didn’t do what they had agreed to. They tried to bill families and one was threatened with debt collectors!

So, how did this happen? Well maybe these are neoliberalist currents? I use that term cautiously but… We can offer an emergent definition of neoliberalism from this experience.

There is a neoliberalist disfigurement of schools: teachers under intense pressure to meet auditable targets; the scheme’s students subject to a range of targets used to problematise a school’s performance – exclusions, attendance, C grades; the scheme shuffled down the list of priorities; ICT not deemed academic enough under Govian school changes; and learning stripped back to a narrow range of subjects and focused towards these targets.

There were effects of neoliberalism on the city council: targets and a “more for less” culture; the scheme disincentivised; erosion of the authority of democratic institutional councils – schools beyond authority control, and a high turnover of staff.

There were neoliberalist practices at the ISP: commodifying philanthropy; they could not treat families as anything other than customers. And there were dysfunctional mini-markets: they subcontracted delivery and set up; they subcontracted support; they charged for support and charged for internet even when they couldn’t help…

Q&A

Q1) Is the problem really digital divides, or divides more broadly… Any attempt to overcome class separation and marketisation is working against the attempts to fix this issue here.

A1) We have a paper coming and yes, there were big issues here for policy and a need to be holistic… We found parents unable to attend parents’ evenings due to shift work, and nothing in the school processes to accommodate this. And the measure of poverty for children is “free school meals”, but many do not want to apply as it is stigmatising, and many don’t qualify even on very low incomes… That leads to children and parents being labelled disengaged or problematic.

Q2) Isn’t the whole basis of this work neoliberal though?

A2) I agree. We didn’t set the terms of this work…

Panel Q&A

Q1/comment) RSE and access

A1 – Huw) Other companies the same

Q2) Did the refugees in your work Katja have access to Sim cards and internet?

A2 – Katja) It was a challenge. Most downloaded maps and resources… And actually they preferred Apple to Android as the GPS is more accurate without an internet connection – that makes a big difference in the Aegean sea for instance. So refugees shared SIM cards, and used power banks for energy.

Q3) I had a sort of reflection on Nils’ paper and where to take this next… It occurs to me that you have quite a few different arguments… You have this survey data, the interviews, and then a different sort of participation from the Facebook groups… I have students in Berlin looking at the boom and bust – and I wondered about that Facebook group work being worth connecting up to that type of work – it seems quite separate to the youth participation section.

A3 – Nils) I wasn’t planning on talking about that, but yes.

Comment) I think there is a really interesting aspect of these campaigns and how they become part of social media and the everyday life online… The way they are becoming engaged… And the latent participation there…

Q3) I can totally see that, though challenging to cover in one article.

Q4) I think it might be interesting to talk to the people who created the surveys to understand motivations…

A4) Absolutely, that is one of the reasons I am so keen to hear about other surveys.

Q5) You said you were struggling to find qualitative data?

A5 – Katja) You can usually download quantitative instruments, but that is harder for qualitative instruments including questions and interview guides…

XP-02: Carnival of Privacy and Security Delights – Jason Edward Archer, Nathanael Edward Bassett, Peter Snyder, University of Illinois at Chicago, United States of America

Note: I’m not quite sure how to write up this session… So these are some notes from the more presentation parts of the session and I’ll add further thoughts and notes later… 

Nathanael: We have prepared three interventions for you today and this is going to be kind of a gallery exploring space. And we are experimenting with wearables…

Fitbits on a Hamster Wheel and Other Oddities, oh my!

Nathanael: I have been wearing a FitBit this week… but these aren’t new ideas… People used to have beads for counting, there are self-training books for wrestling published in the 16th Century. Pedometers were conceived of in Leonardo da Vinci’s drawings… These devices are old, and tie into ideas of posture, and mastering control of our physical selves… And we see the pedometer being connected with regimes of fitness – like the Manpo-Meter (“10,000 steps meter”, 1965). This narrative takes us to the 1970s running boom and the idea of recreational discipline. And now the world of smart devices… Wearables are taking us to biometric analysis as a mental model (Neff – preprint).

So, these are ways to track, but what happens with insurance companies, with those monitoring you? At Oral Roberts University students have to track their fitness as part of their role as students. What does that mean? I encourage you all to check out “unfitbit” – interventions to undermine tracking. Or we could, rather than going to the gym with a FitBit, give it to Terry Crews – he’s going anyway! – and he could earn money… Are fitness slaves in our future?

So, use my FitBit – it’s on my account

And so, that’s the first part of our session…

?: Now, you might like to hear about the challenges of running this session… We had to think about how to make things uncomfortable… But then how do you get people to take part? We considered a man-in-the-middle site that was ethically far too problematic! And no-one was comfortable participating in that way… Certainly raising the privacy and security issue… But as we talk of data as a proxy for us… As internet researchers a lot of us are more aware of privacy and security issues than the general population, particularly around metadata. But this would have been one day… I was curious if people might have faked their data for that one day of capture…

Nathanael: And the other issue is why we are so much more comfortable sharing information with FitBit, and other sharing platforms, faceless entities, versus people you meet at a conference… And we didn’t think about a gender aspect here… We are three white guys here and we are less sensitive to that being publicised rather than privatised. Men talk about how much they can bench press… but personal metadata can make you feel under scrutiny.

Me: I wouldn’t want to share my data and personal data collection tools…

Borrowing laptop vs borrowing phone…

?: In the US there have been a few cases where FitBits have been submitted as evidence in court… But that data is easier to fake… In one case a woman claimed to have been raped, and they used her FitBit to suggest that

Nathanael: You talked about not being comfortable handing someone your phone… It is really this blackbox… Is it a wearable? It has all that stuff, but you wear it on your body…

??: On cellphones there is FOMO – Fear Of Missing Out… What you might miss…

Me: Device as security

Comment: There is ableism embedded in devices… I am a cancer survivor and I first used step counts as part of a research project on chemotherapy and activity… When I see a low step day on my phone now… I can feel the stress of those triggers for someone going through that experience…

Nathanael: FitBits vibrate when you have/have not done a certain number of steps… Trying to put you in an ideological state apparatus…

Jh: That nudge… That can be good for the able bodied… But if you can’t move that is a very different experience… How does that add to their stress load?

Interperspectival Goggles

Again looking at the condition of virtuality – Hayles 2006(?)

Vision is constructed… Thinking of higher resolution… From small phone to big phone… Lower resolution to higher resolution TV… We have spectacles, quizzing glasses and monocles… And there is the strange idea of training ourselves to see better (William Horatio Bates, 1920s)… And emotional state interfering with how you do something… Then we have optometry and x-rays as a concept of seeing what could not be seen before… And you have special goggles and helmets… Like the idea of the Image Accumulator in Videodrome (1983), or the idea of the memory recorder and playback device in Brainstorm (1983). We see embodied work stations – the Da Vinci Surgery Robot (2000) – divorcing what is seen from what is in front of the operator…

There are also playful ideas: binocular football; the Decelerator Helmet; Meta-perceptional Helmet (Cleary and Donnelly 2014); and most recently Google Glass – what is there and also extra layers… Finally we have Oculus Rift and VR devices – seeing something else entirely… We can divorce what we see from what we are perceiving… We want to swap people’s vision…

1. Raise awareness about the complexity of electronic privacy and security issues.

2. Identify potential gaps in the research agenda through playful interventions, subversions, and moments of the absurd.

3. Be weird, have fun!

Mathias

“Cell phones are tracking devices that make phone calls” (Appelbaum, 2012)

I am interested in the IMSI catcher, which masquerades as a wireless base station, prompting phones to communicate with it. They are used by police, law enforcement, etc. They can be small and handheld, or they can be drone mounted. And they can track people, people in crowds, etc. There is always a different way to use them – you can scan for people in a crowd, and if you know someone is there you can scan for their device specifically. So, these tools are simple and disruptive and problematic, especially in activism contexts.

But these tools are also capable of capturing transmitted content, and all the data in your phone. These devices are problematic and have raised all sorts of issues about their use – who uses them and how. I’d like to think of this a different way… Is there a right to protest? And to protest anonymously? We do have anti-masking laws in some places – that suggests no right to anonymous protest. But that’s still a different privacy right – covering my face is different from participating at all…

Protests are generally about a minority persuading a majority about some sort of change. There is no legal right to protest anonymously, but there are lots of protected anonymous spaces. So, in the 19th century there was a big debate on whether or not the voting ballot should be anonymous – democracy is really the C19th killer app. So there is a lovely quote here about “The Australian system” by Bernheim (1889) and the introduction of anonymous voting. It wasn’t brought in to preserve privacy. At the time politicians bought votes – buying a keg of beer or whatever – and anonymity was there to stop that, not to preserve individual privacy. But Jill Lepore (2008) writes about how our forebears considered casting a “secret ballot” to be “cowardly, underhanded and despicable”.

So, back to these devices… There can be an idea that “if you have nothing to fear, you have nothing to hide”, but many of us understand that it is not true. And this type of device silences uncomfortable discourse.

Mathias Klang, University of Massachusetts Boston

Q1) How do you think that these devices fit into the move to allow law enforcement to block/”switch off” the cameras on protestors’ and individuals’ phones?

A1) Well people can resist these surveillance efforts, and you will see subversive moves. People can cover cameras, conceal devices etc. But with these devices it may be that the phone becomes unusable, requiring protestors to disable phones or leave phones at home… And phones are really popular and well used for coordinating protests

Bryce Newell, Tilburg Institute for Law, Technology, and Society

I have been working on research in Washington State, working with law enforcement on license plate recognition systems and public disclosure law, and looking at what you can tell from the data. So, here is a map of license plate data from Seattle, showing vehicle activity. In Minneapolis, similar data being released led to mapping of the governor’s registered vehicles…

The second area is about law enforcement and body cameras. Several years ago peaceful protestors at UC Davis were pepper sprayed. Even in the cropped version of that image you can see a vast number of phones out, recording the event. And indeed there are a range of police surveillance apps that allow you to capture police encounters without that being visible on the phone, including: ACLU Police Tape, Stop and Frisk Watch; OpenWatch; CopRecorder2. And some of these apps upload the recording to the cloud right away to ensure capture. And there have certainly been a number of incidents from Rodney King to Oscar Grant (BART), Eric Garner, Ian Tomlinson, Michael Brown. Of these only the Michael Brown case featured law enforcement with bodycams. There has been a huge call for more cameras on law enforcement… During a training meeting some officers told me “Where’s the direct-to-YouTube button?” and “If citizens can do it, why can’t we also benefit from the ability to record in public places?”. There is a real awareness of control and of citizen videos. I also heard a lot of there being “a witch hunt about to begin…”.

So, I’m in the middle of focused coding on police attitudes to body cameras. Police are concerned that citizen video is edited, out of context, distorting. And they are concerned that it doesn’t show wider contexts – when recording starts, perspective, the wider scene, the fact that provocation occurs before filming usually. But there is also the issue of control, and immediate physical interaction, framing, disclosure, visibility – around their own safety, around how visible they are on the web. They don’t know why it is being recorded, where it will go…

There have been a number of regulatory responses to this challenge: (1) restrict collection – not many, usually budgetary and rarely on privacy; (2) restrict access – going back to the Minneapolis case, within two weeks of the map of the governor’s vehicles being published in the paper they had an exemption to public disclosure law, which is now permanent for this sort of data. In the North Carolina protests recently the call was “release the tapes” – and they released only some – then the cry was “release all the tapes”… But on 1st October the law changed to again restrict access to this type of data.

But different states provide different access. Some provide access. In Oakland, California, data was released on how many license plates had been scanned. In Seattle, because the data covering many scans of one licence plate over 90 days is quite specific, you can almost figure out the householder. But granularity varies.

Now, we do see body camera footage of sobriety tests, foot chases, and a half-hour-long interview with a prostitute that discloses a lot of data. Washington shares a lot of video to YouTube. We see police in Rotterdam, Netherlands doing this too.

But one patrol officer told me that he would never give his information to an officer with a camera. Another noted that police choose when to start recording, with little guidance on when and how to do this.

And we see a “collateral visibility” issue for police around these technologies.

Q&A

Q1) Is there any process where police have to disclose that they are filming with a body cam?

A1) Interesting question… Initially they didn’t know. We used to have a two-party consent process – as for taping – to ensure consent/implied consent. But the State attorney general described this as outside of that privacy regulation, saying that a conversation with a police officer is a public conversation. But police are starting to have policies that officers should disclose that they have cameras – partly as they hope it may sometimes reduce violence towards police.

Data Privacy in commercial users of municipal location data – Meg Young, University of Washington

My work looks at how companies use Seattle’s location data. I wanted to look at how data privacy is enacted by Seattle municipal government. And I am drawing on the work of Annemarie Mol and John Law (2004), ethnographers working on health, which focuses on lived experience. My data draws on ethnographic work as well as focus groups and interviews with municipal government and local civic technology communities. I really wanted to present the role of commercial actors in data privacy in city government.

We know that cities collect location data to provide services, and share it with third parties to do so. In Washington we have a state freedom of information (FOI) law, which states “The people of this state do not yield their sovereignty to the government…”, making data requestable.

In Seattle the traffic data is collected by a company called Acyclica. The city is growing and the infrastructure is struggling, so they are gathering data to deal with this, to shape traffic signals. This is a large scale longitudinal data collection process. Acyclica do that with wi-fi sensors that sniff MAC addresses, with the location traces sent to Acyclica (the MACs are salted). The data is aggregated and sent to the city – they don’t see the detailed creepy tracking, but the company does. And this is where the FOI law comes in. The raw data is on the company side here. If the raw data was a public record, it would be requestable. The company becomes a shield for collecting sensitive data – it is proprietising.
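[To make the “salted MAC” point concrete, here is a minimal sketch of what salting and hashing a device identifier might look like before aggregation – this is my illustration only, as Acyclica’s actual processing wasn’t described in the talk.]

```python
import hashlib
import secrets

# Illustration only: a secret salt held by the data collector means the raw
# MAC address need not be stored, while the same device still maps to the
# same opaque ID and can be matched across road-segment sensors.
SALT = secrets.token_hex(16)  # stays on the company side, never published

def pseudonymise_mac(mac_address: str, salt: str = SALT) -> str:
    """Return a salted SHA-256 digest of a MAC address."""
    return hashlib.sha256((salt + mac_address.lower()).encode()).hexdigest()

print(pseudonymise_mac("AA:BB:CC:DD:EE:FF"))
```

[Of course, whoever holds the salt can still re-link traces to a device, which is exactly why it matters for the FOI question that the raw data stays on the company side.]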

So you can collect data, and have service needs met, but without it becoming public to you and I. But analysing the contract, the terms do not preclude the resale of data – though a Seattle Dept. of Transport (DOT) worker notes that right now people trust companies more than government. Now I did ask about this data collection – not approved elsewhere – and was told that having wifi switched on in public makes you open to data collection – as it is in public space.

My next example is the data from parking meters/pay stations. This shows only the start and end of parking sessions, no credit card numbers, etc. The DOT is happy to make this available via public records requests. But you can track each individual, and they are using this data to model parking needs.

The third example is the Open Data Portal for Seattle. They pay Socrata to host that public-facing data portal. They also sell access to cleaned, aggregated data to companies through a separate API called the Open Data Network. The Seattle Open Data Manager didn’t see this situation as different from any other reseller. But there is little thought about third party data users – they rarely come up in conversations – who may combine this data with other data sets for data analysis.

So, in summary, municipal government data is no less by and for commercial actors than it is for the public. Proprietary protections around data are a strategy for protecting sensitive data. Government transfers data to third parties…

Q&A

Q1) Seattle has a wifi for all programme

A1) Promisingly this data isn’t being held side by side… But the routers that we connect to collect so much data… Seeing an Oracle database of the websites folks visit…

Q2) What are your policy recommendations based on your work?

A2) We would recommend licensing data with some restrictions on use, so that if the data is used inappropriately their use could be cut off…

Q2) So activists could be blocked by that recommendation?

A2) That is a tension… Activists are keen for no licensing here for that reason… It is challenging, particularly when data brokers can do problematic profiling…

Q2) But that restricts activists from questioning the state as well.

Response – Sandra Braman

I think that these presentations highlight many of the issues that raise questions about values we hold as key as humans. And I want to start from an aggressive position, thinking about how and why you might effectively be an activist in this sort of environment. And I want to say that any concerns about algorithmically driven processes should be evaluated in the same way as we would evaluate social processes. So, for instance, we need to think about how the press and media interrogate data and politicians…

? “Decoding the social” (coming soon) is looking at social data and the analysis of social data in the context of big data. She argues that social life is too big and complex to be predictable from data. Everything that people who use big data “do” to understand patterns are things that activists can do too. We can be just as sophisticated as corporations.

The two things I am thinking about are how to mask the local, and how to use the local… When I talk of masking the local I look back to work I did several years back on local broadcasting. There is a mammoth literature on TV as locale, on production and how it is separate and misrepresenting, and on assumptions versus the actual information provided versus actual decision making. My perception of social activism is that there is some brilliant activity taking place – brilliance at moments, often in specific apps. And I think that if you look at the essays that Julian Assange wrote before he founded WikiLeaks, particularly on weak links and how those work… He uses sophisticated social theory in a political manner.

But anonymity is practicably impossible… What can we learn from local broadcast? You can use phones in organised ways – there was training in phone cameras for the Battle of Seattle for instance. You can fight with indistinguishable actions – all doing the same things. Encryption is cat and mouse… Often we have activists presenting themselves as mice, although we did see an app discussed at the plenary that alerts you to protests and risk. And I have written before on tactical memory.

In terms of using the local… If you know you will be sensed all the time, there are things you can do as an activist to use that. It is useful to think about how we can conceive of ourselves as activists as part of the network. And I was inspired by US libel laws – if a journalist has transmission/recording devices but is a neutral observer, they are not “repeating” the libel and can share that footage. That goes back to 1970s law, but it can be useful to us.

We are at risk of being censored, but that means that you have choices about what to share, being deliberate in giving signals. We have witnessing, which can be taken as a serious commitment. That can happen with people with phones, and you can train witnessing. There are many moments where leakage can be an opportunity – maybe not with the volume or content of Snowden, but we can do that. There are also ways to learn and shape learning. But we can also be routers, and be critically engaged in that – what we share, the acceptable error rate. National Security are concerned about where in the stream they should target misinformation – activists can adopt that too. The server functions – see my strategic memory piece. We certainly have community-based wifi, MESH networks, and that is useful politically and socially. We have responsibilities to build the public that is appropriate, and the networking infrastructure that enables those freedoms. We can use more computational power to resolve issues. Information can be an enabler as well as influencing your own activism. Thank you to Anne and her group in Amsterdam for triggering thinking here, but we should be engaging critically with big data. If you can’t make decisions in some way, there’s no point to doing it.

I think there needs to be more robustness in managing and working with data. If you go far then you need a very high level of methodological trust. Information has to stand up in court, to respect activist contributions to data. Use as your standard, what would be acceptable in court. And in a Panspectrum (not Panopticon) environment, when data is collected all the times, you absolutely have to ask the right questions.

Panel Q&A

Q1) I was really interested in that idea of witnessing as being part of being a modern digital citizen… Is there more you can say on protections, or on that idea?

A1 – Sandra) We’ve seen all protections for whistleblowing in government disappear under Bush (II)… We still have protections for private sector whistleblowers. But there would be an interesting research project in there…

Q2) I wondered about that idea of cat and mouse use of technology… Isn’t that potentially making access a matter of securitisation…?

A2) I don’t think that “securitisation” makes you a military force… One thing I forgot to say was about network relations… If a system is interacting with another system – the principle of requisite variety – they have to be as complex as the system you are dealing with. You have to be at least as sophisticated as the other guy…

Q3) For Bryce and Meg, there are so many tensions over when data should be public and when it should be private, and tensions there… And police desires to show the good things they do. Also Meg, this idea of privatising data to ensure privacy of data – it’s problematic for us to collect data, but now a third party can do that.

A3 – Bryce) One thing I didn’t explain well enough is that video online comes from police, and from activists – it depends on the video. Some videos are accessed via public records requests and published to YouTube channels – in fact in Washington you can make requests for free and you can do it anonymously. The police department does publish video. When they did a pilot in 2014 they had a hackathon to consider how to deal with redaction issues… detect faces, blur them, etc. And there is proactive posting of – only some – video. There is a narrative of sharing everything, but that isn’t the case. The rhetoric has been about being open, about privacy rights, and about the new police chief. A lot of it was administrative cost concerns… In the hackathon they asked whether posting video in a blurred form would do away with blanket requests and focus requests. At that time they dealt with requests for all email. They were receiving so many of these and under state law they had to give up all the data, for free. But state law varies; in Charlotte they gave up less data. In some states there is a different approach, with press conferences and narratives around the footage as they release parts of videos…

A3 – Meg) The city has worked on how to release data… They have a privacy screening process. They try to provide data in a way that is embedded. They still have a hard core central value that any public record is requestable. Collection limitation is an important and essential part of what cities should be doing… In a way, private companies collecting data results in large data sets that may end up insecure… Going back to what Bryce was saying, the bodycam initiative was really controversial… There was so much footage and it was unclear what should be public and when… And the faultlines have been pretty deep. We have the Coalition for Open Government advocating for full access, and the ACLU worried that these become surveillance cameras… This was really contentious… They passed a version of a compromise, but the bottom line is that the PRA is still a core value for the state.

A3 – Bryce) Much of the ACLU, nationally certainly, was supportive of bodycams, but individuals and local ACLUs change and vary… They were very pro, then backing off, then there was local variance… It’s a very different picture, hence that variance.

Q4) For Mathias, you talked about anti-masking laws. Are there cases where people have been brought in for jamming signals under that law?

A4 – Mathias) Right now the American cases are looking for keywords – manufacturers of devices, the ways the data is discussed. I haven’t seen cases like that, but perhaps it is too new… I am a Swedish lawyer and that jamming would be illegal in a protest…

A4 – Sandra) Would that be under anti-masking or under jamming law?

A4 – Mathias) It would be under hacking laws…

Q4) If you counter with information… But not if switching phone off…

A4 – Mathias) That’s still allowed right now.

Q5) Do you do work comparing US and UK body cameras?

A5 – Bryce) I don’t, but I have come across the Rotterdam footage. One of my colleagues has looked at this… The impetus for adoption in the Netherlands has been different. In the US it is transparency, in the Netherlands the narrative was protection of public servants. A number of co-authors have just published recently on the use of cameras and how they may increase assaults on officers… Seeing some counter-intuitive results… But the why question is interesting.

Comment) Is there any aspect of cameras being used in higher risk areas that makes that more likely perhaps?

A5 – Sandra) It’s the YouTube on-air question – everyone imagines themselves on air.

Q6) Two speakers quoted individuals accused of serious sexual assault… And I was wondering how we account for the fact that activists are not homogenous here… Particularly when tech activists are often white males, they can be problematic…

A6) Techies don’t tend to be the most politically correct people – to generalise a great deal…

A6 – Sandra) I think they are separate issues, if I didn’t engage with people whose behaviour is problematic it would be hard to do any job at all. Those things have to be fought, but as a woman you should also challenge and call those white male activists on their actions.

Q7 – me) I was wondering about the retention of data. In Europe there is a lot of use of CCTV and the model there is to record everything, and retain any incident. In the US CCTV is not in widespread use I think, and the bodycam model is to record incidents in progress only… So I was wondering about that choice in practice, and about the retention of those videos and the data after capture.

A7 – Bryce) The ACLU has looked at retention of data. It is a state based issue. In Washington there are mandatory minimum periods… They are interesting as, due to findings on conduct, they are under requirements to keep everything for as long as possible so auditors from the DOJ can access and audit. In Bellingham and Spokane, officers can flag items, and supervisors can too… and that is what dictates the retention schedule. There are issues there of course. The default when I was there was 2 years. If it is publicly available and hits YouTube then that will be far more long lasting, and can pop up again… Perpetual memory there… So the actual retention schedule won’t matter.

A7 – Sandra) A small follow up – you may have answered with that metadata… Do they treat bodycam data like other types of police data, or is it a separate class of data?

A7 – Bryce) Generally it is being thought of as data collection… And there is no difference for public disclosure, but they are really worried about public access. And how they share that with prosecutors… They could share on DVD… And they wanted to use the share function of the software… But they didn’t want emails to be publicly disclosable with that link… So it is being thought about as being like email.

Q8 – Sandra) On behalf of colleagues working on visual evidence in court.

Comment – Michael) There is work on video and how it can be perceived as “truth” without awareness of the potential for manipulation.

A8 – Bryce) One of the interesting things in Bellingham was the release of that video I showed of a suspect running away… The footage followed a police pick-up for suspected drug dealing, but it showed evasion of arrest and the whole encounter… And in that case, whether or not he was guilty of the drug charge, that video told a story of the encounter. In preparing for the court case the police shared the video with his defence team and almost immediately they entered a guilty plea in response… And I think we will see more of that kind of invisible use of footage that never goes to court.

And with that this session ends… 

PA-31: Caught in a feedback loop? Algorithmic personalization and digital traces (Chair: Katrin Weller)

Wiebke Loosen1, Marco T Bastos2, Cornelius Puschmann3, Uwe Hasebrink1, Sascha Hölig1, Lisa Merten1, Jan-Hinrik Schmidt1, Katharina E Kinder-Kurlanda4, Katrin Weller4

1Hans Bredow Institute for Media Research; 2University of California, Davis; 3Alexander von Humboldt Institute for Internet and Society; 4GESIS Leibniz Institute for the Social Sciences

?? – Marco T Bastos, University of California, Davis  and Cornelius Puschmann, Alexander von Humboldt Institute for Internet and Society

Marco: This is a long-running project that Cornelius and I have been working on. At the time we started, in 2012, it wasn’t clear what impact social media might have on the filtering of news, but they are now huge mediators of news and news content in Western countries.

Since then there has been some challenge and conflict between journalists, news editors and audiences, and that raises the issue of how to monitor and understand that through digital trace data. We want to think about which topics are emphasized by news editors, which are most shared on social media, etc.

So we will talk about taking two weeks of content from the NYT and The Guardian across a range of social media sites – that’s work I’ve been doing. And Cornelius has tracked 1.5/4 years’ worth of content from four German newspapers (Süddeutsche Zeitung, Die Zeit, FAZ, Die Welt).

With the Guardian we accessed data from the API, which tells you which articles were published in print and which were not – that is baseline data for the emphasis editors place on different types of content.

So, I’ll talk about my data from the NY Times and the Guardian, from 2013, though we now have 2014 and 2015 data too. This data from two weeks covers 16k+ articles. The Guardian runs around 800 articles per day, the NYT around 1000. And we could track the items on Twitter, Facebook, Google+, Delicious, Pinterest and StumbleUpon. We do that by grabbing the unique identifier for the news article, then using the social media endpoints of the platforms to find sharing. But we had a challenge with Twitter – in 2014 they killed the endpoint we and others had been using to track sharing of URLs. The other sites are active, but relatively irrelevant in the sharing of news items! And there are considerable differences across the ecosystems; some of these social networks are not immediately identifiable as social networks – will Delicious or Pinterest impact popularity?
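[As an aside on the mechanics: a minimal sketch of this kind of collection, assuming a hypothetical set of share-count endpoints – the real platform APIs differed in parameters and response format (and the Twitter one was withdrawn, as noted), so this is illustrative rather than the authors’ actual code. The canonicalisation step is essentially the shortened-URL parsing mentioned in the Q&A below.]

```python
import requests

# Hypothetical endpoint templates; real platform APIs differed and several
# (notably Twitter's URL-count endpoint) have since been switched off.
SHARE_COUNT_ENDPOINTS = {
    "facebook": "https://example-graph.invalid/shares?url={url}",
    "pinterest": "https://example-pinterest.invalid/count?url={url}",
}

def canonical_url(article_url: str) -> str:
    """Resolve shortened/redirected URLs so counts attach to one canonical article."""
    return requests.head(article_url, allow_redirects=True, timeout=10).url

def share_counts(article_url: str) -> dict:
    """Query each (hypothetical) platform endpoint for shares of one article."""
    url = canonical_url(article_url)
    counts = {}
    for platform, template in SHARE_COUNT_ENDPOINTS.items():
        resp = requests.get(template.format(url=url), timeout=10)
        resp.raise_for_status()
        # Assumes a JSON body like {"count": 123}; real responses differed per platform.
        counts[platform] = resp.json().get("count", 0)
    return counts
```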

This data allows us to contrast the differences in topics identified by news editors and social media users.

So, looking at the NYT there is a lot of world news, local news, opinion. Looking at the range of articles, Twitter maps relatively well onto that editorial emphasis (with higher sharing of national news, opinion and technology news), but Facebook is really different – there is huge sharing of opinion, as people share what aligns with their interests etc. We see outliers in every section – some articles skew the data here.

If we look at everything that appeared in print, we can look at a horrible diagram that shows all shares… When you look here you see how big Pinterest is, but in fashion and lifestyle areas. The sharing there doesn’t really reflect the ratio of articles published though. Google+ has sharing in science and technology in the Guardian, and in environment, jobs, local news, opinion and technology in the NYT.

Interestingly, news and sports are real staples of newspapers but barely feature here. Economics is even worse. Now the articles are English-language but they are available globally… But what about differences in Germany… Over to Cornelius…

Cornelius: So Marco’s work is ahead of mine – he’s already published some of this work. But I have been applying his approach to German newspapers. I’ve been looking at usage metrics and at the relationship between audiences and publishers, and how that relationship changes over time.

So, I’ve looked at Facebook engagement with articles in four German newspapers. I have compared comments, likes and shares and how contributions vary… Opinion is important for newspapers but not necessarily where the action is. And it isn’t that people find stories in some areas less interesting – in economics they like and comment, but they don’t share. So it is interesting to think about the social perception of shareability.
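[To illustrate the kind of per-desk comparison being described, here is a minimal pandas sketch over a hypothetical article-level table with section, likes, comments and shares columns – the column names and numbers are my own assumptions, not the project’s data.]

```python
import pandas as pd

# Hypothetical article-level engagement data (one row per article).
articles = pd.DataFrame({
    "section":  ["economics", "economics", "opinion", "sport", "opinion"],
    "likes":    [120, 80, 300, 45, 210],
    "comments": [60, 40, 150, 10, 90],
    "shares":   [5, 3, 220, 8, 180],
})

# Mean engagement per section, per interaction type: this is the kind of view
# that shows e.g. economics being liked and commented on but rarely shared.
by_section = articles.groupby("section")[["likes", "comments", "shares"]].mean()
print(by_section)
```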

So, a graph of Die Zeit here shows articles published and articles shared on Facebook… You see a real change in 2014 to greater numbers (in both). I have also looked at the type of articles and at print vs. web versions.

So, some observations: niche social networks (e.g. Pinterest) are more relevant to news sharing than expected. Reliance on FB at Die Zeit grew suddenly in 2014. Social norms of liking, sharing and discussing differ significantly across news desks. Some sections (e.g. sports) see a mismatch of importance and use versus liking and sharing.

In the future we want to look at temporal shifts in social media feedback and newspaper coverage. Monitoring…

Q&A

Q1) Have you accounted for the possibility of bots sharing content?

A1 – Marco) No, we haven’t. We are looking across the board, but we cannot account for that with the data we have.

Q2) How did you define or find out that an article was shared, from the URLs?

A2) Tricky… We wrote a script for parsing shortened URLs to check that.

A2 – Cornelius) Read Marco’s excellent documentation.

Q3) What do you make of how readers are engaging, what they like more, what they share more… and what influences that?

A3 – Cornelius) I think it is hard to judge. There are some indications, and have some idea of some functions that are marketed by the platforms being used in different ways… But wouldn’t want to speculate.

Twitter Friend Repertoires: Inferring sources of information management from digital traces – Jan-Hinrik Schmidt, Lisa Merten, Wiebke Loosen, Uwe Hasebrink, Katrin(?)

Our starting point was to think about shifting the focus of Twitter research. Many studies are on Twitter – explicitly or implicitly – as a broadcast paradigm, but we want to conceive of it as an information tool, hence the concept of “Twitter Friend Repertoires” – using “Friend” in the Twitter terminology of someone I follow. We are looking for patterns in the composition of friend sets.

So we take a user, take their friends list, and compare it to a list of accounts identified previously. Our index has 7,528 Twitter accounts: media outlets (20.8%), organisations (political parties, companies, civil society organisations; 53.4%), and individuals (politicians, celebrities and journalists; 25.8%) – all in Germany. We take our sample, compare it with a relational table, and then with our master index. And if an account isn’t found in the master index, we can’t say anything about it yet.
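[A minimal sketch of how I understand the matching step – friend IDs are looked up in the categorised master index, and anything not found falls into an “unknown” bucket; the account IDs and category labels here are invented for illustration, not taken from the project.]

```python
from collections import Counter

# Hypothetical master index: Twitter account ID -> category label.
MASTER_INDEX = {
    1001: "media_outlet",
    1002: "organisation",
    1003: "individual",
}

def friend_repertoire(friend_ids: list[int]) -> Counter:
    """Count a user's friends per index category; unmatched friends are 'unknown'."""
    return Counter(MASTER_INDEX.get(fid, "unknown") for fid in friend_ids)

# Example: a user following two indexed accounts and one unknown account.
print(friend_repertoire([1001, 1003, 9999]))
# Counter({'media_outlet': 1, 'individual': 1, 'unknown': 1})
```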

To demonstrate the answers we can find with this approach… We have looked at five different samples:

  • Audience_TS – sample following PSB TV News
  • Audience_SZ – sample following quality daily newspapers
  • MdB – members of federal parliament
  • BPK – political journalists registered for the Bundespressekonferenz
  • Random – random sample of German Twitter users (via Axel Bruns)

We can look at the friends here, and we can categorise the accounts. In our random sample 77.8% are not identifiable and 22.2% are in our index (around 13% are individual accounts). That is lower than the percentage of friends in our index for all the other samples – for MdB and BPK a high percentage of their friends are in our index. Across the groups there is less following of organisational accounts (in our index) – with the exception of the MdB and political parties. If we look at media accounts, the two audience samples follow more media accounts than the others, including MdB and BPK… When it comes to individual public figures in our index, celebrities are prominent for the audience samples, much less so for MdB and BPK; MdB follow other politicians, journalists tend to follow politicians, and politicians – to a lesser extent – follow journalists.

In terms of patterns of preference, we can suggest a model of a fictional user to understand preference between our three categories (organisational accounts, media accounts, individual accounts). We can then use that example profile and compare it with our own data, to see how others’ behaviours fit that typology. In our random sample over a third (37.9%) didn’t follow any organisational accounts. Amongst MdB and BPK there is a real preference for individual accounts.
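Note: to make the typology step concrete, a small sketch (my own, with invented thresholds) of how the category shares among a user’s identified friends might be turned into a preference label:

```python
# Sketch only: label a user's repertoire from category shares of identified friends.
def preference_profile(shares, lead=0.5):
    """shares: dict mapping category -> share of identified friends."""
    categories = ["organisational", "media", "individual"]
    followed = {c: shares.get(c, 0.0) for c in categories}
    top = max(followed, key=followed.get)
    if all(v == 0.0 for v in followed.values()):
        return "follows none of the indexed categories"
    if followed[top] >= lead:
        return f"prefers {top} accounts"
    return "mixed repertoire"

print(preference_profile({"organisational": 0.0, "media": 0.2, "individual": 0.8}))
# -> 'prefers individual accounts'
```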

So, this is what we are measuring right now… I am still not quite happy with it. It is complex to explain, yet hard to show the detail behind it… We have 20 categories in our master index but only three are shown here… Some frequently asked questions that I will ask and answer based on previous talks…

  1. Around 40% identified accounts is not very much, is it?
    Yes and no! We have increased this over time. But initially we did not include international accounts; if we did that we’d increase the share, especially with celebrities, and also international media outlets. However, there is always a trade-off, and there will always be a long tail… And we are interested in specific categorisations and in public speakers as sources on Twitter.
  2. What does friending mean on Twitter anyway?
    Good question! More qualitative research is needed to understand that – but there is some work on journalists (only). Maybe people friend others for information management reasons, reciprocity norms, as a public signal of connection, etc. And how important are algorithmic recommendations in building your set of friends?

Q&A

Q1 – me) I’m glad you raised the issue of recommendation algorithms – the celebrity issue you identified is something Twitter really pushes as a platform now. I was wondering, though, whether you have been looking at how long the people you are studying have been on Twitter – as behavioural norms shift over time…

A1) It would be possible to collect that, but we don’t at the moment. For journalists and politicians we do gather the list of friends each month, to get a longitudinal idea of changes. Over a year, there haven’t been many changes yet…

Q2) Really interesting talk – could you go further with the repertoire? Could there be a discrepancy between the repertoire and its use in terms of retweeting, replying etc.?

A2) We haven’t so far… We could see which types of tweets accounts are favouriting or retweeting – but we are not there yet.

Q3) A problem here…

A3) I am not completely happy establishing preference based on indexes… But I’m not sure how else to do this, so maybe you can help me with it.

Analysing digital traces: The epistemological dimension of algorithms and (big) internet data – Katharina Kinder-Kurlanda and Katrin Weller

Katharina: We are interested in the epistemological aspects of algorithms, and in how we research these. So, our research subjects are researchers themselves.

So we are seeing a real focus on algorithms in internet research, and we need to understand the (hidden) influence of algorithms on all kinds of research, including on researchers themselves. So we have researchers interested in algorithms… and in platforms, users and data… But all of these aspects are totally intertwined.

So let’s take a Twitter profile… A user of Twitter gets recommendations of who to follow at a given moment in time, and they see newsfeeds at a given moment in time. That user has a context which, as a researcher, I cannot see – nor can I interpret the impact of that context on the user’s choice of, e.g., who they then follow.

So, algorithms observe, count, sort and rank information on the basis of a variety of different data sources – they are highly heterogeneous and transient. Online data can be user-generated content or activity, traces or location data from various internet platforms. That promises new possibilities, but also raises significant challenges, not least because of its heterogeneity.

Social media data has uncertain origins – we know little about users and their motivations – and often uncertain provenance. The “users” that we see are not users but highly structured profiles, the result of careful image-management. And we see renewed discussion of methods and epistemology, particularly within the social sciences; suggestions include “messiness” (Knupf 2014) and ? (Kitchin 2012).

So, what does this mean for algorithms? Algorithms operate on an uncertain basis and present real challenges for internet research. So I’m going to now talk about work that Katrin and I did in a qualitative study of social media researchers (Kinder-Kurlanda and Weller 2014). We conducted interviews at conferences – highly varied – speaking to those working with data obtained from social media. There were 40 interviews in total and we focused on research data management.

We found that researchers found very individual ways to address epistemological challenges in order to realise the potential of this data for research. And there were three real concerns here: accessibility, methodology, research ethics.

  1. Data access and quality of research

Here there were challenges of data access, restrictions tied to the privacy of social media data, and technical skills; researchers adjust research questions due to data availability, and the struggle for data access often consumes much effort. Researchers talked about difficulty in finding publication outlets, recognition and jobs in the disciplinary “mainstream” – it is getting better but remains a big issue. There was also comment on this being a computer science dominated field – with highly formalised review processes and few high-ranking conferences – which enforces highly strategic planning of resources and research topics. So researchers’ attempts to achieve validity and good research quality are constrained. This is really challenging for researchers.

2. New Methodologies for “big data”

Methodologies in this research often defy traditional ways of achieving research validity – through ensuring reproducibility or the sharing of data sets (ethically not possible). There is a need to find patterns in large data sets by analysis of keywords, or automated analysis. It is hard for others to understand the process and validate it. Data sets cannot be shared…

3. Research ethics

There is a lack of users’ informed consent to studies based on online data (Hutton and Henderson 2015). There are ethical complexities. Data cannot really be anonymised…

So, how do algorithms influence our research data and what does this mean for researchers who want to learn something about the users? Algorithms influence what content users interact with, which raises questions such as: how do we study user networks without knowing the algorithms behind follower/friend suggestions? How do we study populations?

To get back to the question of observing algorithms: the problem is that various actors, in the most diverse situations, react out of different interests to the results of algorithmic calculations, and may even try to influence algorithms. You see that with tactics around trending hashtags as part of protest, for instance. And the results of algorithmic analyses are presented to internet users with little or no information on how the algorithms take part.

In terms of next steps, researchers need to be aware that online environments are influenced by algorithms, and so are the users and the data they leave behind. It may mean capturing the “look and feel” of the platform as part of research.

Q&A

Q1) One thing I wasn’t sure about… Is your sense when you were interviewing researchers that they were unaware of algorithmic shaping… Or was it about not being sure how to capture that?

A1) Algorithms wasn’t the terminology when we started our work… They talked about big data… the framing and terminology is shifting… So we are adding the algorithms now… But we did find varying levels of understanding of platform function – some were very aware of platform dynamics, but some felt that if they have a Twitter dataset that’s a representation of the real world.

Q1) I would think that if we think about recognising how algorithms and platform function come in as an object… Presumably some working on interfaces were aware, but others looking at, e.g., friendship groups took the data and weren’t thinking about platform function – but that is something they should be thinking about…

A1) Yes.

Q2) What do you mean by the term “algorithm” now, and how that term is different from previously…

A2) I’m sure there is a messiness to this term. I do believe that looking at programs wouldn’t solve that problem. You have the algorithm itself gaining attention… from researchers and industry… So you have programmers tweaking algorithms, as part of different structures and pressures and contexts… But algorithms are part of a lot of people’s everyday practice… It makes sense to focus on those.

Q3) You started at the beginning with an illustration of the researcher in the middle, then moved onto the agency of the user… And the changes to the analytical capacities of working with this type of data… But how much awareness is there amongst researchers of how the data and the tools they work with are inscribed into the research…

A3) Thank you for making that distinction. The problem in a way is that we saw what we might expect – highly varied awareness… This was shaped by disciplinary background – whether an STS researcher in sociology or a computer scientist, say. We didn’t find too many clear disciplinary trends, but we looked across many disciplines… There were huge ranges of approach and attitude here – our data was too broad.

Panel Q&A

Q1 – Cornelius) I think that we should say that if you are wondering about “feedback” here, it’s about thinking about metrics and how they then feed back into practice, whether there is a feedback loop… from very different perspectives… I would like to return to that – maybe next year when research has progressed. More qualitative understanding is needed. But a challenge is that stakeholder groups vary greatly… What if one finding doesn’t hold for other groups?

Q2) I am from the Wikimedia Foundation, and I’m someone who does data analysis a lot. I am curious whether, in looking at these problems, you have looked at recommender systems research, which has been researching this space for 10 years – work on messy data and cleaning messy data… There are so many tiny differences that can really make a difference. I work on predictive algorithms, but that’s a new bit of turbulence in a turbulent sea… How much of this do you want to bring into this space?

A2 – Katrin) These communities have not come together yet. I know people who work in socio-technical studies who do study interface changes… There is another community that is aware that this exists, but does not follow it so closely… and sees these as tiny bits of the same puzzle… It can be harder to understand for historical data, and to get an idea of what factors influence your data set. In our data set we have interviewees more like you, and some like the people at sessions like this… There is some connection, but not all of those areas are coming together…

A2 – Cornelius) I think that there is a clash between computational social science data work and this stuff here… That predictive aspect screws with big claims about society… Maybe an awareness but not a keenness. There is older computer science research that we are not engaging with, but should be… But there is often a conflict of interests… I saw a presentation that showed changes to the interface changing behaviour… But companies don’t want to disclose that manipulation…

Comment) We’ve gone through a period – and I’m disheartened to see it is still there – in which researchers are so excited to trace human activities that they treat hashtags as the political debate… This community helpfully problematises or contextualises this… But I think that these papers are raising the question of people orientating practices towards the platform, and towards machine learning… I find it hard to talk about that… and how behaviour feeds into machine learning… Our system tips towards behaviour, and technology shifts and reacts to that, which is hard.

Q3) I wanted to agree with that idea of the need to document. But I want to push at your implicit position that this is messy and difficult and hard to measure… I think that applies to *any* method… Standards of data removal and messiness arise elsewhere too… Some of those issues apply across all kinds of research…

A3 – Cornelius) Christian would have had an example on his algorithm audit work that might have been helpful there.

Comment) I wanted to comment on social media research versus traditional social science research… We don’t have much power over our data set – that’s quite different in comparison with those running surveys, undertaking interviews… and I have control of that tool… And I think that argument isn’t just about survey analysis, but other qualitative analysis… Your research design can fit your purposes…

 


Time: Friday, 07/Oct/2016, 4:00pm – 5:30pm

Location: HU 1.205, Humboldt University of Berlin, Dorotheenstr. 24, Building 1, second floor

Presentations

Wiebke Loosen1, Marco T Bastos2, Cornelius Puschmann3, Uwe Hasebrink1, Sascha Hölig1, Lisa Merten1, Jan-Hinrik Schmidt1, Katharina E Kinder-Kurlanda4, Katrin Weller4

1Hans Bredow Institute for Media Research; 2University of California, Davis; 3Alexander von Humboldt Institute for Internet and Society; 4GESIS Leibniz Institute for the Social Sciences

Oct 062016
 

Today I am again at the Association of Internet Researchers AoIR 2016 Conference in Berlin. Yesterday we had workshops, today the conference kicks off properly. Follow the tweets at: #aoir2016.

As usual this is a liveblog so all comments and corrections are very much welcomed. 

PA-02 Platform Studies: The Rules of Engagement (Chair: Jean Burgess, QUT)

How affordances arise through relations between platforms, their different types of users, and what they do to the technology – Taina Bucher (University of Copenhagen) and Anne Helmond (University of Amsterdam)

Taina: Hearts on Twitter: In 2015 Twitter moved from stars to hearts, changing the affordances of the platform. They stated that they wanted to make the platform more accessible to new users, but that impacted on existing users.

Today we are going to talk about conceptualising affordances. In its original meaning an affordance is conceived of as a relational property (Gibson). For Norman, perceived affordances were more the concern – thinking about how objects can exhibit or constrain particular actions. Affordances are not just visual clues or possibilities, but can be felt. Gaver talks about these technology affordances. There are also social affordances – talked about by many – mainly about how poor technological affordances impact on societies. It is mainly about the impact of technology and how it can contain and constrain sociality. And finally we have communicative affordances (Hutchby) – how technological affordances impact on communities and communicative practices.

So, what about platform changes? If we think about design affordances, we can see that there are different ways to understand this. The official reason given for the design change was about the audience – affording sociality of community and practices.

Affordances continues to play an important role in media and social media research. They tend to be conceptualised as either high-level or low-level affordances, with ontological and epistemological differences:

  • High: affordance in the relation – actions enabled or constrained
  • Low: affordance in the technical features of the user interface – reference to Gibson but they vary in where and when affordances are seen, and what features are supposed to enable or constrain.

Anne: We want to now turn to a platform-sensitive approach, expanding the notion of the user to different types of platform users: end-users, developers, researchers and advertisers – there is a real diversity of users and user needs and experiences here (see Gillespie on platforms). So, in the case of Twitter there are many users and many agendas – and multiple interfaces. Platforms are dynamic environments – and that differentiates social media platforms from the environments Gibson described. The computational systems driving media platforms are different; social media platforms adjust interfaces to their users through personalisation, A/B testing and algorithmic organisation (e.g. Twitter recommending people to follow based on interests and actions).

In order to take a relational view of affordances, and do it justice, we also need to understand what users afford to the platforms – as they contribute, create content, and provide data that enables use, development and income (through advertisers) for the platform. Returning to Twitter… the platform affords different things for different people.

Taking medium-specificity of platforms into account we can revisit earlier conceptions of affordance and critically analyse how they may be employed or translated to platform environments. Platform users are diverse and multiple, and relationships are multidirectional, with users contributing back to the platform. And those different users have different agendas around affordances – and in our Twitter case study, for instance, that includes developers and advertisers, users who are interested in affordances to measure user engagement.

How the social media APIs that scholars so often use for research are—for commercial reasons—skewed positively toward ‘connection’ and thus make it difficult to understand practices of ‘disconnection’ – Nicholas John (Hebrew University of Jerusalem) and Asaf Nissenbaum (Hebrew University of Jerusalem)

Consider this… On Facebook, if you add someone as a friend they are notified. If you unfriend them, they are not. If you post something you see it in your feed; if you delete it, that deletion is not broadcast. Facebook has a page called World of Friends – they don’t have one called World of Enemies. And Facebook does not take kindly to app creators who seek to surface unfriending and removal of content. Facebook is, like other social media platforms, therefore significantly biased towards positive friending and sharing actions. And that has implications for norms and for our research in these spaces.

One of our key questions here is what can’t we know about…

Agnotology is defined as the study of ignorance. Robert Proctor talks about this in three terms: native state – childhood, for instance; strategic ploy – e.g. the tobacco industry on health for years; lost realm – the knowledge that we cease to hold, that we lose.

I won’t go into detail on critiques of APIs for social science research, but as an overview the main critiques are:

  1. APIs are restrictive – they can cost money, we are limited to a percentage of the whole – Burgess and Bruns 2015; Bucher 2013; Bruns 2013; Driscoll and Walker
  2. APIs are opaque
  3. APIs can change with little notice (and do)
  4. Omitted data – Baym 2013 – now our point is that these platforms collect this data but do not share it.
  5. Bias to present – boyd and Crawford 2012

Asaf: Our methodology was to look at some of the most popular social media spaces and their APIs. We were looking at connectivity in these spaces – liking, sharing, etc. And we also looked for the opposite traits – unliking, deletion, etc. We found that social media APIs had very little data, if any, on “negative” traits – and we’ll look at this across three areas: other people and their content; me and my content; commercial users and their crowds.

Other people and their content – APIs tend to supply basic connectivity – friends/following, grouping, likes. Almost no historical content – except Facebook which shares when a user has liked a page. Current state only – disconnections are not accounted for. There is a reason to not know this data – privacy concerns perhaps – but that doesn’t explain my not being able to find this sort of information about my own profile.

Me and my content – negative traits and actions are hidden even from ourselves. Success is measured – likes and sharing, of you or by you. Decline is not – disconnections are lost connections… except on Twitter where you can see analytics of followers – but no names there, and not in the API. So we are losing who we once were but are not anymore. Social network sites do not see fit to share information over time… Lacking disconnection data is an ideological and commercial issue.

Commercial users and their crowds – these users can see much more of their histories, and of the negative actions online. They have a different regime of access in many cases, with the ups and downs revealed – though you may need to pay for access. Negative feedback receives special attention. Facebook offers the most detailed information on usage – including blocking and unliking information. Customers know more than users, and Pages more than Groups.

Nicholas: So, implications. What Asaf has shared shows the risk for API-based research… where researchers’ work may be shaped by the affordances of the API being used. Any attempt to capture negative actions – unlikes, choices to leave or unfriend – is constrained. If we can’t use APIs to measure social media phenomena, we have to use other means. So, unfriending is understood through surveys – time consuming and problematic. And that can put you off exploring these spaces – it limits research. The advertiser-friendly user experience distorts the space – it’s like the stock market only reporting the rises, except for a few super wealthy users who get the full picture.

A biography of Twitter (a story told through the intertwined stories of its key features and the social norms that give them meaning, drawing on archival material and oral history interviews with users) – Jean Burgess (Queensland University of Technology) and Nancy Baym (Microsoft Research)

I want to start by talking about what I mean by platforms, and what I mean by biographies. Here platforms are these social media platforms that afford particular possibilities; they enable and shape society – we heard about the platformisation of society last night – but their governance and affordances are shaped by their own economic existence. They are shaping and mediating socio-cultural experience and we need to better understand the values and socio-cultural concerns of the platforms. By platform studies we mean treating social media platforms as spaces to study in their own right: as institutions, as mediating forces in the environment.

So, why “biography” here? First we argue that whilst biographical forms tend to be reserved for individuals (occasionally companies and race horses), they are about putting the subject in context of relationships, place in time, and that the context shapes the subject. Biographies are always partial though – based on unreliable interviews and information, they quickly go out of date, and just as we cannot get inside the heads of those who are subjects of biographies, we cannot get inside many of the companies at the heart of social media platforms. But (after Richard Rogers) understanding changes helps us to understand the platform.

So, in our forthcoming book, Twitter: A Biography (NYU 2017), we will look at competing and converging desires around e.g. the @, the RT and the #. Twitter’s key features are key characters in its biography. Each has been a rich site of competing cultures and norms. We drew extensively on the Internet Archive, bloggers, and interviews with a range of users of the platform.

Nancy: When we interviewed people we downloaded their archive with them and talked through their behaviour and how it had changed – and many of those features and changes emerged from that. What came out strongly is that no one knows what Twitter is for – not just amongst users but also amongst the creators – you see that today with Jack Dorsey and Anne Richards. The heart of this issue is whether Twitter is about sociality and fun, or a very important site for sharing important news and events. Users try to negotiate why they need this space, what it is for… They start squabbling, saying “Twitter, you are doing it wrong!”… Changes come with backlash and response, and changed decisions from Twitter… But that is also accompanied by the media coverage of Twitter, and by the third-party platforms built on Twitter.

So the “@” is at the heart of Twitter for sociality and Twitter for information distribution. It was imported from other spaces – IRC most obviously – as with other features. One of the earliest things Twitter incorporated was the @ and the links back… Originally you could see everyone’s @ replies, which led to feed clutter – although some liked seeing unexpected messages like this. So Twitter made a change so you could choose. And then they changed again so that you automatically do not see replies from those you don’t follow. So people worked around that with “.@” – which created conflict between the needs of the users, the ways they make the platform usable, and the way the platform wants to make the space less confusing to new users.

The “RT” gave credit to people for their words, and preserved integrity of words. At first this wasn’t there and so you had huge variance – the RT, the manually spelled out retweet, the hat tip (HT). Technical changes were made, then you saw the number of retweets emerging as a measure of success and changing cultures and practices.

The “#” is hugely disputed – it emerged through hashtag.org: you couldn’t follow hashtags in Twitter at first, but Twitter incorporated them to fend off third-party tools. They are beloved by techies, and hated by user experience designers. And they are useful, but they are also easily co-opted by trolls – as we’ve seen on our own hashtag.

Insights into the actual uses to which audience data analytics are put by content creators in the new screen ecology (and the limitations of these analytics) – Stuart Cunningham (QUT) and David Craig (USC Annenberg School for Communication and Journalism)

The algorithmic culture is well understood as a part of our culture. There are around 150 items on Tarleton Gillespie and Nick Seaver’s recent reading list and the literature is growing rapidly. We want to bring back a bounded sense of agency in the context of online creatives.

What do I mean by “online creatives”? Well we are looking at social media entertainment – a “new screen ecology” (Cunningham and Silver 2013; 2015) shaped by new online creatives who are professionalising and monetising on platforms like YouTube, as opposed to professional spaces, e.g. Netflix. YouTube has more than 1 billion users, with revenue in 2015 estimated at $4 billion per year. And there are a large number of online creatives earning significant incomes from their content in these spaces.

Previously online creatives were bound up with ideas of democratic participative cultures, but we want to offer an immanent critique of the limits of data analytics/algorithmic culture in shaping SME from within the industry, on both the creator (bottom up) and platform (top down) side. This is an approach to social criticism that exposes the way reality conflicts not with some “transcendent” concept of rationality but with its own avowed norms, drawing on Foucault’s work on power and domination.

We undertook a large number of interviews and from that I’m going to throw some quotes at you… There is talk of information overload – of what one might do as an online creative presented with a wealth of data. Creatives talk about the “non-scalable practices” – the importance and time required to engage with fans and subscribers. Creatives talk about at least half of a working week being spent on high touch work like responding to comments, managing trolls, and dealing with challenging responses (especially with creators whose kids are engaged in their content).

We also see cross-platform engagement – and an associated major scaling in workload. There is a volume issue on Facebook, and the use of Twitter to manage that. There is also a sense of unintended consequences – scale has destroyed value. Income might be $1 or $2 for 100,000s or millions of views. There are inherent limits to algorithmic culture… But people enjoy being part of it and reflect a real entrepreneurial culture.

In one or two sentences, the history of YouTube can be seen as a sort of clash of NorCal and SoCal cultures. Again, no one knows what it is for. And that conflict has been there for ten years. And you also have the MCNs (multi-channel networks) who are caught like the meat in the sandwich here.

Panel Q&A

Q1) I was wondering about user needs and how that factors in. You all drew upon it to an extent… And the dissatisfaction of users around whether needs are listened to or not was evident in some of the case studies here. I wanted to ask about that.

A1 – Nancy) There are lots of users, and users have different needs. When platforms change and users are angry, others are happy. We have different users with very different needs… Both of those perspectives are user needs, they both call for responses to make their needs possible… The conflict and challenges, how platforms respond to those tensions and how efforts to respond raise new tensions… that’s really at the heart here.

A1 – Jean) In our historical work we’ve also seen that some users voices can really overpower others – there are influential users and they sometimes drown out other voices, and I don’t want to stereotype here but often technical voices drown out those more concerned with relationships and intimacy.

Q2) You talked about platforms and how they developed (and I’m afraid I didn’t catch the rest of this question…)

A2 – David) There are multilateral conflicts about what features to include and exclude… And what is interesting is thinking about what ideas fail… With creators you see economic dependence on platforms and affordances – e.g. versus PGC (Professionally Generated Content).

A2 – Nicholas) I don’t know what user needs are in a broader sense, but everyone wants to know who unfriended them, who deleted them… And a dislike button, or an unlike button… The response was strong but “this post makes me sad” doesn’t answer that and there is no “you bastard for posting that!” button.

Q3) Would it be beneficial to expose unfriending/negative traits?

A3 – Nicholas) I can think of a use case for why unfriending would be useful – for instance wouldn’t it be useful to understand unfriending around the US elections. That data is captured – Facebook know – but we cannot access it to research it.

A3 – Stuart) It might be good for researchers, but is it in the public good? In Europe and with the Right to be Forgotten should we limit further the data availability…

A3 – Nancy) I think the challenge is that mismatch of only sharing good things, not sharing and allowing exploration of negative contact and activity.

A3 – Jean) There are business reasons for positivity versus negativity, but it is also about how the platforms imagine their customers and audiences.

Q4) I was intrigued by the idea of the “medium specificity of platforms” – what would that be? I’ve been thinking about devices and interfaces and how they are accessed… We have what we think of as a range, but actually we are used to using really one or two platforms – e.g. the Apple iPhone – in terms of design, icons, etc., and what the possibilities of the interface are, and what happens when something is made impossible by the interface.

A4 – Anne) By “medium specificity” we mean the platform itself as a medium – moving beyond the end user and user experience. We wanted to take into account the role of the user more broadly – the platform also has interfaces for developers, for advertisers, etc. – and we wanted to think about those multiple interfaces, where they connect, how they connect, etc.

A4 – Taina) It’s a great point about medium specificity, but for me it’s more about platform specificity.

A4 – Jean) The integration of mobile web means the phone iOS has a major role here…

A4 – Nancy) We did some work with couples who brought in their phones, and when one had an Apple and one had an Android phone we actually found that they often weren’t aware of what was possible in the social media apps as the interfaces are so different between the different mobile operating systems and interfaces.

Q5) Can you talk about algorithmic content and content innovation?

A5 – David) In our work with YouTube we see forms of innovation that are very platform specific, around things like Vine and Instagram. And we also see counter-industrial forms and practices. So, in the US, we see blogging and first-person accounts of lives… beauty, unboxing, etc. But if you map content innovation you see (similarly) this taking the form of gaps in mainstream culture – in India that’s stand-up comedy, for instance. Algorithms are then looking for qualities and connections based on what else is being accessed – creating a virtuous circle…

Q6) Can we think of platforms as unstable – as having not quite such a uniform sense of purpose and direction?

A6 – Stuart) Most platforms are very big in terms of their finance… If you compare that to 20 years ago the big companies knew what they were doing! Things are much more volatile…

A6 – Jean) That’s very common in the sector, except maybe on Facebook… Maybe.

PA-05: Identities (Chair: Tero Jukka Karppi)

The Bot Affair: Ashley Madison and Algorithmic Identities as Cultural Techniques – Tero Karppi, University at Buffalo, USA

As of 2012 Ashley Madison is the biggest online dating site targeted at those already in a committed relationship. Users are asked to share their gender, their sexuality, and to share images. Some aspects are free but message and image exchange are limited to paid accounts.

The site was hacked in 2015, with site user data stolen and then shared. Security experts who analysed the data assessed it as real, associated with real payment details etc. The hackers’ intention was to expose cheaters, but my paper is focused on a different aspect of the aftermath. Analysis showed 43 male bots and 70,000 female bots, and that is the focus of my paper. And I want to think about this space and connectivity by removing the human user from the equation.

My method was to think about the distinction between the human and non-human user, the individual and the bot. Drawing on German media theory, I wanted to use cultural techniques – with materials, symbolic values, rules and places. So I am seeking elements of difference between different materials in the context of the hack and the aftermath.

So, looking at a news item: “Ashley Madison, the dating website for cheaters, has admitted that some women on its site were virtual computer programmes instead of real women” (CNN Money), which goes on to say that users thought that they were cheating, but they weren’t after all! These bots interacted with users in a variety of ways, from “winking” to messaging. The role of the bot is to engage users in the platform and transform them into paying customers. A blogger talked about the space as all fake – the men are cheaters, the women are bots and only the credit card payments are real!

The fact that the bots are so gender imbalanced tells us something about how differently the platform imagines male and female users. In another commentary they note the ways in which fake accounts drew men in – both by implying real women were on the site, and by using real images on fake accounts… The lines between what is real and what is fake have been blurred. Commentators noted the opaqueness of connectivity here, and of the role of the bots. Who knows how many of the 4 million users were real?

The bots are designed to engage users, to appear as human to the extent that we understand human appearance. Santine Olympo talked about bots, whilst others look at algorithmic spaces and what can be imagined and created from our wants and needs. According to Ashley Madison employees the bots – or “angels” – were created to match the needs of users, recycling old images from real user accounts. This case brings together the “angel” and human users. A quote from a commentator imagines this as a science fiction fantasy where real women are replaced by perfect, interested bots. We want authenticity in social media sites, but bots are part of our mundane everyday existence and part of these spaces.

I want to finish by quoting from Ashley Madison’s terms and conditions, in which users agree that “some of the accounts and users you may encounter on the site may be fiction”.

Facebook algorithm ruins friendship – Taina Bucher, University of Copenhagen

“Rachel”, a Facebook user/informant, states this in a tweet. She has a Facebook account that she doesn’t use much. She posts something and old school friends she has forgotten comment on it. She feels out of control… What I want to focus on today are the ordinary affects of algorithmic life, taking that idea from ?’s work and Catherine Stewart’s approach to using it in the context of understanding the encounters between people and algorithmic processes. I want to think about the encounter, and how the encounter itself becomes generative.

I think that the fetish could be one place to start in knowing algorithms… And how people become attuned to them. We don’t want to treat algorithms as a fetish. The fetishist doesn’t care about the object, just about how the object makes them feel. And so the algorithm as fetish can be a mood maker, using the “power of engagement”. The power does not reside in the algorithm, but in the types of ways people imagine the algorithm to exist and impact upon them.

So, I have undertaken a study of people’s personal algorithm stories about the Facebook algorithm, monitoring and querying Twitter for comments and stories (through keywords) relating to Facebook algorithms. A total of 25 interviews were undertaken via email, chat and Skype.

So, when Rachel tweeted about Facebook and friendship, that gave me the starting point to understand stories, and the context for these positions, through interviews. And what repeatedly arose was the uncanny nature of Facebook algorithms. Take, for instance, Michael, a musician in LA. He shares a post and usually the likes come in rapidly, but this time nothing… He tweets that the algorithm is “super frustrating” and he believes that Facebook only shows paid-for posts. Like others he has developed his own strategy to get posts shown more prominently. He says:

“If the status doesn’t build buzz (likes, comments, shares) within the first 10 minutes or so it immediately starts moving down the news feed and eventually gets lost.”

Adapting behaviour to social media platforms and their operation can be seen as a form of “optimisation”. Users aren’t just updating their profile or hoping to be seen; they are trying to change their behaviour to be better seen by the algorithm. And this takes us to the algorithmic imaginary: ways of thinking about what algorithms are, what they should be, how they function, and what these imaginations in turn make possible. Many of our participants talked about changing behaviours for the platform. When Rachel talks about “clicking every day to change what will show up on her feed”, she is not only using the platform, but thinking and behaving differently in the space. Adverts can also suggest algorithmic intervention and, whether or not the user has actually been profiled (e.g. for anti-wrinkle cream), users can feel profiled regardless.

So, people do things to algorithms – disrupting liking practices, commenting more frequently to increase visibility, emphasising positively charged words, etc. These actions are not just interpreted by the algorithm but also shape that algorithm. Critiquing the algorithm is not enough; people are also part of the algorithm and impact upon its function.

Algorithmic identity – Michael Stevenson, University of Groningen, Netherlands

Michael is starting with a poster of Blade Runner… Algorithmic identity brings to mind cyberpunk and science fiction. But day to day algorithmic identity is often about ads for houses, credit scores… And I’m interested in this connection between this clash of technological cool vs mundane instruments of capitalism.

For critics the “cool” is seen as an ideological cover for the underlying political economy. We can look at the rhetoric around technology – “rupture talk”, digital utopianism as that covering of business models etc. Evgeny Morozov writes entertainingly of this issue. I think this critique is useful but I also think that it can be too easy… We’ve seen Morozov tear into Jeff Jarvis and Tim O’Reilly, describing the latter as a spin doctor for Silicon Valley. I think that’s too easy…

My response is this… an image of Christopher Walken saying “needs more Bourdieu”. I think we need to take seriously the values and cultures, and the effort it takes to create those. Bourdieu maps the new media field with, at one end of the spectrum, the “autonomous pole” – “web native”, open, participatory, transparent; and at the other the “heteronomous pole” – mass/traditional media, closed, controlled, opaque. The idea is that actors locate themselves between these poles… There is also competition to be seen as the most open, the most participatory – you may remember a post from a few years back on Google’s idea of open versus that of Facebook. Bourdieu talks of the autonomous pole as being about downplaying income and economic value, whereas the heteronomous pole is much more directly about that…

So, I am looking at “Everything” – a site designed in the 1990s. It was built by the guys behind Slashdot. It was intended as a compendium of knowledge to support and accompany that site – items of common interest, background knowledge that wasn’t news. If we look at the site we see implicit and explicit forms of impact… voting forms on articles (e.g. “I like this write up”), and soft links at the bottom of the page – generated by these types of feedback and engagement. That was the first version, in the 1990s. Then in 1999 Nathan Dussendorf(?) developed Everything2, built with the Everything Development Engine. This is still online. Here you see that techniques of algorithmic identity and the datafication of users are very explicitly presented – very much unlike Facebook. Among the geeks here the technology is put on top, showing reputation on the site. And being open source, if you wanted to understand the recommendation engine you could just look it up.
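Note: purely as an illustration of the general technique – my own assumption, not Everything2’s actual implementation – soft links of this kind could be generated from co-occurrence in user sessions, something like this:

```python
# Sketch only: link pages that users visit in the same session, and surface
# the strongest co-occurrences as "soft links" at the bottom of each page.
from collections import defaultdict
from itertools import combinations

cooccurrence = defaultdict(int)

def record_session(pages_visited):
    """Count how often each pair of pages appears in the same session."""
    for a, b in combinations(sorted(set(pages_visited)), 2):
        cooccurrence[(a, b)] += 1

def soft_links(page, top_n=5):
    """Return the pages most often co-visited with `page`."""
    scores = defaultdict(int)
    for (a, b), count in cooccurrence.items():
        if page == a:
            scores[b] += count
        elif page == b:
            scores[a] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

record_session(["soy", "tofu", "slashdot"])   # invented page names
record_session(["soy", "tofu"])
print(soft_links("soy"))  # ['tofu', 'slashdot']
```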

If we think of algorithms as talk makers, and we look back at 1999’s Everything2, you see the tracking and datafication in place, but the statements around it talk about web 2.0/social media-type ideas of democracy and meritocracy, and conflations of cultural values and social actions with technologies and techniques. Aspects of this are bottom up, and the site also talks about the role of cookies and addresses privacy. And it directly says “the more you participate, the greater the opportunity for you to mold it your way”.

Thinking about Field Theory we can see some symbolic exclusion – of Microsoft, of large organisations – as a way to position Everything2 within the field. This continues throughout the documentation across the site. And within this field “making money is not a sin” – that developers want to do cool stuff, but that can sit alongside making money.

So, I don’t want to suggest this is a utopian space… Everything2 had a business model, but this was of its time for open source software. The idea was to demonstrate capabilities of the development framework, to get them to use it, and to then get them to pay for services… But this was 2001 and the bubble burst… So the developers turned to “real jobs”. But Everything2 is still out there… And you can play with the first version on an archived version if you are curious!

The Algorithmic Listener – Robert Prey, University of Groningen, Netherlands

This is a version of a paper I am working on – feedback appreciated. It was sparked by re-reading Raymond Williams, who wrote that “there are in fact no masses, but only ways of seeing people as masses” (1958/2011). I think that in the current environment Williams might now say “there are in fact no individuals, but only ways of seeing people as individuals”. And for me, I’m looking at this through the lens of music platforms.

In an increasingly crowded and competitive sector, platforms like Spotify, SoundCloud, Apple Music, Deezer, Pandora and Tidal are increasingly trying to differentiate themselves through recommendation engines. And I’ll go on to talk about recommendations as individualisation.

Pandora internet radio calls itself the “Music Genome Project” and sees music as genes. It seeks to provide recommendations that are outside the distorting impact of cultural information – e.g. you might like “The Colour of My Love” but you might be put off by the fact that Celine Dion is not cool. They market themselves against the crowd. They play on the individual as the part separated from the whole. However…

Many of you will be familiar with Spotify, and will therefore be familiar with Discover Weekly. The core of Spotify is the “taste profile”. Every interaction you have is captured and recorded in real time – selected artists, songs, behaviours, what you listen to and for how long, what you skip. Discover Weekly uses both the taste profile and aspects of collaborative filtering – selecting songs you haven’t discovered that fit your taste profile. So whilst it builds a unique identity for each user, it also relies heavily on other people’s taste. Pandora treats other people as distortion; Spotify sees them as more information. Discover Weekly also understands the user based on current and previous behaviours. Ajay Kalia (Spotify) says:

“We believe that it’s important to recognise that a single music listener is usually many listeners… [A] person’s preference will vary by the type of music, by their current activity, by the time of day, and so on. Our goal then is to come up with the right recommendation…”

This treats identity as being in context, as being the sum of our contexts. Previously fixed categories, like gender, are not assigned at the beginning but emerge from behaviours and data. Pagano talks about this, whilst Cheney-Lippold (2011) talks about a “cybernetic relationship to the individual” and the idea of individuation (Simondon). For Simondon we are not individuals; individuals are an effect of individuation, not the cause. A focus on individuation transforms our relationship to recommendation systems… We shouldn’t be asking whether they understand who we are, but to what extent the person is an effect of personalisation. Personalisation is presented as being about you and your needs. From a Simondonian perspective there is no “you” or “want” outside of technology. In taking this perspective we have to acknowledge the political economy of music streaming systems…
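Note: a generic sketch of the kind of user-based collaborative filtering gestured at here – my own toy example of the technique, not Spotify’s actual system:

```python
# Sketch only: recommend unheard songs by weighting other users' listening
# by their similarity to the target user's "taste profile" (play counts).
import numpy as np

# Rows: users, columns: songs; values: play counts (toy data).
plays = np.array([
    [5, 3, 0, 0],
    [4, 0, 0, 1],
    [0, 0, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def recommend(user_idx, top_n=2):
    """Score unheard songs by similarity-weighted plays of other users."""
    target = plays[user_idx]
    sims = np.array([cosine_sim(target, other) for other in plays])
    sims[user_idx] = 0.0                 # ignore the user themselves
    scores = sims @ plays                # weighted sum over other users
    scores[target > 0] = -np.inf         # only recommend unheard songs
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0))  # indices of songs suggested for user 0
```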

And the reality is that streaming services are increasingly important to industry and advertisers, particularly as many users use the free variants. A developer of Pandora talks about the importance of understanding profiles for advertisers. Pandora boasts that they have 700 audience segments to date: “Whether you want to reach fitness-driven moms in Atlanta or mobile Gen X-ers…”. The Echo Nest, now owned by Spotify, had created highly detailed consumer profiling before it was bought up. That idea isn’t new, but the detail is. The range of segments here is highly granular… And this brings us to the point that we need to take seriously what Nick Seaver (2015) says: we need to think of “contextualisation as a practice in its own right”.

This matters as the categories that emerge online have profound impacts on how we discover and encounter our world.

Panel Q&A

Q1) I think this is about music categorisation but also has wider relevance… I had an introduction to the NLP process of topic modelling – where you label categories after the fact… The machine sorts without those labels and takes them from the data. Do you have a sense of whether the categorisation is top down, or is it emerging from the data? And if there is similar top down or bottom up categorisation in the other presentations, that would be interesting.

A1 – Robert) I think that’s an interesting question. Many segments are impacted by advertisers, and by identifying groups they want to reach… But they may also…

Michael) You talked about the Ashley Madison bots – did they have categorisation, A/B testing, etc. to find successful bots?

Tero) I don’t know, but I think it would mean looking at machine learning and the history of machine learning…

Michael) The idea of content filtering from the bottom to the top was part of the thinking behind Everything…

Q2) I wanted to ask about the feedback loop between the platforms and the users, who are implicated here, in formation of categories and shaping platforms.

A2 – Taina) Not so much in the work I showed, but I have had some in-depth Skype interviews with school children, and they all had awareness of some of these (Facebook algorithm) issues, the press coverage and particularly the review-of-the-year type videos… People pick up on this, and on the power of the algorithm. One of the participants has emailed me since the study, noting how much she sees writing about the algorithm, and about algorithms in other spaces. Awareness of the algorithms shaping these spaces is growing; it is more prominent than it was.

Q3) I wanted to ask Michael about that idea of positioning Everything2 in relation to other sites… And also the idea of the individual being transformed by platforms like Spotify…

A3 – Michael) I guess the Bourdieusian vision is that anyone who wants to position themselves on the spectrum can. With Everything you had this moment during the internet bubble, a form of utopianism… You see it come together somewhat… and the gap between Wired – traditional mass media – and smaller players, but then also a coming together around shared interests and common enemies.

A3 – Robert) There were segments that did come from media, from radio and from advertisers, and that’s where the idea of genre came in… That has real effects… When I was at high school there were common groups around particular genres… But right now the move to streaming and online music means there is far more mixed listening, and people self-organise in different ways. There has been de-bunking of Bourdieu, but his work was done at a really different time.

Q4) I wanted to ask about interactions between humans and non-human. Taina, did people feel positive impacts of understanding Facebook algorithms… Or did you see frustrations with the Twitter algorithms. And Tero, I was wondering how those bots had been shaped by humans.

A4 – Taina) On the human and the non-human, and whether people felt more or less frustrated by understanding the algorithm: even if they felt they knew, it changes all the time; their strategies might help but then become obsolete… And practices of concealment and misinformation were tactics here. But just knowing what is taking place, and trying to figure it out, is something that I get a sense is helpful… though maybe that isn’t the right answer to it. And that notion of the human and the non-human is interesting, particularly for when we see something as human, and when we see things as non-human. In terms of some of the controversies… when is an algorithm blamed versus a human? There is no necessary link or consistency there… So when do we assign humanness and non-humanness to the system, and does it make a difference?

A4 – Tero) I think that’s a really interesting question… Looking at social media now from this perspective helps us to understand that, and the idea of how we understand what is human and what is non-human agency… and what it is to be a human.

Q5) I’m afraid I couldn’t hear this question.

A5 – Robert) Spotify supports what Deleuze wrote about in terms of the individual, and how aspects of our personality are highlighted at the points that are convenient. And how does that affect how we regulate ourselves? Maybe the individual isn’t the most appropriate unit any more.

A5 – Taina) For users, the sense that they are being manipulated, or that they can be summed up by the algorithm, is what can upset or disconcert them… They don’t like to feel summed up like that…

Q6) I really like the idea of the imagined… And perceptions of non-human actors… In the Ashley Madison case we assume that men thought bots were real… But maybe not everyone did that. I think that moment of how and when people imagine and ascribe human or non-human status here. In one way we aren’t concerned by the imaginary… And in another way we might need to consider different imaginaries – the imaginary of the platform creators vs. users for instance.

A6 – Tero) Right now I’m thinking about two imaginaries here… Ashley Madison’s imaginary around the bots, and the users encountering them and how they imagine those bots…

A6 – Taina) A good question… how many imaginaries do you think?! It is about understanding more who you encounter, who you engage with. Imaginaries are tied to how people conceive of their practice in their context, which varies widely, in terms of practices and what you might post…

And with that session finished – and much to think about in terms of algorithmic roles in identity – it’s off to lunch… 

PS-09: Privacy (Chair: Michael Zimmer)

Unconnected: How Privacy Concerns Impact Internet Adoption – Eszter Hargittai, Ashley Walker, University of Zurich

The literature in this area seems to target the usual suspects – age, socio-economic status… But the literature does not tend to talk about privacy. I think one of the reasons may be the idea that you can’t compare users and non-users of the internet on privacy. But we have located a data set that does address this issue.

The U.S. Federal Communications Commission ran a National Consumer Broadband Service Capability Survey in 2009 – when about 24% of Americans were still not yet online. This work is some years old, but our interest is in the comparison rather than the numbers/percentages. And it questioned both internet users and non-users.

One of the questions was: “It is too easy for my personal information to be stolen online”, and participants were asked if they strongly agreed, somewhat agreed, somewhat disagreed or disagreed. We dichotomised that – strongly agreed or not. Analysing it we found that among internet users 63.3% strongly agreed, versus 81% of non internet users. Now we did analyse demographically… It is what you would expect generally – more older people are not online (though interestingly more female respondents are online). But even accounting for demographics, internet non-users were again more likely to strongly agree with that privacy concern.
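Note: the comparison described here is essentially a cross-tabulation; a sketch of how it might be computed on a hypothetical survey extract (column names and coding invented, not the FCC dataset’s actual variables):

```python
# Sketch only: dichotomise the concern item and compare users vs non-users.
import pandas as pd

survey = pd.DataFrame({
    "internet_user": [True, True, False, False, True, False],
    "info_theft_concern": ["strongly agree", "somewhat agree", "strongly agree",
                           "strongly agree", "somewhat disagree", "somewhat agree"],
})

# Strongly agree vs everything else.
survey["strongly_agrees"] = survey["info_theft_concern"] == "strongly agree"

# Share of each group (users vs non-users) strongly agreeing.
print(survey.groupby("internet_user")["strongly_agrees"].mean())
```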

So, what does that mean? Well, efforts to get people online should address people’s concerns about privacy. There is also a methodological takeaway – there is value in asking non-users internet-related questions, as they may explain their reasons for not being online.

Q&A

Q1) Was it asked whether they had previously been online?

A1) There is data on drop outs, but I don’t know if that was captured here.

Q2) Is there a differentiation in how internet use is done – frequently or not?

A2) No, I think it was use or non-use. But we have a paper coming out on those with disabilities and detailed questions on internet skills and other factors – that is a strength of the dataset.

Q3) Are there security or privacy questions in the dataset?

A3) I don’t think there are, or we would have used them. It’s a big national dataset… There is a lot on type of internet connection and quality of access in there, if that is of interest.

Note, there is more on some of the issues around access, motivations and skills in the Royal Society of Edinburgh Spreading the Benefits of Digital Participation in Scotland Inquiry report (Fourman et al 2014). I was a member of this inquiry so if anyone at AoIR2016 is interested in finding out more, let me know. 

Enhancing online privacy at the user level: the role of internet skills and policy implications – Moritz Büchi, Natascha Just, Michael Latzer, U of Zurich, Switzerland

Natascha: This presentation is connected with a paper we just published and where you can read more if you are interested.

So, why do we care about privacy protection? Well there is increased interest in/availability of personal data. We see big data as a new asset class, we see new methods of value extraction, we see growth potential of data-driven management, and we see platformisation of internet-based markets. Users have to continually balance the benefits with the risks of disclosure. And we see issues of online privacy and digital inequality – those with fewer digital skills are more vulnerable to privacy risks.

We see governance becoming increasingly important and there is an issue of understanding appropriate measures. Market solutions through industry self-regulation are problematic because of a lack of incentives – industry benefits from the data. At the same time states are not well placed to regulate because of knowledge gaps and the dynamic nature of the tech sector. There is also a route through users' self-help. Users' self-help can be an effective method to protect privacy – whether opting out, or using privacy-enhancing technology. But although we are increasingly concerned, we still share our data and engage in behaviour that could threaten our privacy online. And understanding that is crucial to understanding what can trigger users towards self-help behaviour. To do that we need evidence, and we have been collecting that through a world internet study.

Moritz: We can empirically address issues of attitudes, concerns and skills. The literature finds all of these important, but usually at most two of the factors are covered in any one study. Our research design and contribution uses general population data, nationally representative so that it can feed into policy. The data was collected in the World Internet Project, though many questions were only asked in Switzerland. Participants were approached on landline and mobile phones. About 88% of our participants were internet users – that maps to the approximate share of the population using the internet in Switzerland.

We found a positive effect of privacy attitudes on protection behaviours – but a small one. There was a strong effect of past privacy breaches on engaging in privacy protection behaviours. And general internet skills also had an effect on privacy protection. Privacy breaches – learning the hard way – do predict privacy self-protection. Caring is not enough – pro-privacy attitudes do not really predict privacy protection behaviours. But skills are central – and that can mean that digital inequalities are exacerbated, because users with low general internet skills do not tend to engage in privacy protection behaviour.
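Note: a rough sketch of the kind of model behind findings like these (my own illustration on simulated data, not the authors' specification – variable names, scales and effect sizes are assumptions) would regress protection behaviour on attitudes, past breaches and skills:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated stand-in for survey data; all names and effect sizes are made up.
data = pd.DataFrame({
    "privacy_attitude": rng.normal(0, 1, n),  # pro-privacy attitudes
    "past_breach": rng.integers(0, 2, n),     # experienced a breach (0/1)
    "internet_skills": rng.normal(0, 1, n),   # general internet skills
})
# Protection behaviour driven mostly by breaches and skills, weakly by attitudes
data["protection"] = (0.1 * data["privacy_attitude"]
                      + 0.5 * data["past_breach"]
                      + 0.4 * data["internet_skills"]
                      + rng.normal(0, 1, n))

model = smf.ols(
    "protection ~ privacy_attitude + past_breach + internet_skills",
    data=data,
).fit()
print(model.summary())
```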

Q&A

Q1) What do you mean by internet skills?

A1 – Moritz): In this case there were statements that participants were asked about, following an internet skills model developed by Alexander van Deursen (?) and colleagues, which asks for agreement or disagreement with each statement.

Navigating between privacy settings and visibility rules: online self-disclosure in the social web – Manuela Farinosi, University of Udine; Sakari Taipale, University of Jyväskylä

Our work is focused on self-disclosure online, and particularly on whether young people are concerned about privacy in relation to other internet users, privacy in relation to Facebook itself, or privacy in relation to other parties.

Facebook offers complex privacy settings allowing users to adopt a range of strategies in managing their information and sharing online. Waters and Ackerman (2011) talk about the practice of managing privacy settings and the factors that play a role, including culture, motivation, risk-taking ratio, etc. And other factors are at play here. Fuchs (2012) talks about Facebook as a commercial organisation and the concerns around that. But only some users are aware of the platform's access to their data; others may believe their content is (relatively) private. And for many users privacy from other people is more crucial than privacy from Facebook.

And there are differences in privacy management… Women are less likely to share their phone number, sexual orientation or book preferences. Men are more likely to share corporate information and political views. Several scholars have found that women are more cautious about sharing their information online. Nosko et al (2010) found no significant difference in information disclosure except for political information (which men still do more of).

Sakari: Manuela conducted an online survey in 2012 in Italy with single and multiple choice questions. It was issued to university students – 1125 responses were collected. We focused on 18-38 year old respondents, and only those using Facebook. We have slightly more female than male participants, mainly 18-25 years old. Mostly single (but not all). And most use Facebook every day.

So, a quick reminder of Facebook’s privacy settings… (a screenshot reminder, you’ve seen these if you’ve edited yours).

To the results… We found that the data most often kept private and not shared are mobile phone number, postal address or residence, and usernames of instant messaging services. The only such data they do share is the email address. But disclosure is high for other types of data – birth date for instance. And they were not using friends lists to manage who sees their data. Our research also confirmed that women are more cautious about sharing their data, and men are more likely to share political views. The only disclosures not related to gender were email address and date of birth.

Concerns were mainly about other users rather than about Facebook itself, and Italy was not substantially different in this respect. We found very consistent gender effects across our study. We also checked other factors related to concerns, but age, marital status, education, and perceived level of expertise as a Facebook user did not have a significant impact. The more time you spend on Facebook, the less likely you are to care about privacy issues. There was also a connection between respondents' privacy concerns and disclosures made by others on their wall.
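Note: as an illustration of the kind of gender comparison reported here (made-up counts, not the study's data), a chi-square test of independence on a gender × disclosure table might look like this:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = gender, columns = phone number disclosed or not
table = pd.DataFrame(
    {"disclosed": [40, 90], "not_disclosed": [310, 260]},
    index=["women", "men"],
)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```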

So, in conclusion, women are more aware of online privacy protection than men, and of protection of the private sphere; they take more active self-protection measures. And we can speculate on the reasons… There are differences in the sense of security/insecurity and in risk perception between men and women, and the more sociological understanding of women as maintainers of social labour – used to taking more care of their material… Future research is needed though.

Q&A

Q1) When you asked users about privacy settings on Facebook how did you ask that?

A1) They could go and check, or they could remember.

Whose Privacy? Lobbying for the Free Flow of European Personal Data – Jockum Philip Hildén, University of Helsinki, Finland

My focus is related to political science… And my topic is lobbying for the free flow of European personal data – how the General Data Protection Regulation came into being and which lobbyists influenced the legislators. This is a new piece of regulation coming into force in 2018. It was the subject of a great deal of lobbying – the lobbying became visible when the regulation was in parliament, but it started much earlier than that.

So, a quick description of EU law making. The European Commission proposes legislation and that goes to both the Council of the European Union and to the Parliament. Both draw up their own drafts based on the proposal and then that becomes the final regulation. In this particular case there was public consultation before the final regulation, so I looked at a wide range of publicly available position papers. Looking across these I could see 10 types of stakeholders replying to the consultations – far more in 2011 than to the first version in 2009. Companies in the US participated to a very high degree – almost as much as those in the UK and France. That's interesting… And that's partly to do with the extended scope of this new regulation, which covers not only the EU but also service providers in the US and other locations – an extraterritorial reach known as "the Brussels effect", which is not exclusive to this regulation.

In terms of sector I have categorised the stakeholders – dividing, for instance, IP and node communications – to understand their interests. But I am interested in what they are saying, so I draw on Klüver (2013) and the "preference attainment model" to compare the policy preferences of interest groups with the Commission's preliminary draft proposal, the Commission's final proposal, and the final legislative act adopted by the Council. So, what interests did the Council take into account? Well almost every article changed – which makes those changes hard to pin down. But…
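Note: the preference attainment logic – checking whether each successive draft moved closer to a stakeholder's stated position – could be sketched very crudely with text similarity (purely illustrative; the texts below are invented and Klüver's model is far more sophisticated):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity between two article texts (0..1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Invented article texts at different stages of the legislative process
lobby_position = "consent should be unambiguous and may be implied by conduct"
commission_draft = "consent must be explicit for the processing of personal data"
council_draft = "consent must be unambiguous for the processing of personal data"

# If the later text is closer to the lobby position, attainment has increased
print("Commission draft vs lobby:", round(similarity(commission_draft, lobby_position), 2))
print("Council draft vs lobby:   ", round(similarity(council_draft, lobby_position), 2))
```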

There is an EU Power Struggle. The Commission draft contained 26 different cases where the Commission was empowered to adopt delegated acts. All but one of these were removed from the Council's draft. And there were 48 exceptions for member states, most of them "in the public interest"… But that could mean anything! And thus the role of nation states comes into question. The idea of European law is to have consistent policy – that amount of variance undermines that.

We also see a degree of user disempowerment. Here we see responses from DigitalEurope – an industry group representing digital technology companies whose business involves this kind of data collection; but we also see the American Chamber of Commerce submitting responses. In these responses both are lobbying for "implicit consent" – the original draft required explicit consent. And the Commission sort of bought into this, using a concept of unambiguous consent… which is itself very ambiguous. I then compared the Council vs Free Data Advocates, and the Council vs Privacy Advocates. The Free Data Advocates are pro free movement of data, and pro privacy – as that's useful to them too – but they are not keen on greater Commission powers. Privacy Advocates are pro privacy and more supportive of Commission powers.

In Search of Safe Harbors – Privacy and Surveillance of Refugees in Europe – Paula Kift, New York University, United States of America

Over 2015 a million refugees and migrants arrived at the borders of Europe. One of the ways in which the EU attempted to manage this influx was to gather information on these people – in particular through satellite surveillance and data collection on individuals on arrival.
The EU does acknowledge that biometric data raises privacy issues, but holds that satellite and drone data is not personally identifiable and so not an issue here. I will argue that the right to privacy does not require the presence of personally identifiable information.
As background there are two pieces of legislation. Eurosur is a regulation to gather and share satellite and drone data across Member States. Although the EU justifies this on the basis of helping refugees in distress, that isn't written into the regulation. Refugee and human rights organisations say that this surveillance is likely to enable the turning back of migrants before they enter EU waters.
If they do reach the EU, then according to Eurodac (2000) refugees must give fingerprints (if over 14 years old) and can only apply for asylum in one country. But in 2013 this regulation was updated so that fingerprints can also be used in law enforcement – that goes against EU human rights and data protection law. It is also demeaning and suggests that migrants are more likely to be criminal, something not backed up by evidence. It has also been proposed that photography and fingerprinting be extended to everyone over 6 years old. There are legitimate reasons offered for this… Refugees arrive in Southern Europe, where opportunities are not as good, so some have burned off their fingerprints to avoid being registered there; so some of these measures are attempts to register migrants, and to avoid losing track of children once they are in the EU.
The EU does not dispute that biometric data is private data. But with Eurodac and Eurosur it argues the right to data protection does not apply – they monitor boats, not individuals. But I argue that the right to private life is jeopardised here, through prejudice, reachability and classifiability… The bigger issue may actually be the lack of personal data being collected… The EU should approach boats and identify those with asylum claims, and manage others differently, but that is not what is done.
So, how is big data relevant? Well big data can turn non personally identifiable information into PII through aggregation and combination. And classifying individuals also has implications for the design of Data Protection Laws. Data protection is a procedural right, but privacy is a substantive right, less dependent on personally identifiable information. Ultimately the right to privacy protects the person, rather than the integrity of the data.
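Note: to make the aggregation point concrete – data with no names in it can become identifying once combined with other sources. A minimal sketch on entirely synthetic data, linking an "anonymous" dataset to a named one via shared quasi-identifiers:

```python
import pandas as pd

# Dataset A: "non-personal" movement records (no names)
movements = pd.DataFrame({
    "vessel_id": ["v1", "v2", "v3"],
    "departure_port": ["Izmir", "Bodrum", "Izmir"],
    "arrival_date": ["2015-09-01", "2015-09-02", "2015-09-03"],
})

# Dataset B: registration records collected on arrival (names included)
registrations = pd.DataFrame({
    "name": ["A.", "B.", "C."],
    "departure_port": ["Izmir", "Bodrum", "Izmir"],
    "arrival_date": ["2015-09-01", "2015-09-02", "2015-09-03"],
})

# Joining on shared quasi-identifiers links the "non-personal" records to people
linked = movements.merge(registrations, on=["departure_port", "arrival_date"])
print(linked)
```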
Q&A
Q1) In your research have you encountered any examples of when policy makers have engaged with research here?
A1 – Paula) I have not conducted any on the ground interviews or ethnographic work with policy makers but I would suggest that the increasing focus on national security is driving this activity, whereas data protection is shrinking in priority.
A1 – Jockum) It's fairly clear that the Council of the European Union engaged with digital rights groups, and that the Commission did too. But then for every one of those groups, there are 10 lobby groups. So you have Privacy International and European Digital Rights who have some traction at European level, but little traction at national level. My understanding is that researchers weren't significantly consulted, though there was a position paper from a research group at Oxford, submitted by lawyers, but their interest was more aligned with national rather than digital rights issues.
Q2) You talked about the ? being embedded in the new legislation… You talk about information and big data… But is there any hope? We’ve negotiated for 4 years, won’t be in force until 2018…
A2 – Paula) I totally agree… You spend years trying to come up with a framework, but it all rests on PII… And so how do we create data protection law that respects personal privacy without being dependent on PII? Maybe the question is not about privacy but about profiles and discrimination.
A2 – Jockum) I looked at all the different sectors to look at the surveillance logic, to understand how surveillance relates to regulation. Data protection regulation is inherently problematic as it has opposing goals – to protect individuals and to enable the sharing of data… So, in that sense, a surveillance logic is informing this here.
Q3) Could you outline again the threats here beyond PII?
A3 – Paula) Refugees who are aware of these issues don't take their phones – which reduces the chance of identification but also stops potential help calls and rescues. But the risk is also about profiling… High-ranking job offers are more likely to be shown to men than women… Google thinks I am between 60 and 80 years old and Jewish – I'm neither, but that's who they think I am… And that's where the risk is here… profiling… e.g. transactions being blocked on the basis of such profiles.
Q4) Interesting mixture of papers here… Many people are concerned about social side of privacy… But know little of institutional privacy concerns. Some become more cynical… But how can we improve literacy… How can we influence people here about Data Protection laws, and privacy measures…
A4 – Eszter) It varies by context. In the US the concern is with government surveillance, in the EU it's more about corporate surveillance… You may need to target messages differently. A colleague and I wrote a paper on privacy apathy… There are issues of trust, but also work to do on skills. There are bigger conversations, not just with users, to be had. There are conversations to have generally with the population… Where do you infuse that, I don't know… How do you reach adults, I don't know.
A4 – Natascha) Not enough to strengthen awareness and rights… Skills are important here too… That you really need to ensure that skills are developed to adapt to policies and changes. Skills are key.
Q5) You talked about exclusion and registration… And I was wondering about exclusion from registration and exclusion within registration (e.g. the dead are not registered).
A5 – Paula) They collect how many are registered… But that can lead to threat inflation and very flawed data. In terms of data that is excluded there is a capacity issue… That may be the issue with deaths. The EU isn’t responsible for saving lives, but doesn’t want to be seen as responsible for those deaths either.
Q6) I wanted to come back to what you see as the problematic implications of the boat surveillance.
A6 – Paula) For many data collection is fine until something happens to you… But if you know it takes place it can have an impact on your behaviours… So there is work to be done to understand if refugees are aware of that surveillance. But the other issue here is about the use of drone surveillance to turn people back then that has clear impact on private lives, particularly as EU states have bilateral agreements with nations that have not all ratified refugee law – meaning turned back boats may result in significantly different rights and opportunities.
RT-07: IR (Chair: Victoria Nash)

The Politics of Internet Research: Reflecting on the challenges and responsibilities of policy engagement

Victoria Nash (University of Oxford, United Kingdom), Wolfgang Schulz (Hans-Bredow-Institut für Medienforschung, Germany), Juan-Carlos De Martin (Politecnico di Torino, Italy), Ivan Klimov, New Economic School, Russia (not attending), Bianca C. Reisdorf (representing Bill Dutton, Quello Center, Michigan State University), Kate Coyer, Central European University, Hungary (not attending)

Victoria: I am Vicky Nash and I have convened a round table of members of the international network of internet research centres.

Juan-Carlos: I am director of the Nexa Center for Internet and Society in Italy and we are mainly computer scientists like myself, and lawyers. We are ten years old.

Wolfgang: I am associated with two centres, in Humboldt primarily, and our interest is in governance and surveillance primarily. We are celebrating our fifth birthday this year. I also work with the Hans-Bredow-Institut, a traditional media institute, multidisciplinary, and we increasingly focus on the internet and internet studies as part of our work.

Bianca: I am representing Bill Dutton. I am Assistant Director of the Quello Center at Michigan State University. We were more focused on traditional media but have moved towards internet policy in the last few years as Bill moved to join us. There are three of us right now, but we are currently recruiting for a policy post-doc.

Victoria: Thanks for that, I should talk about the department I am representing… We are in a very traditional institution but our focus has explicitly always been involvement in policy and real world impact.

Victoria: So, over the last five or so years, it does feel like there are particular challenges arising now, especially working with politicians. And I was wondering if other types of researchers are facing those same challenges – is it about politics, or is it specific to internet studies. So, can I kick off and ask you to give me an example of a policy your centre has engaged in, how you were involved, and the experience of that.

Juan-Carlos: There are several examples. One with the regional government in our region of Italy. We were aware of data and participatory information issues in Europe. We reached out and asked if they were aware. We wanted to make them aware of opportunities to open up data, and build on OECD work, but we were also doing some research ourselves. Everybody agreed on the technical infrastructure and at the political level… We assisted them in creating the first open data portal in Italy, and one of the first in Europe. And that was great, it was satisfying at the time. Nothing was controversial, we were following a path in Europe… But with a change of regional government that portal has somewhat been neglected, so that is frustrating…

Victoria: What motivated that approach you made?

JC: We had a chance to do something new and exciting. We had the know-how and the way it could be, at least in Italy, and that seemed like a great opportunity.

Wolfgang: For my centres – I'm kind of an outsider in political governance as I'm concerned with media. But in internet governance it feels like this is our space and we are invested in how it is governed – more so than in other areas. The example I have is from more traditional media work… And that's from the Hans-Bredow-Institute. We were asked to investigate, for a report, how changes in usage patterns and technology put strain on governance structures in Germany… and where there is a need for solutions to make federal and state law in Germany more convergent and able to cope with those changes. But you have to be careful when providing options, because of course you can make some options more appealing than others… So you have to be clear about whether you will be neutral and present the options that way, or whether you prefer an option and present it differently. And that's interesting and challenging as an academic, given the role of an academic and institution.

Victoria: So did you consciously present options you did not support?

Wolfgang: Yes, we did. And there were two reasons for this… They were convinced we would come up with a suggestion and a basis to start working from… And they accepted that we would not be specifically taking a side – for the federal or local government. And they were also confident we wouldn't attempt to mess up the system… We didn't present the ideal, but we understood the other dependencies and factors, and they trusted us to only put in suggestions that would enhance the system and practically work, not replace the whole thing…

Victoria: And did they use your options?

Wolfgang: They ignored some suggestions, but where they acted they did take our options.

Bianca: I'll talk about a semi-successful project. We were looking at detailed postcode-level data on internet access and quality and the reasons behind it. We submitted to the National Science Foundation, it was rejected, then two weeks later we were invited to an event on just that topic by the NPIA. So now we are collectively drafting suggestions with the NPIA and a wide range of research centres. It was nice to be invited by policy makers… and interesting to see that idea picked up through that process in some way…

Victoria: That’s maybe an unintended consequences aspect there… And that suggestion to work with others was right for you?

Bianca: We were already keen to work with other research centres but actually we also now have policy makers and other stakeholders around the table and that’s really useful.

Victoria: Those were all very positive… Maybe you could reflect on more problematic examples…

JC: Ministers often want to show that they are consulting on policy, but often that is a gesture, a political move to listen, while policy is then made in an entirely different way… After a while you get used to that. And then you have to calculate whether you participate or not – there is a time aspect there.

Victoria: And for conflict of interest reasons you pay those costs of participating…

JC: Absolutely, the costs are on you.

Wolfgang: We have had contact from ministries in Germany but then discovered they were interested in the process as a public relations tool rather than having a genuine interest in the outcome. So now we assess that interest and engage – or don't – accordingly. We try to say at the beginning "no, please speak to someone else" when needed. Humboldt is reluctant to engage in policy making, and that's a historical thing, but people expect us to get involved. We are one of the few places that can deliver monitoring on the internet, and there is an expectation to do that… And when ministries design new programmes, we are often asked to be engaged, and we have learned to be cautious about when we engage. Experience helps, but you see different ways to approach academia – it can be PR, sometimes they want support for their position or political support, or they can actually be engaged in research to learn and gain expertise and information. If you can see which approach it is, you can handle it appropriately.

Victoria: I think as a general piece of advice – to always question “why am I being approached” in the framing of “what are their motivations?”, that is very useful.

Wolfgang: I think starting from the research questions and programmes that you are concerned with gives you a counterpoint in your own thinking when dealing with requests. Then when good opportunities come up you can take them and make use of them… But the academic value of some approaches can be limited, so you need a good reason to engage in those projects and they have to align with your own priorities.

Bianca: My bad example is related to that. The Net Neutrality debate is a big part of our work… There are a lot of partisan opinions on that, and not a lot of neutral research there. We wanted to do a big project there, but when we tried to get funding for it we were steered to stay away. We've been told that talking about policy with policy makers is very negative, that it is taken poorly. This debate has been bouncing around for 10 years; we want to see whether, where net neutrality is imposed, we see changes in investment… But we need funding to do that… And funders don't want to do it and are usually very cosy with policy makers…

Victoria: This is absolutely an issue, these concerns are in the minds of policy makers as well and that’s important.

Wolfgang: When we talk about research in our field and policy makers, it's not just about when policy makers approach you to do something… When you have a term like Net Neutrality at the centre, that requires you to be either neutral or not neutral, and that really shapes how you handle it as an academic… You can become, without wanting it, someone promoting one side. On a minor protection issue we did some work on co-regulation with Australia that seemed to solve a problem… But then, after this debate, Germany started drafting the inter-state treaty on media regulation and the policy makers were interested… And then we felt that we should support it… and I entered the stage, but it's not my question anymore… So you end up having an opinion about how you want something done…

JC: As a coordinator of a European project, there was a call that included the topic of "Net Neutrality" – we made a proposal, but what happened afterwards clearly proved that that whole area was a politically loaded topic. It was in the call… But we should have framed it differently. Again, at European level you see the Commission fund research, you see the outcomes, and then they put out a call that entirely contradicts the work that they funded, for political reasons. There is such a drive for evidence-based policy making that it is important for them to frame it that way… But it is evidence-based when it fits their agenda, not when it doesn't.

Victoria: I did some work with the Department for Culture, Media and Sport last year, again on minor protection, and we were told at the outset to assume porn caused harm to minors. And the terms of reference were shaped to be technical – about access etc. They did bring in a range of academic expertise, but the terms of reference really constrained the contribution that was possible. So, there are real bear traps out there!

Wolfgang: A few years back the European Commission asked researchers to look at broadcasters and interruptions to broadcasts and the role of advertising; even though we need money, we do not do that kind of work – it isn't answering interesting research questions for us.

Victoria: I raised a question earlier about the specific stakes that academia has in the internet, it isn’t just what we study. Do you want to say more about that.

Wolfgang: Yes, at the pre-conference we had an STS stream… People said "of course we engage with policy" and I was wondering why that is the main position… But the internet comes from academia and there is a long-standing tradition of engagement in policy making. Academics do engage with media policy, but they wouldn't class it as "our domain" because they were not there as part of the beginning – whereas academia was part of the beginning of the internet.

Q&A

Q1) I wonder if you are mistaking that "of-ness" for the fact that the internet is still being formed, still in the making. Broadcast is established; the internet is in constant construction.

A1 – Wolfgang) I see that

Q1) I don’t know about Europe but in the US since the 1970s there have been deliberate efforts to reduce the power of decision makers and policy makers to work with researchers…

A1 – Bianca) The Federal Communications Commission is mainly made of economists…

Q1) Requirements and roles constrain activities. The assumption of evidence-based decisions is no longer there.

Q2) I think that there is also the issue of shifting governance. Internet governance is changing and so many academics are researching the governance of the internet, we reflect greatly on that. The internet and also the governance structure are still in the making.

Victoria: Do you feel like if you were sick of the process tomorrow, you’d still want to engage with policy making?

A2 – Phoebe) We are a publicly funded university and we are focused on digital inequalities… We feel real responsibility to get involved, to offer advice and opinions based on our advice. On other topics we’d feel less responsible, depending on the impact it would have. It is a public interest thing.

A2 – Wolfgang) When we look at our mission at the Hans-Bredow-Institute we have a vague and normative mission – we think a functioning public sphere is important for democracy… Our tradition is research into public spheres… We have a responsibility there. But there is also the issue that the evaluation of academic research becomes more and more important, yet there is no mechanism to ensure researchers answer the problems that society has… We have a completely divided set of research councils and their yardsticks are academic excellence. State broadcasters do research but with no peer review at all… There are some calls from the Ministry of Science that are problem-orientated, but on the whole there isn't that focus on social issues and relevance in the reward process, in the understanding of prestige.

Victoria: In the UK we have a bizarre dichotomy where research is measured against two measures: impact – where policy impact has real value, and that applies in all fields; but there is also regulation that you cannot use project funds to "lobby" government – which means you potentially cannot communicate research to politicians who disagree. This happened because a research organisation (not a university) opposed government policy with research funded by them… The implications for universities are currently unclear.

JC: Italy is implementing a similar system to the UK's. Often there is no actual mandate on a topic, so individuals come up with ideas without numbers and plans… We think there is a gap – but it is the government's and ministries' work. We are funded to work in the national interest… But we need resources to help there. We are filling gaps in a way that is not sustainable in the long term really – you are evaluated on other criteria.

Q3) I wanted to ask about policy research… I was wondering if there is policy research we do not want to engage in. In Europe, and elsewhere, there is increasing pressure to attract research funding… What are the guidelines or principles around what we do or do not go for, funding-wise?

A3 – Bianca) We are small so we go for what interests us… But we have an advisory board that guides us.

A3 – Wolfgang) I’m not sure that there are overarching guidelines – there may be for other types of special centres – but it’s an interesting thing to have a more formalised exchange like we have right now…

A3 – JC) No, no blockers for us.

A3 – Victoria) Academic freedom is vigorously held up at Oxford but that can mean we have radically different research agendas in the same centre.

Q4) With that lack of guidance, isn't there a need for academics to show that they can be trusted, especially in the public sphere, especially when getting funding from, say, Google or Microsoft? And how can you embed that trust?

A4 – Wolfgang) I think peer review as a system functions to support that trust. But we have to think about other institutional settings, and whether there is enough oversight… And many associations, like Leibniz, require an institutional review board to look over the research agenda and ensure some outside scrutiny. I wouldn't say every organisation or research centre needs that – it can be helpful but costly in terms of time in particular. And you cannot rely on the general public to do that, you need it to be peers. An interesting question though, especially as Humboldt has funding from Google… In this network academics play a role, and organisations play a role, and you have to understand the networks and relationships of the partners you work with, and their interests.

A4 – Bianca) That's a question that we've faced recently… There is a concern that corporate funding may sway results, and the best way to face that is to publish methodology, questionnaires, process… to ensure the work is understood in a context that enables trust in it.
A4 – JC) We spent years trying to deal with the issue of independence and it is very important as academia has responsibility to provide research that is independent and unbiased by funding etc. And not just about the work itself, but also perceptions of the work… It is quite a local/contextual issue. So, getting money from Google is perceived differently in different countries, and at different times…
Victoria: This is something we have to have more conversations about this. In medicine there is far more conversation about codes of conduct around funding. I am also concerned that PhD funding is now requiring something like a third of PhDs to be co-funded by industry, without any understanding from UK Government about what that means and what that means for peer review… That’s something we need to think about far more stringently.
Q5) For companies there are requirements to review outputs before publications to check for proprietary information and ensure it is not released. That makes industry the final arbiter here. In Canada our funding is also increasingly coming from industry and there that means that proprietary data gives them final say…
A5 – Bianca) Sometimes it has to be about negotiating contracts and being clear what is and is not acceptable.
Victoria) That’s my concern with new PhD funding models, and also with use of industry data. It will be non-negotiable that the research is not compromised but how you make that process clear is important.
Q6) What are your models here – are you academic or outside academia?
A6 – JC) Academic and policy are part of the work we are funded to do.
A6 – Bianca) We are 99% Endowment funded, hence having a lot of freedom but also advisory board guidance.
A6 – Wolfgang) Our success is assessed by academic publication. The Humboldt Institute is funded largely by private companies but a range of them, but also from grants. The Hans-Bredow-Institute is mainly directly funded by the Hamburg Ministry of Science but we’d like to be funded from other funders across Germany.
A6 – Victoria) Our income is research income, teaching income from masters degrees… We are a department of the university. Our projects are usually policy related, but not always government related.
Q7) I was wondering if others in the room have been funded for policy work – my experience has been that policy makers had expectations and an idea of how much control they wanted… By contrast money from Google comes with a “research something on the internet” type freedom. This is not what I would have expected so I just wondered how others experiences compared.
Comment) I was asked to do work across Europe with public sector broadcasters… I don’t know how well my report was seen by policy makers but it was well received by the public sector broadcaster organisations.
Comment) I’ve had public sector funding, foundation funding… But I’ve never had corporate money… My cynical take is that corporations maybe are doing this as PR, hence not minding what you work on!
Comment) I receive money from funding agencies, I did a joint project that I proposed to a think tank… Which was orientated to government… But a real push for impact… Numbers needed to be in the title. I had to be an objective researcher but present it the right way… And that worked with impact… And then the government offered me a contract to continue the research – working for them not against them. The funding was coming from a position close to my own idea… I felt it was a bit instrumentalised in this way…
A7 – Wolfgang) I think that it is hard to generalise… Companies as funders do sometimes make demands and expect control of publishing of results… And whether it is published or not. We don’t do that – our work is always public domain. It’s case by case… But there is one aspect we haven’t talked about and that is the relationship between the individual researcher and their political engagement (or not) and how that impacts upon the neutrality of the organisation. As a lawyer I’m very aware of that… For instance if giving expert evidence in court, the importance of being an individual not the organisation. Especially if partners/funders before or in the future are on the opposite side. I was an expert for Germany in a court case, with private broadcasters on the other side, and you have to be careful there…
A7 – JC) There is so little money for research in Italy… Regarding corporations… We got some money from Google to write an open source library, it’s out there, it’s public… There was no conflict there. But money from companies for policy work is really difficult. But lots of case by case issues in-between.
Q8) But companies often fund social science work that isn’t about policy but has impact on policy.
A8 – JC) We don’t do social science research so we don’t face that issue.
A8 – Victoria) Finding ways to make that work that guarantees independence is often the best way forward – you cannot and often do not want to say no… But you work with codes of conduct, with advisory board, with processes to ensure appropriate freedoms.
JC: A question to the audience… A controversial topic arises, one side owns the debate and a private company approaches to support your voice… Do you take their funding?
Comment) I was asked to do that and I kind of stalled so that I didn't have to refuse or take part, but in that case I didn't feel…
Comment) If having your voice in the public triggers the conversation, you do make it visible and participate, to progress the issue…
Comment) Maybe this comes down to personal versus institutional points of view. And I would need to talk to colleagues to help me make that decision, to decide if this would be important or not… Then I would say yes… Better solution is to say “no, I’m talking in a private capacity”.
JC) I think that the point of separating individual and centres here is important. Generally centres like ours do not take a position… And there is an added element that if a corporation wants to be involved, a track record of past behaviour makes it less troublesome. Saying something for 10 years gives you credibility in a way that suddenly engaging does not.
Wolfgang) In Germany it is general practice that if your arguments are not being heard, then you engage expertise – it is general practice in German legal academic practice. It is ok I think.
Comment) In the Bundestag they bring in experts… But of course the choice of expert reflects values and opinions made in articles. So you have a range of academics supporting politics… If I am invited to talk to parliament, I say what I always say “this is not a problem”.
Victoria: And I think that nicely reminds us why this is the politics of internet research! Thank you.
Plenary Panel: Who Rules the Internet? Kate Crawford (Microsoft Research NYC), Fieke Jansen (Tactical Tech), Carolin Gerlitz (University of Siegen) – Chair: Cornelius Puschmann
Jennifer Stromer-Galley, President of the Association of Internet Researchers: For those of you who are new to AoIR, this is our 17th conference and we are an international organisation that looks at issues around the internet – now including those things that have come out of the internet, such as mobile apps. In our panel today we will be focusing on governance issues. Before that I would like to acknowledge this marvellous city of Berlin, and to thank all of my colleagues in Germany who have taken such care, and Humboldt University for hosting us in this beautiful venue. And now, I'd like to hand over to Herr Matthias Graf von Kielmansegg, representing Professor Dr Elizabeth Wacker, Federal Minister of Labour and Social Affairs.
Matthias Graf von Kielmansegg: I am here representing Professor Wacker, who takes a great interest in internet and society, including the issues that you are looking at here this week. If you are not familiar with our digitisation policy, the German government published a digital agenda for the first time two years ago, covering all areas of government operation. In terms of activities it concentrates on the 2013-2017 term, and it will need to be extended; it reaches strategically far into the next decade. Additionally we have a regular summit bringing together the private sector, unions, government and the academic world to look at key issues.
You all know that digital is a fundamental gamechanger, in the way goods and services are used, in the ways we communicate and collaborate, and digital loosens our ties to time and place… And we aren't at the end but in the middle of this process. Wikipedia was founded 16 years ago, the iPhone launched 9 years ago, and now we talk about Blockchain… So we do not know where we will be in 10 or 20 years' time. And good education and research are key to that. And we need to engage proactively. In Germany we are incorporating the Internet of Things into our industries. We used to have a technology-driven view of these things, but now we look at economic and cultural contexts or ecosystems to understand digital systems.
Research is one driver; the other is that science, education, and research are users in their own right. Let me focus first on education… Here we must answer some major questions – what will drive change here, technology or pedagogy? Who will be the change agents? And what of the role of teachers and schools? They must take the lead in change and secure the dominance of pedagogy, using digital tools to support our key education goals – and not vice versa. And that means digital education must offer more opportunities, more flexibility, and better preparation for tomorrow's world of work. With this in mind we plan to launch a digital education campaign to help young people find their place in an ever-changing digital world, and to be ready to adapt to the changes that arise. There is also the question of how education can support our economic model and higher education. And we will need to address issues of technical infrastructure and governance – and, for us, how this plays out with our 16 federal states. Closer to your world is the world of science. Digital tools create huge amounts of new data and big data. The challenge organisations face is not just infrastructure but how to access and use this data. We call our approach Securing the Life Cycle of Data, concerned with access, use, reuse, interoperability. And how will we decide what we save, and what we delete? And who will decide how third parties use this data? And big data goes alongside other aspects such as high powered computing. We plan to launch an initiative of action in this area next year. To oversee this we have a Scientific Oversight Body with stakeholders. We are also keen to embrace Open Data and the resources to support that. We have added new conditions to our own funding conditions – any publication based on research funded by us must be published open access.
More about internet and society needs to be known, and there is research to be done. So, the federal government has decided to establish a German Internet Institute. It will address a number of areas of importance: access and use of the digital world; work and value creation; and our democracy. We want an interdisciplinary team of social scientists, economists, and information scientists. The competitive selection process is just underway, and we expect the winner to be announced next spring. There is readiness to spend up to €15M over the first five years. And this highlights the importance of the digital world in Germany.
Let me just make one comment. The overall title of this conference is Internet Rules! It is still up to us to be the fool or the wise… We need to understand what might happen if politics, economics and society do not find the answers to the challenges we face. And so hopefully we will find that it's not the internet that rules, but that democracy rules!
Kate Crawford
When Cornelius asked me to look at the idea of “Who rules the internet?” I looked up at my bookshelf, and found lots of books written by people in this community, many of you in this room, looking at just this question. And we have moved from the ’90s utopianism to the world of infrastructure, socio-technical aspects, the Internet of Things layer – and zombie web cams being coopted by hackers. So many of you have enhanced my understanding of this issue.
Right now we see machine learning and AI being rapidly built into our world without the implications being fully understood… I am talking narrowly about AI here… Sometimes these systems have lovely feminine names: Siri, Alexa, etc… But they are embedded in our phones, and we have AI analysing images on Facebook. It will never be separate from humans, but it is distinct and significant, and we see AI moving beyond the internet and into systems that decide who gets released from jail, how long hospital stays last, etc. I am sure all of us were surprised by the fact that Facebook, last month, censored a Pulitzer Prize winning image of a girl being napalmed in Vietnam… We don't know the processes that triggered this, though an image of a nude girl likely triggers these processes… Once that had attention, the Government of Norway accused Facebook of erasing our shared history. The image was restored, but this is the tip of the iceberg – most images and actions are not so apparent to us…
This lack of visibility is important but it isn’t new… There are many organisational and procedural aspects that are opaque… I think we are having a moment around AI where we don’t know what is taking place… So what do we do?
We could make them transparent… But this doesn't seem likely to work. A colleague and I have written about the history of transparency, and how process and the availability of code do not necessarily tell you exactly what is happening and how it is used. Y Combinator has installed a system, brilliantly called HAL 9000, and has boasted that they don't know how it filters applications – only the system could do that. That's fine until that system causes issues, denies you rights, gets in your way…
So we need to understand these algorithms from the outside… We have to poke them… And I think of Christian Sandvig(?)'s work on algorithmic auditing. Christian couldn't be here this evening and my thoughts are with him. But he is also part of a group who are trying to pursue legal rights to enable this type of research.
And there are people that say that AI can fix this system… This is something that the finance sector talks about. They have an environment of predatory machine learning systems hunting each other – Terry Cary has written about this. It's tempting to create a "police AI" to watch these… I've been going back to the 1970s books on AI, and the work of Joseph Weizenbaum who created ELIZA. He suggested that if we continue to ascribe human qualities to these systems it might be a slow-acting poison. It is a reminder not to be seduced by these new forms of AI.
Carolin Gerlitz, University of Siegen
I think, after the last few days, the answer to the question of "who rules the internet?" is "platforms"!
Their rules about who users are and what they can do can seem very rigid. Before Facebook introduced the emotion Reactions, the Like button was used in a range of ways. With the introduction of Reactions they have rigidly defined responses, creating discrete data points that are advertiser-ready and available to be recombined.
There are also rules around programmability, that dictate what data can be extracted, how, by whom, in what ways… And platforms also like to keep the interpretation of data in control, and adjust the rules of APIs. Some of you have been working to extract data from platforms where things are changing rapidly – Twitter API changes, Facebook API and Research changes, Instagram API changes, all increasingly restricting access, all dictating who can participate. And limiting the opportunity to hold platforms to account, as my colleague Anne Helmond argues.
Increasingly platforms are accessed indirectly through intermediaries which create their own rules – a cascade of rules for users to engage with. Platform rules don't just apply on the platforms themselves but also extend to apps… as many of you have been writing about in regard to platforms and apps… And Christian, if he were here today, would talk about the increasing role of platforms in this way…
And platforms reach out not only to users but also to non-users. And these spaces are also contextual – with place, temporality and the role of commercial content all important here.
These rules can be characterised in different ways… There is a dichotomy of openness and closedness. Much of what takes place is hidden and dictated by cascading rule sets. And then there is the issue of evaluation – what counts, for whom, and in what way? Taylorism refers to the mass production of small tasks – and platforms work in this fine-grained, algorithmic way. But platforms don't just earn money from users' repetitive actions… or from the use of platform data by third parties. They "put life to work" (Lazzarato?) by using data points, raising questions of who counts and what counts.
Fieke Jansen, Tactical Tech
I work at an NGO, on the ground in real world scenarios. And we are concerned with the Big Five: Apple, Amazon, Google, Microsoft and Facebook. How did we get here? The people we work with are uncomfortable with this. When we ask activists to draw the internet, they mostly draw a cloud. We asked at a session "what happens if the government bans Facebook?" and they cannot imagine it – and if Facebook is beyond government, then where are we at here? And I work with an open source company who use Google Apps for Business – and that seems like an odd situation to me…
But I'll leave the Big Five for now and turn to Bitnik… They used their Random Darknet Shopper, which bought random stuff for $50… and then placed the items in a gallery… They did…
Another example: ICWatch… After Wikileaks, an activist in Berlin looked at the NSA and other services doing this spying and worked out who was working for the secret services… But that triggered a real debate… There was real discussion of it being anti-patriotic, of putting people at risk… But the data he used, from LinkedIn, is sold every day… He just used it in a way that raised debate. We allow that commercial use… But this coder's work was not allowed… Isn't that debate needed?
So, back to the Big Five. In 2014 Google (now Alphabet) was the second biggest company in the world – worth more than the GDP of Austria. We choose to use many of their services every day… But many of their services are less in our face. In the world of sensors we have fewer choices about data… And with the big companies it is political too… In Brussels you have to register lobbyists – there are 9 for Google, 7 of whom used to work for the European Parliament… There is a revolving door here.
There is also an issue of skill… Google has wealth and power and knowledge that are very hard to counter. Facebook has around 400m active users a month and 300m likes a day; it is worth around $190bn… And here we miss the political influence. They have an enormous drive to conquer the global south… They want to roll out Facebook Zero as "the internet"…
So, who rules the internet? It’s the 1% of the 1%… It is the Big Five, but also the venture capitalists who back them… Sequoia and Kleiner Perkins Caufield & Byers, and you have Peter Thiel… It is very few people behind many of the biggest companies including some of the Big Five…
People use these services because they work well, work easily… I only use open source… Yes, it is harder… Why are so few questioning and critiquing that? We feed the beast on an everyday basis… It is our universities – also moving to Big Five platforms in preference to their own – it is our government… and if we are not critical, what happens?
Panel Discussion
Cornelius: Many here study internet governance… So I want to ask, Kate, does AI rule the internet?
Kate: I think it is really hard to think about who rules the internet. The interesting thing is that automated decision-making networks have been with us for a while… It's less about ruling, and about who… and more about the entanglements, fragmentation and governance. We talk about the Big Five… I would probably say there are seven companies here, deciding how we get into university, healthcare, housing – filtering far beyond the internet… And governments do have a role to play.
Cornelius: How do we govern what we don’t understand?
Kate: That's a hard question… That question keeps me up at night… Governments look to us academics, to the technology sector, to NGOs, trying to work out what to do. We need really strong research groups to look at this – we have tried to do this with AI Now. Interdisciplinarity is crucial – these issues cannot be solved by computer science alone or social science alone… This is the biggest challenge of the next 50 years.
Cornelius: What about how national governments can legislate for Facebook, say? (I’m simplifying a longer question that I didn’t catch in time here, correction welcome!)
Carolin: I'm not sure about Facebook, but in our digital methods workshop we talked about how on Twitter content can be deleted, yet can then be exposed in other locations via the API. And it is also the case that these services are specific and localised… We expect national governments to have some governance, when what you understand and how you access information varies by location… increasing that uncanny notion. I also wanted to comment on something you asked Kate – thinking about the actors here, they all require the engagement of users – something Fieke pointed to. Those actors involved in ruling are dependent on the actions of other actors.
Cornelius: So how else might we run these things? The Chinese option, the Russian option – are there better options?
Carolin: I think I cannot answer that – I'd want to put it to these 570 smart people for the next two days. My answer would be to acknowledge the distributedness to which we have to respond and react… We cannot understand algorithms and AI without understanding context…
Carolin: Fieke, what you talked about… To be extreme… Are we whining because as Europeans we are being colonised by other areas of the world, even as we use and are obsessed by our devices and tools – complaining and then checking our iPhones? I'm serious… If we did care that much, maybe actions would change… You said people have the power here; maybe it's not a big enough issue…
Fieke: Is it Europeans concerned about Americans from a libertarian point of view? Yes. I work mainly in non-European parts of the world and particularly in North America… For many the internet is seen as magical and neutral – but those of us who research it know it is not. But when you ask why people use tools, it's because of their friends or community. If you ask them who owns it, that raises questions that are framed in a relevant way. The framing has to fit people's reality. In South America, if you talk of Facebook Zero as the new colonialism, you will have a political conversation… But we also don't always know why we are uncomfortable… It can feel abstract, distant, and the concern is momentary. Outside of this field, people don't think about it.
Kate: Your provocation is that we could just step away and move to open source. But the reality includes opportunity costs to employment, to friends and family… And even if you do none of those things, you still walk down the street and are tracked by sensors, by other devices…
Fieke: I absolutely agree. All the data collected beyond our control is the concern… But we can’t just roll over and die, we have to try and provoke and find mechanisms to play…
Kate: I think that idea of what the political levers may be… Those conversation of legal, ethical, technical parameters seem crucial, more than consumer choice. But I don’t think we have sufficient collective models of changing information ecologies… and they are changing so rapidly.
Q&A
Q1) Thank you for this wonderful talk and the perspectives here. You talked about the infrastructure layer… What about that question? You say this 1% of 1% own the internet, but do they own the infrastructure? Facebook is trying to balloon in the internet so that they cannot be cut off… It also – second question – used to be that YOU owned the internet, and that changed with the dominance of big companies… This happens in history quite often… So what about that?
A1 – Fieke) I think that Kate talked about the many levels of ownership… Facebook piggybacks on other infrastructures, Google does the balloons. It used to be that governments owned the infrastructure. There are new cables rolling out… EU funding, governments, private companies, rich people… The infrastructure is mainly owned by companies now.
A1 – Kate) I think infrastructure studies has been extraordinarily rich – the work of Nicole Serafichi for instance – but we also have art responses. Infrastructure is very of the moment… But what happens next… It is not just about infrastructures and their ownership, but also about surveillance access to these. There are things like mesh networks… And there are people working here in Berlin to flag up fake police networks during protests to help protestors protect themselves.
A1 – Carolyn) I think that platforms would have argued differently ten years ago about who owned the internet – but “you” probably wouldn’t have been the answer…
Q2) I wonder if the real issue is that we are running on very vague ideas of government that have been established for a very different world. People are responding to elections and referenda in very irrational ways that suggest that model is not fit for purpose. Is there a better form of governance or democracy that we should move towards? Can AI help us there?
A2 – Kate) What a beautiful and impossible to answer question! Obviously I cannot answer that properly but part of the reason I do AI research is to try to inform and shape that… Hence my passion for building research in this space. We don’t have much data to go on but the imaginative space here has been dominated by those with narrow ideas. I want to think about how communities can develop and contribute to AI, and what potential there is.
Q3) Do we need to rethink what we mean by democratic control and regulations… Regulations are closely associated with nation states, but that’s not the context in which most of the internet operates. Do we need to re-engage with the question of globalisation again?
A3) As Carolyn said, who is the “you” in web 2.0, and whose narrative is there. Globalisation is similar. I pay taxes to a nation state that has rules of law and governance… By denying that they buy into the narrative of mainly internet companies and huge multinational organisations.
Cornelius: I have the Declaration of the Independence of Cyberspace by John Perry Barlow, which I was tempted to quote to you… But it is interesting to reflect on how we have moved from those utopian positions to where we are today.
Q4 – participant from Google!) There is an interesting question here… If this question points to a deeper truth… A clear ruler, a single internet, would allow this question of who rules to be answered. I would rather ask how we have agency over the proliferation of internet technologies and how we benefit from them…?
A4 – Kate) A great title, but long for the programme! But your phrasing is so interesting – if it is so diverse and complex then how we engage is crucial. I think that is important but, the optimistic part, I think we can do this.
A4 – Carolyn) One way to engage is through dissent… and negotiating on a level that ensures platforms work beyond economic values…
Q5) The last time I was forced to give away my data was by the Australian state (where I live) in completing the census… I had to complete it or I would be fined over $1000 AUS – Facebook, Twitter, etc. never did that… I rule this kind of internet, I am still free in my choices. But on the other hand why is it that states that are best at governing platforms are the ones I want to live in the least. Maybe without the platforms no-one would use the internet so we’d have one problem less… If we as academics think about platforms in these mythic ways, maybe we end up governing in a way that is more controlled and has undesirable effects.
A5 – Kate) Many questions there, I’ll address two of those. On the census I’d refer you to articles on that… A University of Cambridge study showed huge accuracy in determining marital status, sexuality and whether someone is a drug or alcohol user, based on Facebook likes… You may feel free but those data patterns are being built. But we have to move beyond thinking that only by active participation do you contribute to these platforms…
A5 – Fieke) The census issue you brought up is interesting… In the UK, US and Australia the census is conducted by a contractor that is one of the world’s biggest arms manufacturers… You don’t give data to the Big Five… But… So, we do need to question the politics behind our actions… There is also a perception that having technical skills makes you superior to those without, and if we go down that route we create a whole new class system, and that raises whole new questions.
Q6) The question of the internet raises issues of boundaries, and of how we do governance and rule-making. Ideally when we do that governance and rule-making there are values behind it… So what are the values that you think need to underlie those structures and systems?
A6 – Carolyn) I think values that ensure people are not discriminated against through algorithmic processing, AI, etc. Those tools should allow people not to be discriminated against on the basis of things they have done in the past… But that requires an understanding of how that discrimination is taking place now…
A6 – Kate) I love that question… All of these layers of control come with values baked in, we just don’t know what they are… I would be interested to see what values drop out of those systems – the ones that don’t fit the easy metricisation of our world. Some great things could fall out of feminist and race theory and the values that come from those…
A6 – Fieke) I would add that values should not just be about the individual, and should ensure that the collective is also considered…
Cornelius: Thank you for offering a glimmer of hope! Thank you all!
Oct 052016
 

If you’ve been following my blog today you will know that I’m in Berlin for the Association of Internet Researchers AoIR 2016 (#aoir2016) Conference, at Humboldt University. As this first day has mainly been about workshops – and I’ve been in a full day long Digital Methods workshop – we do have our first conference keynote this evening. And as it looks a bit different to my workshop blog, I thought a new post was in order.

As usual, this is a live blog post so corrections, comments, etc. are all welcomed. This session is also being videoed so you will probably want to refer to that once it becomes available as the authoritative record of the session. 

Keynote: The Platform Society – José van Dijck (University of Amsterdam) with Session Chair: Jennifer Stromer-Galley

We are having an introduction from Wolfgang (?) from Humboldt University, welcoming us and noting that AoIR 2016 has made the front page of a Berlin newspaper today! He also notes the hunger for internet governance information, understanding, etc. from German government and from Europe.

Wolfgang: The theme of “Internet Rules!” provides lots of opportunities for keynotes, discussions, etc. and it allows us to connect the ideas of internet and society without deterministic structures. I will now hand over to the session chair Cornelius Puschmann.

Cornelius: It falls to me to do the logistical stuff… But first we have 570 people registered for AoIR 2016  so we have a really big conference. And now the boring details… which I won’t blog in detail here, other than to note the hashtag list:

  • Official: #aoir2016
  • Rebel: #aoir16
  • Retro: #ir17
  • Tim Highfield: #itistheseventeenthassociationofinternetresearchersconferenceanditishappeningin2016

And with that, and a reminder of some of the more experimental parts of the programme to come, we are handed over to Jennifer.

Jennifer: Huge thanks to all of my colleagues here for turning this crazy idea into this huge event with a record number of attendees! Thank you to Cornelius, our programme chair.

Now to introduce our speaker… José van Dijck is professor at the University of Amsterdam and has held visiting positions across the world. She is the first woman to hold the Presidency of the Royal Netherlands Academy of Arts and Sciences. Her most recent book is The Culture of Connectivity: A Critical History of Social Media. It takes a critical look back at social media and social networking, not only as social spaces but as business spaces. And her lecture tonight will give a preview of her forthcoming work on public values in a platform society.

Jose: It is lovely to be here, particularly on this rather strange day…. I became President of the Royal Academy this year and today my colleague won the Nobel Prize in Chemistry – so instead of preparing for my keynote today I was dealing with press inquiries, so it is nice to focus back on my real job…

Jose: So a few years ago Thomas Poell wrote an article on the politics of social platforms. His work on platforms inspired my work on networked platforms being interwoven into an ecology, economically and socially. Since I wrote that book – the last chapter of which is on platforms – many of those platforms have become the main players… I talked about Google (now Alphabet), Facebook, Amazon, Microsoft, LinkedIn (now owned by Microsoft), Apple… And since then we’ve seen other players coming in and creating change – like Uber, AirBnB, Coursera. These platforms have become the gateways to our social life… And they have consolidated and expanded…

So a Platform is an online site that deploys automated technologies and business models to organise data streams, economic interactions, and social exchanges between users of the internet. That’s the core of the social theory I am using. Platforms ARE NOT simple facilitators, and they are not stand alone systems – they are interconnected.

And a Platform Ecosystem is an assemblage of networked platforms, governed by its own dynamics and operating on a set of mechanisms…

Now a couple of years ago Thomas and I wrote about platform mechanisms and the very important idea of “Datafication”. Commodification is about how a platform’s business model and governance define the way in which datafied information is transformed into (economic, societal) value. There are many business models and many governance models – they vary, but governance models are maybe more important than business models, and they can be hard to pin down. Selection is about data flows filtered by algorithms and bots, allowing for automated selection such as personalisation, rankings, reputation. Those mechanisms are not visible right now, and we need to make them explicit so that we can talk about them and their implications. Can we hold Facebook accountable for the Newsfeed in the ways that traditional media are accountable? That’s an important question for us to consider…

The platform ecosystem is not a level playing field. Platforms are gaining traction not through money but through the number of users, and network effects mean that user numbers are the way we understand the size of the network. There is platformisation (thanks Anna?) across sectors… And that power is gained through cross-ownership and cross-platform operation, but also through shared architecture and shared platforms. In our book we’ll cover both private and public sectors and how they are penetrated by platform ecosystems. We used to have big oil companies, or big manufacturing companies… But now big companies operate across sectors.

So transport for instance… Uber is huge, partly financed by Google and also in competition with Google. If we look at news as a sector we have the Huffington Post, Buzzfeed, etc.; they are also used as content distributors and aggregators by Google, Facebook, etc.

In health – a sector where platformisation is proliferating fast – we see fitness and health apps, with Google and Apple major players here. And in your neighbourhood there are apps available; some of these are global apps localised to your neighbourhood, sitting alongside massive players.

In Education we’ve seen the rise of Massive Online Open Courses, with Microsoft and Google investing heavily alongside players like EdX, Coursera, Udacity, FutureLearn, etc.

All of these sectors are undergoing platformisation… And if you look across them all, across all areas of private and public life, the activity revolves around the big five: Google, Facebook, Apple, Amazon and Microsoft, with LinkedIn and Twitter also important. And take, for example, AirBnB…

The platform society is a society in which social, economic and interpersonal traffic is largely channelled by an (overwhelmingly corporate) global online platform ecosystem that is driven by algorithms and fuelled by data. That’s not a revolution, it’s something we are part of and see every day.

Now we have had promises of “participatory culture” and the euphoria of the idea of web 2.0, of individuals contributing. More recently that idea has shifted to the idea of the “sharing economy”… But sharing has shifted in its meaning too. It is about sharing resources or services for some sort of fee – a transaction-based idea. And from 2015 we see awareness of the negative sides of the sharing economy. So a Feb 2015 Time cover read: “Strangers crashed my car, ate my food and wore my pants. Tales from the sharing economy” – about the personal discomfort of the downsides. And we see Technology Quarterly writing about “When it’s not so good to share” – from the perspective of securing the property we share. But there is more at stake than personal discomfort…

We have started to see disruptive protest against private platforms, like posters against AirBnB. City councils have to hire more inspectors to regulate AirBnB hosts for safety reasons – a huge debate in Amsterdam now – and public values are changing as a consequence of so many AirBnB hosts in this city. And there are more protests about changing values… Saying people are citizens not entrepreneurs, that the city is not for sale…

In another sector we see Uber protests, by various stakeholders. We see these from licensed taxi drivers, accusing Uber of undermining safety and social values; but also protests by drivers. Uber do not call themselves a “transportation” company, instead calling themselves a connectivity company. Now Uber drivers have complained that Uber don’t pay insurance or pensions…

So, AirBnB and Uber are changing public values, they haven’t anchored existing values in their own design and development. There are platform promises and paradoxes here… They offer personalised services whilst contributing to the public good… The idea is that they are better at providing services than existing players. They promote community and connectedness whilst bypassing cumbersome institutions – based on the idea that we can do without big government or institutions, and without those values. These platforms also emphasize public values, whilst obscuring private gain. These are promises claiming that they are in the public interest… But that’s a paradox with hidden private gains.

And so how do we anchor collective, public values in a platform society, and how do we govern this? ? has the idea of governance of platforms as opposed to governance by platforms. Our governments are mainly concerned with governing platforms – regulation, privacy, etc. – and that is appropriate, but there are public values like fairness, accuracy, safety, privacy, transparency, democracy… Those values are increasingly being governed by platforms, and that governance is hidden from us in the algorithms and design decisions…

Who rules the platform society? Who are the stakeholders here? There are many platform societies of course, but who can be held accountable? Well it is an intense ideological battleground… With private stakeholders like (global) corporations, businesses, (micro-)entrepreneurs; consumer groups; consumers. And public stakeholders like citizens; co-ops and collectives, NGOs, public institutions, governments, supra-national bodies… And matching those needs up is never going to happen really…

Who uses health apps here? (Many do.) In 2015 there were 165,000 health apps in the Google Play store. Most of them promise personalised health and, whilst that is in the future, they track data… They take data right from the individual to companies, bypassing other actors and health providers… They manage a wide variety of data flows (patients, doctors, companies). There is a variety of business models, many of them particularly unclear. There is a site called “Patients Like Me” which says that it is “not just for profit” – so it is for profit, but not just for profit… Data has become currency in our health economy. And that private gain is hiding behind the public good argument. A few months ago in Holland we started to have insurance discounts (5%) if you send in your FitBit scores… But I think the next step will be paying more if you do not send your scores… That’s how public values change…

Finally we have regulation – government should be regulating security, safety, accuracy, and privacy. It takes the Dutch equivalent of the FDA 6 months to check the safety and accuracy of one app – and if it is updated, you have to start again! In the US the Dept of Health and Human Services, Office of the National Coordinator for Health Information Technology (ONC), Office for Civil Rights (OCR) and Food and Drug Administration (FDA) released a guide called “Developing a mobile health app?” providing guidance on which federal laws need to be followed. And we see not just insurers using apps, but insurers and healthcare providers having to buy data services from providers, and that changes the impact of these apps. You have things like 23andMe, and those are global – which raises global regulation issues – so it is hard to govern around that issue. But our platform ecosystem is transnational, and governments are national. We also see platforms coming from technology companies – Philips was building physical kit, MRI machines, but it now models itself as a data company. What you see here is that the big five internet and technology players are also big players in this field – Google Health and 23andMe (financed by Sergey Brin, run by his ex-wife), Apple HealthKit, etc. And even then you have small independent apps like mPower, but they are distributed via the app stores, led by big players and, again, hard to govern.

 

We used to build trust in society through institutions and institutional norms and codes, which were subject to democratic controls. But these are increasingly bypassed… And that may be subtle but it is going uncontrolled. So, how can we build trust in a platformed world? Well, we have to understand who rules the platform ecosystem, and understand how it is governed. And when you look at this globally you see competing ideological hemispheres… You see the US model of commercial values, and those are literally imposed on others. And you have Yandex and the Chinese model, and that’s an interesting model…

I think coming back to my main question: what do we do here to help? We can make visible how this platformised society works… So I did a presentation a few weeks ago and shared recommendations there for users:

  • Require transparency in platforms
  • Do not trade convenience for public values
  • Be vigilant, be informed

But can you expect individuals to understand how each app works and what its implications are? I think governments have a key role in protecting citizens’ rights here.

In terms of owners and developers my recommendations are:

  • Put long-term trust over short-term gain
  • Be transparent about data flows, business models, and governance structure
  • Help encode public values in platform architecture (e.g. privacy by design)

A few weeks back the New York Times ran an article on holding algorithms accountable, and I think that that is a useful idea.

I think my biggest recommendations are for governments, and they are:

  • Defend public values and the common good; negotiate public interests with platforms. What governments could also do is, for instance, legislate to manage demands and needs in how platforms work.
  • Upgrade regulatory institutions to deal with the digital constellations we are facing.
  • Develop (inter)national blueprint for a democratic platform society.

And we, as researchers, can help expose and explain the platform society so that it is understood and engaged with in a more knowledgeable way. Governments have a special responsibility to govern the networked society – right now it is a Wild West. We are struggling to resolve these issues, so how can we help govern the platforms that shape society, when the platforms themselves are so enormous and powerful? In Europe we see platforms that are mainly US-based private sector spaces, and they are threatening public sector organisations… It is important to think about how we build trust in that platform society…

Q&A

Q1) You talked about private interests being concealed by public values, but you didn’t talk about private interests of incumbents…

A1) That is important of course. Those protests that I mentioned do raise some of those issues – undercutting prices by not paying for insurance, pensions etc. of taxi drivers. In Europe those costs can be up to 50% of the total cost, so what do we do with those public values, how do we pay for this? We’ll pay for it one way or the other. The incumbents do have their own vested interests… But there are also social values there… If we want to retain those values though we need to find a model for that… European economic models have had collective values inscribed in them… If that is outmoded, then fine, but how do we build those values in in other ways…

Q2) I think in my context in Australia at least the Government is in cahoots with private companies, with public-private partnerships and security arms of government heavily benefitting from data collection and surveillance… I think that government regulating these platforms is possible, I’m not sure that they will.

A2) A lot of governments are heavily invested in private industries… I am not anti-companies or anti-government… My first goal is to make them aware of how this works… I am always surprised how little governments are aware of what runs underneath the promises and paradoxes… There is reluctance from regulators to work with companies, but there is also exhaustion and a lack of understanding about how to update regulations and processes. How can you update health regulations with 165k health apps out there? I probably am an optimist… But I want to ensure governments are aware and understand how this is transforming society. There is so much ignorance in the field, and there is naivety about how this will play out. Yes, I’m an optimist. But also, there is something we can do to shape the direction in which the platform society will develop.

Q3) You have great faith in regulation, but there are real challenges and issues… There are many cases where governments have colluded with industry to inflate the costs of delivery. There is the idea of regulatory capture. Why should we expect regulators to act in the public interest when historically they have acted in the interest of private companies?

A3) It’s not that I put all my trust there… But I’m looking for a dialogue with whoever is involved in this space, in the contested play of where we start… It is one of many actors in this whole contested battlefield. I don’t think we have the answers, but it is our job to explain the underlying mechanisms… And I’m pretty shocked by how little they know about the platforms and the underlying mechanisms there. Sometimes it’s hard to know where to start… But you have to make a start somewhere…

Oct 05 2016
 

After a few weeks of leave I’m now back and spending most of this week at the Association of Internet Researchers (AoIR) Conference 2016. I’m hugely excited to be here as the programme looks excellent with a really wide range of internet research being presented and discussed. I’ll be liveblogging throughout the week starting with today’s workshops.

This is a liveblog so all corrections, updates, links, etc. are very much welcomed – just leave me a comment, drop me an email or similar to flag them up!

I am booked into the Digital Methods in Internet Research: A Sampling Menu workshop, although I may be switching session at lunchtime to attend the Internet rules… for Higher Education workshop this afternoon.

The Digital Methods workshop is being chaired by Patrik Wikstrom (Digital Media Research Centre, Queensland University of Technology, Australia) and the speakers are:

  • Erik Borra (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Axel Bruns (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Jean Burgess (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Carolin Gerlitz (University of Siegen, Germany),
  • Anne Helmond (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Ariadna Matamoros Fernandez (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Peta Mitchell (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Richard Rogers (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Fernando N. van der Vlist (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Esther Weltevrede (Digital Methods Initiative, University of Amsterdam, the Netherlands).

I’ll be taking notes throughout but the session materials are also available here: http://tinyurl.com/aoir2016-digmethods/.

Patrik: We are in for a long and exciting day! I won’t introduce all the speakers as we won’t have time!

Conceptual Introduction: Situating Digital Methods (Richard Rogers)

My name is Richard Rogers, I’m professor of new media and digital culture at the University of Amsterdam and I have the pleasure of introducing today’s session. So I’m going to do two things, I’ll be situating digital methods in internet-related research, and then taking you through some digital methods.

I would like to situate digital methods as a third era of internet research… I think all of these eras thrive and overlap but they are differentiated.

  1. Web of Cyberspace (1994-2000): Cyberstudies was an effort to see difference in the internet, the virtual as distinct from the real. I’d situate this largely in the 90’s and the work of Steve Jones and Steve (?).
  2. Web as Virtual Society? (2000-2007) saw virtual as part of the real. Offline as baseline and “virtual methods” with work around the digital economy, the digital divide…
  3. Web as societal data (2007-) is about “virtual as indication of the real”. Online as baseline.

Right now we use online data about society and culture to make “grounded” claims.

So, if we look at Allrecipes.com Thanksgiving recipe searches on a map we get some idea of regional preference, or we look at Google data in more depth, we get this idea of internet data as grounding for understanding culture, society, tastes.

So, we had this turn in around 2008 to “web as data” as a concept. When this idea was first introduced not all were comfortable with the concept. Mike Thelwall et al (2005) talked about the importance of grounding the data from the internet. So, for instance, Google’s flu trends can be compared to Wikipedia traffic etc. And with these trends we also get the idea of “the internet knows first”, with the web predicting other sources of data.

Now I do want to talk about digital methods in the context of digital humanities data and methods. Lev Manovich talks about Cultural Analytics. It is concerned with digitised cultural materials, with materials clusterable in a sort of art-historical way – by hue, style, etc. And so this is a sort of big data approach that substitutes “continuous change” for periodisation and continuation for categorisation. So, this approach can, for instance, be applied to Instagram (Selfiexploration), looking at mood, aesthetics, etc. And then we have Culturomics, mainly through the Google Ngram Viewer. A lot of linguists use this to understand subtle differences as part of distant reading of large corpora.

And I also want to talk about e-social sciences data and method. Here we have Webometrics (Thelwall et al) with links as reputational markers. The other tradition here is Altmetrics (Priem et al), which uses online data to do citation analysis, with social media data.

So, at least initially, the idea behind digital methods was to be in a different space. The study of online digital objects, and also natively online method – methods developed for the medium. And natively digital is meant in a computing sense here. In computing software has a native mode when it is written for a specific processor, so these are methods specifically created for the digital medium. We also have digitized methods, those which have been imported and migrated methods adapted slightly to the online.

Generally speaking there is a sort of protocol for digital methods: Which objects and data are available? (links, tags, timestamps); how do dominant devices handle them? etc.

I will talk about some methods here:

1. Hyperlink

For the hyperlink analysis there are several methods. The Issue Crawler software, still running and working, enables you to see links between pages, direction of linking, aspirational linking… For example a visualisation of an Armenian NGO shows the dynamics of an issue network, showing the politics of association.

The other method that can be used here takes a list of sensitive sites, using Issue Crawler, and then parses them through an internet censorship service. And there are variations on this that indicate how successful attempts at internet censorship are. We do work on Iran and China and I should say that we are always quite thoughtful about how we publish these results because of their sensitivity.

2. The website as archived object

We have the Internet Archive and we have individual archived web sites. Both are useful but researcher use is not terribly significant so we have been doing work on this. See also a YouTube video called “Google and the politics of tabs” – a technique to create a movie of the evolution of a webpage in the style of timelapse photography. I will be publishing soon about this technique.

But we have also been looking at historical hyperlink analysis – giving you that context that you won’t see represented in archives directly. This shows the connections between sites at a previous point in time. We also discovered that the “Ghostery” plugin can also be used with archived websites – for trackers and for code. So you can see the evolution and use of trackers on any website/set of websites.

6. Wikipedia as cultural reference

Note: the numbering is from a headline list of 10, hence the odd numbering… 

We have been looking at the evolution of Wikipedia pages, understanding how they change. It seems that pages shift from neutral to national points of view… So we looked at Srebrenica and how that is represented. The pages here have different names, indicating differences in the politics of memory and reconciliation. We have developed a triangulation tool that grabs links and references and compares them across different pages. We also developed comparative image analysis that lets you see which images are shared across articles.

7. Facebook and other social networking sites

Facebook is, as you probably well know, a social media platform that is relatively difficult to pin down at a moment in time. Trying to pin down the history of Facebook is very hard – it hasn’t been in the Internet Archive for four years, and the site changes all the time. We have developed two approaches: one for social media profiles and interest data as a means of studying cultural taste and political preference, or “Postdemographics”; and “Networked content analysis”, which uses social media activity data as a means of studying “most engaged with content” – that helps with the fact that profiles are no longer available via the API. To some extent the API drives the research, but then taking a digital methods approach we need to work with the medium, and find which possibilities are there for research.

So, one of the projects undertaken within this space was elFriendo, a MySpace-based project which looked at the cultural tastes of “friends” of Obama and McCain during their presidential race. For instance Obama’s friends best liked Lost and The Daily Show on TV; McCain’s liked Desperate Housewives, America’s Next Top Model, etc. Very different cultures and interests.

Now the Networked Content Analysis approach, where you quantify and then analyse, works well with Facebook. You can look at pages and use data from the API to understand the pages and groups that liked each other, to compare memberships of groups etc. (at the time you were able to do this). In this process you could see specific administrator names, and we did this with right wing data working with a group called Hope not Hate, who recognised many of the names that emerged here. Looking at most liked content from groups you also see the shared values, cultural issues, etc.

So, you could see two areas of Facebook Studies: Facebook I (2006-2011), about presentation of self – profiles and interests studies (with ethics); and Facebook II (2011-), which is more about social movements. I think many social media platforms are following this shift – or would like to. So in Instagram Studies, Instagram I (2010-2014) was about selfie culture, but has shifted to Instagram II (2014-), concerned with antagonistic hashtag use for instance.

Twitter has done this and gone further… Twitter I (2006-2009) was about Twitter as an urban lifestyle tool (its origins) and “banal” lunch tweets – its own tagline of “what are you doing?”, a connectivist space; Twitter II (2009-2012) moved to elections, disasters and revolutions – the tagline is “what’s happening?” and we have metrics and “trending topics”; Twitter III (2012-) sees Twitter as a generic resource tool with commodification of data, stock market predictions, elections, etc.

So, I want to finish by talking about work on Twitter as a storytelling machine for remote event analysis. This is an approach we developed some years ago around the Iran election crisis. We made a tweet collection around a single Twitter hashtag – which is no longer done – and then ordered it by most retweeted (top 3 for each day) and presented them in chronological (not reverse) order. And we then showed those in huge displays around the world…
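
Note: as a rough illustration of that ordering (my own sketch, not the project’s actual code), something like this pandas snippet would reproduce the “top three most retweeted per day, shown chronologically” logic, assuming a collection with created_at, retweet_count and text columns:

```python
import pandas as pd

tweets = pd.read_csv("hashtag-collection.csv", parse_dates=["created_at"])  # placeholder file

top3_per_day = (tweets
                .sort_values("retweet_count", ascending=False)
                .groupby(tweets["created_at"].dt.date)
                .head(3)                      # three most retweeted tweets per day
                .sort_values("created_at"))   # then back into chronological order

print(top3_per_day[["created_at", "retweet_count", "text"]].to_string(index=False))
```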

To take you back to June 2009… Mousavi holds an emergency press conference. Voter turnout is 80%. SMS is down. Mousavi’s website and Facebook are blocked. Police use pepper spray… The first 20 days of most popular tweets is a good succinct summary of the events.

So, I’ve taken you on a whistle stop tour of methods. I don’t know if we are coming to the end of this. I was having a conversation the other day that the Web 2.0 days are over really, the idea that the web is readily accessible, that APIs and data is there to be scraped… That’s really changing. This is one of the reasons the app space is so hard to research. We are moving again to user studies to an extent. What the Chinese researchers are doing involves convoluted processes to getting the data for instance. But there are so many areas of research that can still be done. Issue Crawler is still out there and other tools are available at tools.digitalmethods.net.

Twitter studies with DMI-TCAT (Fernando van der Vlist and Emile den Tex)

Fernando: I’m going to be talking about how we can use the DMI-TCAT tool to do Twitter Studies. I am here with Emile den Tex, one of the original developers of this tool, alongside Erik Borra.

So, what is DMI-TCAT? It is the Digital Methods Initiative Twitter Capture and Analysis Toolset, a server-side tool which aims at robust and reproducible data capture and analysis. The design is based on two ideas: that captured datasets can be refined in different ways; and that the datasets can be analysed in different ways. Although we developed this tool, it is also in use elsewhere, particularly in the US and Australia.

So, how do we actually capture Twitter data? Some of you will have some experience of trying to do this. As researchers we don’t just want the data, we also want to look at the platform itself. If you are in industry you get Twitter data through a “data partner”, the biggest of which by far is GNIP – owned by Twitter as of the last two years – and then you just pay for it. But it is pricey. If you are a researcher you can go to an academic data partner – DiscoverText or Crimson Hexagon – and they are also resellers, but they are less costly. And then the third route is the publicly available data – the REST APIs, Search API and Streaming APIs. These are, to an extent, the authentic user perspective as most people use these… We have built around these, but the available data and APIs shape and constrain the design and the data.

For instance the “Search API” prioritises “relevance” over “completeness” – but as academics we don’t know how “relevance” is being defined here. If you want to do representative research then completeness may be most important. If you want to look at how Twitter prioritises the data, then that Search API may be most relevant. You also have to understand rate limits… This can constrain research, as different data has different rate limits.
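
Note: to make those constraints a bit more concrete, here is a rough sketch of my own (not TCAT’s code) of querying the public Search API with the tweepy library (3.x era, classic v1.1 endpoints); the credentials and the query are placeholders, and wait_on_rate_limit simply makes the library sleep through the rate-limit windows rather than fail:

```python
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")   # placeholder credentials
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

# wait_on_rate_limit makes tweepy pause when the 15-minute rate-limit window is exhausted
api = tweepy.API(auth, wait_on_rate_limit=True)

# Remember: the Search API returns what Twitter deems "relevant", not a complete record
for status in tweepy.Cursor(api.search, q="#COP21", count=100).items(1000):
    print(status.id_str, status.created_at, status.text)
```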

So there are many layers of technical mediation here, across three big actors: Twitter platform – and the APIs and technical data interfaces; DMI-TCAT (extraction); Output types. And those APIs and technical data interfaces are significant mediators here, and important to understand their implications in our work as researchers.

So, onto the DMI-TCAT tool itself – more on this in Borra & Rieder (2014) (doi:10.1108/AJIM-09-2013-0094). They talk about “programmed method” and the idea of the methodological implications of the technical architecture.

What can one learn if one looks at Twitter through this “programmed method”? Well (1) Twitter users can change their Twitter handle, but their ids will remain identical – sounds basic but it’s important to understand when collecting data. (2) The length of a tweet may vary beyond the maximum of 140 characters (mentions and urls). (3) Native retweets may have their top-level text property shortened. (4) There are unexpected limitations – support for new emoji characters can be problematic. (5) It is possible to retrieve a deleted tweet.

So, for example, a tweet can vary beyond 140 characters. The Retweet of an original post may be abbreviated… Now we don’t want that, we want it to look as it would to a user. So, we capture it in our tool in the non-truncated version.
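
Note: a small sketch (mine, not TCAT’s actual code) of what handling that truncation looks like against the classic v1.1 tweet JSON, where a native retweet carries the original tweet under retweeted_status:

```python
def full_text(tweet):
    """Return the tweet text as a user would see it, untruncated.

    `tweet` is a dict parsed from the classic v1.1 tweet JSON (illustrative only)."""
    rt = tweet.get("retweeted_status")
    if rt is not None:
        # The top-level "text" of a native retweet may be cut off at 140 characters;
        # rebuild it from the original tweet instead.
        return "RT @{}: {}".format(rt["user"]["screen_name"], rt["text"])
    return tweet["text"]
```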

And, on the issue of deletion and withholding: there are tweets deleted by users, and there are tweets which are withheld by the platform – and the withholding is a country-by-country issue, so you can see tweets that are only available in some countries. A project that uses this information is “Politwoops” (http://politwoops.sunlightfoundation.com/) which captures tweets deleted by US politicians, and lets you filter to specific states, party, position. Now there is an ethical discussion to be had here… We don’t know why tweets are deleted… We could at least talk about it.

So, the tool captures Twitter data in two ways. Firstly there are the direct capture capabilities (via the web front-end), which allow tracking of users and capture of public tweets posted by those users; tracking particular terms or keywords, including hashtags; and getting a small random sample (approx. 1%) of all public statuses. Secondary capture capabilities (via scripts) allow further exploration, including user ids, deleted tweets etc.
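
Note: as a rough illustration of those capture modes (again my own sketch, not how TCAT is implemented), the tweepy Streaming API wrapper exposes exactly these three options – tracking keywords, following user ids, or taking the ~1% random sample. Credentials and the user id are placeholders:

```python
import json
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")   # placeholder credentials
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

class CaptureListener(tweepy.StreamListener):
    """Write each incoming tweet as one JSON line, for later refinement and analysis."""

    def on_status(self, status):
        with open("capture.jsonl", "a") as f:
            f.write(json.dumps(status._json) + "\n")

    def on_error(self, status_code):
        return False  # disconnect on errors (e.g. 420 when rate limited)

stream = tweepy.Stream(auth, CaptureListener())
stream.filter(track=["#COP21", "climate change"])   # keyword/hashtag tracking
# stream.filter(follow=["123456"])                  # or tweets by specific user ids (placeholder id)
# stream.sample()                                   # or the ~1% random sample of all public statuses
```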

Twitter as a platform has a very formalised idea of sociality, the types of connections, parameters, etc. When we use the term “user” we mean it in the platform defined object meaning of the word.

Secondary analytical capabilities, via script, also allows further work:

  1. support for geographical polygons to delineate geographical regions for tracking particular terms or keywords, including hashtags.
  2. A built-in URL expander, following shortened URLs to their destination – allowing further analysis, including of which statuses point to the same URLs (a quick sketch follows this list).
  3. Download media (e.g. videos and images) attached to particular Tweets.
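
Note: the URL-expansion idea in point 2 is easy to sketch for yourself – follow a shortened URL’s redirects to its destination so that identical targets can be grouped. This is an illustrative snippet, not TCAT’s implementation, and the shortened URL is a placeholder:

```python
import requests

def expand_url(short_url, timeout=10):
    """Follow redirects and return the final destination URL (or the input on failure)."""
    try:
        # Some servers mishandle HEAD requests; a GET with stream=True follows
        # the redirects without downloading the response body.
        resp = requests.get(short_url, allow_redirects=True, timeout=timeout, stream=True)
        resp.close()
        return resp.url
    except requests.RequestException:
        return short_url

print(expand_url("https://t.co/example"))  # placeholder shortened URL
```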

So, we have this tool but what sort of studies might we do with Twitter? Some ideas to get you thinking:

  1. Hashtag analysis – users, devices etc. Why? They are often embedded in social issues (a quick sketch follows this list).
  2. Mentions analysis – users mentioned in contexts, associations, etc. allowing you to e.g. identify expertise.
  3. Retweet analysis – most retweeted per day.
  4. URL analysis – the content that is most referenced.
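
Note: to make idea 1 concrete, here is a quick sketch of hashtag analysis on an exported hashtag table using pandas – the file and column names are my assumptions about the export format rather than anything prescribed by TCAT:

```python
import pandas as pd

hashtags = pd.read_csv("mydataset-hashtags.csv", parse_dates=["created_at"])  # placeholder file

# Most used hashtags overall (lower-cased so that #COP21 and #cop21 are folded together)
print(hashtags["hashtag"].str.lower().value_counts().head(20))

# Hashtag volume per day, to spot the short sharp spikes tied to single moments
per_day = (hashtags.assign(hashtag=hashtags["hashtag"].str.lower())
                   .groupby([pd.Grouper(key="created_at", freq="D"), "hashtag"])
                   .size())
print(per_day.sort_values(ascending=False).head(20))
```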

So Emile will now go through the tool and how you’d use it in this way…

Emile: I’m going to walk through some main features of the DMI TCAT tool. We are going to use a demo site (http://tcatdemo.emiledentex.nl/analysis/) and look at some Trump tweets…

Note: I won’t blog everything here as it is a walkthrough, but we are playing with timestamps (the tool uses UTC), search terms etc. We are exploring hashtag frequency… In that list you can see Benghazi, tpp, etc. Now, once you see a common hashtag, you can go back and query the dataset again for that hashtag/search term… And you can filter down… And look at “identical tweets” to find the most retweeted content. 

Emile: Erik called this a list-making tool – it sounds dull but it is so useful… And you can then put the data through other tools. You can put tweets into Gephi. Or you can do exploration… We looked at the Getty Parks project, scraped images, reverse Google image searched those images to find the originals, checked the metadata for the camera used, and investigated whether the cost of a camera was related to the success in distributing an image…

Richard: It was a critique of user generated content.

Analysing Social Media Data with TCAT and Tableau (Axel Bruns)

My talk should be a good follow on from the previous presentation as I’ll be looking at what you can do with TCAT data outside and beyond the tool. Before I start I should say that both Amsterdam and QUT are holding summer schools – and we have different summers! – so do have a look at those.

You’ve already heard about TCAT so I won’t talk more about that except to talk about the parts of TCAT I have been using.

TCAT Data Export allows you to export all tweets from a selection – containing all of the tweets and information about them. You can also export a table of hashtags – tweet ids from your selection and hashtags; and mentions – tweet ids from your selection with mentions and mention type. You can export other things as well – known users (politicians, celebrities, etc.); URLs; etc. And the structure that emerges is the main TCAT export file (“full export”) with the associated hashtags, mentions and any other additional data. If you are familiar with SQL you are essentially joining databases here. If not then that’s fine, Tableau does this for you.
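
Note: the same join structure is easy to reproduce outside Tableau. A pandas sketch (file and column names are my assumptions about the export, not a prescribed format) would look like this, and it also shows why distinct tweet ids matter once tables are joined:

```python
import pandas as pd

tweets   = pd.read_csv("full-export.csv")        # placeholder file names
hashtags = pd.read_csv("hashtags-export.csv")
mentions = pd.read_csv("mentions-export.csv")

# Left joins on the tweet id: every tweet is kept, with hashtags/mentions attached where present
joined = (tweets
          .merge(hashtags, on="id", how="left", suffixes=("", "_hashtag"))
          .merge(mentions, on="id", how="left", suffixes=("", "_mention")))

# A tweet with several hashtags or mentions now appears on several rows, so count
# distinct tweet ids rather than rows (the same point made for Tableau below)
print(joined["id"].nunique(), "unique tweets across", len(joined), "joined rows")
```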

In terms of processing the data there are a number of tools here. Excel just isn’t good enough at scale – limited to around a million rows, and that Trump dataset was 2.8M already. So a tool that I and many others have been working with is Tableau. It’s a tool that copes with scale; it’s a user-friendly, intuitive, all-purpose data analytics tool, but the downside is that it is not free (unless you are a student or are using it in teaching). Alongside that, for network visualisation, Gephi is the main tool at the moment. That’s open source and free, and a new version came out in December.

So, into Tableau and an idea of what we can do with the data… Tableau enables you to work with data sources of any form – databases, spreadsheets, etc. So I have connected the full export I’ve gotten from TCAT… I have linked the main file to the hashtag and mention files. Then I have also generated an additional file that expands the URLs in that data source (you can now do this in TCAT too). This is a left join – one main table that other tables are connected to. I’ve connected based on (tweet) id. And the dataset I’m showing here is from the Paris 2015 UN Climate Change conference (COP21). And all the steps I’m going through today are in a PDF guidebook that is available via that session resources link (http://tinyurl.com/aoir2016-digmethods/).

Tableau then tries to make sense of the data… Dimensions are the datasets which have been brought in, clicking on those reveals columns in the data, and then you see Measures – countable features in the data. Tableau makes sense of the file itself, although it won’t always guess correctly.

Now, we’ve joined the data here so that can mean we get repetition… If a tweet has 6 hashtags, it might seem to be 6 tweets. So I’m going to use the unique tweet ids as a measure. And I’ll also right click to ensure this is a distinct count.

Having done that I can begin to visualise my data and see a count of tweets in my dataset… And I can see when they were created – using Created at but also then finessing that to Hour (rather than default of Year). Now when I look at that dataset I see a peak at 10pm… That seems unlikely… And it’s because TCAT is running on Brisbane time, so I need to shift to CET time as these tweets were concerned with events in Paris. So I create a new Formula called CET, and I’ll set it to be “DateAdd (‘hour’, -9, [Created at])” – which simply allows us to take 9 hours off the time to bring it to the correct timezone. Having done that the spike is 3.40pm, and that makes a lot more sense!
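
Note: the same timezone correction can be sketched outside Tableau too; a pandas equivalent (assuming a created_at column recorded in the capture server’s Brisbane time, UTC+10, shifted to CET, UTC+1) looks like this:

```python
import pandas as pd

tweets = pd.read_csv("full-export.csv", parse_dates=["created_at"])  # placeholder file

# Shift from the capture server's Brisbane time (UTC+10) to CET (UTC+1): nine hours earlier
tweets["created_at_cet"] = tweets["created_at"] - pd.Timedelta(hours=9)

# Unique tweets per hour in CET, to locate the real peak of activity
print(tweets.set_index("created_at_cet").resample("H")["id"].nunique())
```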

Having generated that graph I can click on, say, the peak activity and see the number of tweets and the tweets that appeared. You can see some spam there – of course – but also widely retweeted tweet from the White House, tweets showing that Twitter has created a new emoji for the summit, a tweet from the Space Station. This gives you a first quick visual inspection of what is taking place… And you can also identify moments to drill down to in further depth.

I might want to compare Twitter activity with number of participating users, comparing the unique number of counts (synchronising axes for scale). Doing that we do see that there are more tweets when more users are active… But there is also a spike that is independent of that. And that spike seems to be generated by Twitter users tweeting more – around something significant perhaps – that triggers attention and activity.

So, this tool enables quantitative data analysis as a starting point or related route into qualitative analysis, the approaches are really inter-related. Quickly assessing this data enables more investigation and exploration.

Now I’m going to look at hashtags, seeing the volume against activity. By default the hashtags are ordered alphabetically, but that isn’t that useful, so I’m going to reorder by use. When I do that you can see that COP21 – the official hashtag – is by far the most popular. These tweets were generated from that hashtag but also from several search terms for the conference – official abbreviations for the event. And indeed some tweets have “Null” hashtags – no hashtags, just the search terms. You also see variance in spelling and capitalisation. Unlike Twitter, Tableau is case sensitive so I would need to use some sort of formula to resolve this – combining terms to one hashtag. A quick way to do that is to use “LOWER([Hashtag])”, which converts all data in the hashtag field to lower case. That clustering shows COP21 as an even bigger hashtag, but also identifies other popular terms. We do see spikes in a given hashtag – often very brief – and these are often related to one very popular and heavily retweeted tweet. So, e.g. a prominent actor/figure has tweeted – e.g. in this data set Cara Delevingne (a British supermodel) triggers a short sharp spike in tweets/retweets.

And we can see these hashtags here, their relative popularity. But remember that my dataset is just based on what I asked TCAT to collect… TCOT might be a really big hashtag but maybe they don’t usually mention my search terms, hence being smaller in my data set. So, don’t be fooled into assuming some of the hashtags are small/low use just because they may not be prominent in a collected dataset.

Turning now to Mentions… We can see several mention types: original/null (no mentions); mentions; retweets. You also see that mentions and retweets spike at particular moments – tweets going viral, key figures getting involved in the event or the tweeting; it all gives you a sense of the choreography of the event…

So, we can now look at who is being mentioned. I’m going to take all Twitter users in my dataset… I’ll see how many tweets mention them. I have a huge Null group here – no mentions – so I’ll start by removing that. Among the most mentioned accounts we see COP21 being the biggest, and others such as Narendra Modi (chair of the event?), POTUS, UNFCCC, Francois Hollande, the UN, Mashi Rafael, COP21en – the English language event account; EPN (Enrique Peña Nieto); Justin Trudeau; StationCDRKelly; C Figueres; India4Climate; Barack Obama’s personal account, etc. And I can also see what kind of mention they get. And you see that POTUS gets mentions but no retweets, whilst Barack Obama has a few retweets but mainly mentions. That doesn’t mean he doesn’t get retweets, but not in this dataset/search terms. By contrast Station Commander Kelly gets almost exclusively retweets… The balance of mentions, how people are mentioned, what gets retweeted etc… That is all a starting point for closer reading and qualitative analysis.

And now I want to look at who tweets the most… And you’ll see that there is very little overlap between the people who tweet the most, and the people who are mentioned and retweeted. The one account there that appears in both is COP21 – the event itself. Now some of the most active users are spammers and bots… But others will be obsessive, super-active users… Further analysis lets you dig further. Having looked at this list, I can look at what sort of tweets these users are sending… And that may look a bit different… This uses the Mention type and it may be that one tweet mentions multiple users, so get counted multiple times… So, for instance, DiploMix puts out 372 tweets… But when re-looked at for mentions and retweets we see a count of 636. That’s an issue you have to get your head around a bit… And the same issue occurs with hashtags. Looking at the types of tweets put out show some who post only or mainly original tweets, some who do mention others, some only or mainly retweet – perhaps bots or automated accounts. For instance DiploMix retweets diplomats and politicians. RelaxinParis is a bot retweeting everything on Paris – not useful for analysis, but part of lived experience of Twitter of course.

So, I have lots of views of data, and sheets saved here. You can export tables and graphs for publications too, which is very helpful.

I’m going to finish by looking at URLs mentioned… I’ve expanded these myself, and I’ve got the domain/path as well as the domain captured. I remove the NULL group here. And the most popular linked to domain is Twitter – I’m going to combine http and https versions in Tableau – but Youtube, UN, Leader of Iran, etc. are most popular. If I dig further into the Twitter domains, looking at Path, I can see whose accounts/profiles etc. are most linked to. If I dig into Station Commander Kelly you see that the most shared of these URLs are images… And we can look at that… And that’s a tweet we had already seen all day – a very widely shared image of a view of earth.
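
Note: splitting expanded URLs into domain and path is straightforward to sketch in Python; counting the netloc also folds http and https versions of a domain together automatically (the URLs here are placeholders):

```python
from collections import Counter
from urllib.parse import urlparse

expanded_urls = [
    "https://twitter.com/StationCDRKelly/status/123456789",  # placeholder URLs
    "http://www.youtube.com/watch?v=abc",
    "https://twitter.com/COP21/status/987654321",
]

# Counting the network location (netloc) drops the scheme, so http:// and https://
# versions of the same domain are combined automatically
domain_counts = Counter(urlparse(u).netloc for u in expanded_urls)

# For twitter.com links, the first path segment is the linked-to account
twitter_accounts = Counter(urlparse(u).path.split("/")[1]
                           for u in expanded_urls
                           if urlparse(u).netloc.endswith("twitter.com"))

print(domain_counts.most_common(10))      # most linked-to domains
print(twitter_accounts.most_common(10))   # most linked-to Twitter accounts
```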

My time is up but I’m hoping this has been useful… This is the sort of approach I would take – exploring the data, using this as an entry point for more qualitative data analysis.

Analysing Network Dynamics with Agent Based Models (Patrik Wikström)

I will be talking about network dynamics and how we can understand some of the theory of network dynamics. And before I start a reminder that you can access and download all these materials at the URL for the session.

So, what are network dynamics? Well we’ve already seen graphs and visualisations of things that change over time. Network dynamics are very much about things that change and develop over time… So when we look at a corpus of tweets they are not all simultaneous, there is a dimension of time… And we have people responding to each other, to what they see around them, etc. So, how can we understand what goes on? We are interested in human behaviour, social behaviour, the emergence of norms and institutions, information diffusion patterns across multiple networks, etc. And these are complex and related to time, so we have to take time into account. We also have to understand how macro level patterns emerge from local interactions between heterogeneous agents, and how macro level patterns influence and impact upon those interactions. But this is hard…

It is difficult to capture complexity of such dynamic phenomena with verbal or conceptual models (or with static statistical models). And we can be seduced by big data. So I will be talking about using particular models, agent-based models. But what is that? Well it’s essentially a computer program, or a computer program for each agent… That allows it to be heterogeneous, autonomous and to interact with the environment and with other agents; that means they can interact in a (physical) space or as nodes in a network; and we can allow them to have (limited) perception, memory and cognition, etc. That’s something it is very hard for us to do and imagine with our own human brains when we look at large data sets.

The fundamental goal of this model is to develop a model that represents theoretical constructs, logics and assumptions and we want to be able to replicate the observed real-world behaviour. This is the same kind of approach that we use in most of our work.

So, a simple example…

Let’s assume that we start with some inductive idea. So we want to explain the emergence of the different social media network structures we observe. We might want some macro-level observations of Structure – clusters, path lengths, degree distributions, size; Time – growth, decline, cyclic; Behaviours – contagion, diffusion. So we want to build some kind of model to transfer or take our assumptions of what is going on, and translate that into a computer model…

So, what are our assumptions?

Well let’s say we think people use different strategies when they decide which accounts to follow, with factors such as familiarity, similarity, activity, popularity, randomness… They may all be different explanations of why I connect with one person rather than another… And let’s also assume that when a user joins Twitter they immediately start following a set of accounts, and once part of the network they add more. And let’s also assume that people are different – that’s really important! People are interested in different things – they have different passions, topics that interest them; some are more active, some are more passive. And that’s something we want to capture.

So, to do this I’m going to use something called NetLogo – which some of you may have already played with – it is a tool developed maybe 25 years back at Northwestern University. You can download it – or use a limited browser-based version – from: http://ccl.northwestern.edu/netlogo/.

In NetLogo we start with a 3 node network… I initialise the network and get three new nodes. Then I can add a new node… In this model I have a slider for “randomness” – if I set it to less random, it picks existing popular nodes, in the middle it combines popularity with randomness, and at most random it just adds nodes randomly…

So, I can run a simulation with about 200 nodes with randomness set to maximum… You can see how many nodes are present, how many friends the most popular node has, and how many nodes have very few friends (with 3, which is the minimum number of connections in this model). If I now change the formation strategy here to set randomness to zero… then we see the nodes connecting back to the same most popular nodes… A more broadcast-like network. This is a totally different kind of network.
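
Note: the workshop model is written in NetLogo, but the same idea is easy to sketch in Python with networkx – each new node makes three connections, and a randomness parameter mixes random attachment with popularity-based (preferential) attachment. This is my own illustrative version, not Patrik’s code:

```python
import random
import networkx as nx

def grow_network(n_nodes=200, links_per_node=3, randomness=1.0, seed=None):
    """Grow a network one node at a time, mixing random and popularity-based attachment."""
    rng = random.Random(seed)
    g = nx.complete_graph(links_per_node)          # small fully connected seed network
    for new in range(links_per_node, n_nodes):
        targets = set()
        while len(targets) < links_per_node:
            if rng.random() < randomness:
                # random attachment: any existing node is equally likely
                targets.add(rng.choice(list(g.nodes())))
            else:
                # preferential attachment: an endpoint of a random edge is picked,
                # which selects existing nodes proportionally to their degree
                u, v = rng.choice(list(g.edges()))
                targets.add(rng.choice((u, v)))
        g.add_node(new)
        g.add_edges_from((new, t) for t in targets)
    return g

random_net = grow_network(randomness=1.0, seed=1)  # fairly even degree distribution
broadcast  = grow_network(randomness=0.0, seed=1)  # a few very popular hub nodes
print(max(dict(random_net.degree()).values()),
      max(dict(broadcast.degree()).values()))
```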

Now, another simulation here toggles the size of nodes to represent number of followers… Larger blobs represent really popular nodes… So if I run this in random mode again, you’ll see it looks very different…

So, why am I showing you this? Well I like to show a really simple model. This is maybe 50 lines of code – you could build it in a few hours. The first message is that it is easy to build this kind of model. And even though we have a simple model we have at least 200 agents… We normally work with thousands or much greater scale, but you can still learn something here. You can see how to replicate the structure of a network. Maybe it is a starting point that requires more data to be added, but it is a place to start and explore. Even though it is a simple model you can use it to build theory, to guide data collection and so forth.

So, having developed a model you can set up a simulation to run hundreds of times, to analyse with your data analytics tools… So I’ve run my 200 node network, 5000 simulations, comparing randomness against the maximum number of links to a node – helping understand that different formation strategies create different structures. And that’s interesting but it doesn’t take us all the way. So I’d like to show you a different model that takes this a little bit further…
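
Running a model many times and summarising the outputs can be done with a small batch script; the sketch below uses two standard networkx generators as stand-ins for the “popularity-driven” and “random” formation strategies, and the run counts are arbitrary.

```python
import statistics
import networkx as nx

def max_degree(G):
    return max(d for _, d in G.degree())

# Stand-ins for the two formation strategies: preferential attachment
# (popularity-driven) versus purely random attachment, 200 nodes each.
runs = 100
popular = [max_degree(nx.barabasi_albert_graph(200, 3, seed=s)) for s in range(runs)]
randoms = [max_degree(nx.gnm_random_graph(200, 600, seed=s)) for s in range(runs)]

print("popularity-driven, mean max degree:", statistics.mean(popular))
print("random, mean max degree:", statistics.mean(randoms))
# The popularity-driven runs reliably produce much larger hubs.
```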

This model is an extension of the previous model – with all the previous assumptions – so you have two formation strategies, but also other assumptions we were talking about… That I am more likely to connect to accounts with shared interests, and with that we generate a simulation which is perhaps a better representation of the kinds of network we might see. And this accommodates the idea that this network has content, sharing, and other aspects that inform what is going on in the formation of that network. This visualisation looks pretty but the useful part is the output you can get at an aggregate level… We are looking at population level, seeing how local interactions influence macro level patterns and behaviours… We can look at in-degree distribution, we can look at out-degree… We can look at local clustering coefficients, longest/shortest path, etc. And my assumptions might be plausible and reasonable…
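
These aggregate outputs (in/out-degree distributions, clustering, path lengths) are easy to compute once the simulated network is in networkx; the generator below is only a stand-in for a simulated follower network.

```python
import collections
import networkx as nx

# Stand-in directed network; replace with the graph produced by your own model.
G = nx.DiGraph(nx.scale_free_graph(200, seed=1))   # collapse parallel edges
G.remove_edges_from(list(nx.selfloop_edges(G)))

in_degrees = collections.Counter(d for _, d in G.in_degree())
out_degrees = collections.Counter(d for _, d in G.out_degree())
clustering = nx.average_clustering(G.to_undirected())

# Path lengths are only defined within a connected part of the network
largest = max(nx.weakly_connected_components(G), key=len)
avg_path = nx.average_shortest_path_length(G.subgraph(largest).to_undirected())

print("in-degree distribution:", dict(sorted(in_degrees.items())))
print("out-degree distribution:", dict(sorted(out_degrees.items())))
print("average clustering:", round(clustering, 3), "average path length:", round(avg_path, 2))
```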

So you can build models that give a much deeper understanding of real world dynamics… We are building an artificial network BUT you can combine this with real world data – load a real world network structure into the model and look at diffusion within that network, and understand what happens when one node posts something, what impact would that have, what information diffusion would that have…
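
As a sketch of that “load a real network and look at diffusion” idea: the snippet below reads a saved network (the GraphML file name is a placeholder) or falls back to a synthetic one, then runs a simple independent-cascade spread from one node; the spreading probability is purely illustrative.

```python
import random
import networkx as nx

def independent_cascade(G, seed_node, p=0.05, rng=random.Random(1)):
    """Each newly activated node gets one chance to pass the item on
    to each neighbour with probability p (a simple diffusion model)."""
    active, frontier = {seed_node}, [seed_node]
    while frontier:
        nxt = []
        for node in frontier:
            for neighbour in G.neighbors(node):
                if neighbour not in active and rng.random() < p:
                    active.add(neighbour)
                    nxt.append(neighbour)
        frontier = nxt
    return active

# G = nx.read_graphml("my_real_network.graphml")  # hypothetical real-world export
G = nx.barabasi_albert_graph(500, 3, seed=2)      # synthetic stand-in network
reached = independent_cascade(G, seed_node=0)
print(f"A single post from node 0 reached {len(reached)} of {G.number_of_nodes()} nodes")
```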

So I’ve shown you NetLogo to play with these models. If you want to play around, that’s a great first step. It’s easy to get started with and it has been developed for use in educational settings. There is a big community and lots of models to use. And if you download NetLogo you can download that library of models. Pretty soon, however, I think you’ll find it too limited. There are many other tools you can use… But in general you can use any programming language that you want… Repast and Mason are very common tools. And they are based on Java or C++. You can also use an ABM Python module.

In the folder for this session there are some papers that give a good introduction to agent-based modelling… If we think about agent-based modelling and network theory there are some books I would recommend: Namatame & Chen: Agent-Based Modelling and Network Dynamics. For ABM look at Miller & Page; Gilbert & Troitzsch; Epstein. For network theory look at Jackson, Watts (& Strogatz), Barabasi.

So, three things:

Simplify! – You don’t need millions of agents. A simple model can be more powerful than a realistic one

Iterate! – Start simple and, as needed, build up complexity, add more features, but only if necessary.

Validate? – You can build models in a speculative way to guide research, to inform data collection… You don’t always have to validate that model as it may be a tool for your thinking. But validation is important if you want to be able to replicate and ensure relevance in the real world.

We started talking about data collection, analysis, and how we build theory based on the data we collect. After lunch we will continue with Carolin, Anne and Fernando on Tracking the Trackers. At the end of the day we’ll have a full panel Q&A for any questions.

And we are back after lunch and a little exposure to the Berlin rain!

Tracking the Trackers (Anne Helmond, Carolin Gerlitz, Esther Weltevrede and Fernando van der Vlist)

Carolin: Having talked about tracking users and behaviours this morning, we are going to talk about studying the media themselves, and of tracking the trackers across these platforms. So what are we tracking? Berry (2011) says:

“For every explicit action of a user, there are probably 100+ implicit data points from usage; whether that is a page visit, a scroll etc.”

Whenever a user makes an action on the web, a series of tracking features are enabled, things like cookies, widgets, advertising trackers, analytics, beacons etc. Cookies are small pieces of text that are placed on the user’s computer indicating that they have visited a site before. These are 1st party trackers and can be accessed by the platforms and webmasters. There are now many third party trackers such as Facebook, Twitter, Google, and many websites now place third party cookies on the devices of users. And there are widgets that enable this functionality with third party trackers – e.g. Disqus.

So we have first party tracker files – text files that remember, e.g. what you put in a shopping cart; third party tracker files used by marketers and data-gathering companies to track your actions across the web; you have beacons; and you have flash cookies.

The purpose of tracking varies, from functionality that is useful (e.g. the shopping basket example) to the increasingly prevalent use in profiling users and behaviours. The increasing use of trackers has resulted in them becoming more visible. There is lots of research looking at the prevalence of tracking across the web, from the Continuum project and the Guardian’s Tracking the Trackers project. One of the most famous plugins that allows you to see the trackers in your own browser is Ghostery – a browser plugin that you can install and which immediately detects different kinds of trackers, widgets, cookies and analytics tracking on the sites that you browse to… It shows these in a pop up. It allows you to see the trackers and to block trackers, or selectively block trackers. You may want to selectively block trackers as whole parts of websites disappear when you switch off trackers.

Ghostery detects via tracker library/code snippets (regular expressions). It currently detects around 2295 trackers – across many different varieties. The tool is not uncontroversial. It started as an NGO but was bought by analytics company Evidon in 2010, using the data for marketing and advertising.

So, we thought that if we, as researchers, want to look at trackers and there are existing tools, let’s repurpose existing tools. So we did that, creating a Tracker Tracker tool based on Ghostery. It takes up a logic of Digital Methods, working with lists of websites. So the Tracker Tracker tool has been created by the Digital Methods Initiative (2012). It allows us to detect which trackers are present on lists of websites and create a network view. And we are “repurposing analytical capabilities”. So, what sort of project can we use this with?
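
Conceptually, the detection step is just matching a library of tracker signatures (regular expressions) against the source of each page in your list. A minimal sketch of that logic, assuming the requests library and a tiny hand-made pattern list rather than Ghostery’s real database:

```python
import re
import requests

# Illustrative signatures only; Ghostery's real library is far larger and richer.
TRACKER_PATTERNS = {
    "Google Analytics": re.compile(r"google-analytics\.com/(analytics|ga)\.js"),
    "Facebook Connect": re.compile(r"connect\.facebook\.net"),
    "DoubleClick": re.compile(r"doubleclick\.net"),
}

def detect_trackers(url):
    """Fetch a page and return the names of any matching tracker signatures."""
    html = requests.get(url, timeout=10).text
    return [name for name, pattern in TRACKER_PATTERNS.items() if pattern.search(html)]

sites = ["https://example.com"]   # replace with your list of websites
for site in sites:
    print(site, detect_trackers(site))
```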

One of our first projects was on the Like Economy. Our starting point was the fact that social media widgets place cookies (Gerlitz and Helmond 2013), and where they are present. These cookies track both platform users and website users. We wanted to see how pervasive these cookies were on the web, and on the most used sites on the web.

We started by using Alexa to identify a collection of the 1000 most-visited websites. We inputted that into the Tracker Tracker tool (it’s only one button so options are limited!). Then we visualised the results with Gephi. And what did we get? Well, in 2012 only 18% of top websites had Facebook trackers – if we did it again today it would probably be different. This data may be connected to personal user profiles – when a user has been previously logged in and has a profile – but it is also being collected for non-users of Facebook; they create anonymous profiles but if they subsequently join Facebook that tracking data can be fed into their account/profile.

Since we did this work we have used this method on other projects. Now I’ll hand over to Anne to do a methods walkthrough.

Anne: Now you’ve had a sense of the method I’m going to do a dangerous walkthrough thing… And then we’ll look at some other projects here.

So, a quick methodological summary:

  1. Research question: type of tracker and sites
  2. Website (URL) collection making: existing expert list.
  3. Input list for Tracker Tracker
  4. Run Tracker Tracker
  5. Analyse in Gephi

So we always start with a research question… Perhaps we start with websites we wouldn’t want to find trackers on – where privacy issues are heightened, e.g. children’s websites, porn websites, etc. So, homework here – work through some research question ideas.

Today we’ll walk through what we will call “adult sites”. So, we will go to Alexa – which is great for locating top sites in categories, in specific countries, etc. We take that list, we put it into Tracker Tracker – choosing whether or not to look at the first level of subpages – and press the button. The tool then scans those websites against the Ghostery database, which now contains around 2600 possible trackers.

Carolin: Maybe some of you are wondering if it’s ok to do this with Ghostery? Well, yes, we developed Tracker Tracker in collaboration with Ghostery when it was an NGO, with one of their developers visiting us in Amsterdam. One other note here: if you use Ghostery on your machine, it may be different to your neighbour’s trackers. Trackers vary by machine, by location, by context. That’s something we have to take into account when requesting data. So for news websites you may, for instance, have more and more trackers generated the longer the site is open – this tool only captures a short window of time so may not gather all of the trackers.

Anne: Also in Europe you may encounter so-called cookie walls. You have to press OK to accept cookies… And the tool can’t emulate the user experience of clicking beyond the cookie walls… So zero trackers may indicate that issue, rather than no trackers.

Q: Is it server side or client side?

A: It is server side.

Q: And do you cache the tracker data?

A: Once you run the tool you can save the CSV and Gephi files, but we don’t otherwise cache.

Anne: Ghostery updates very frequently, so it is most useful to always check against the most up-to-date list of trackers.

So, once we’ve run the Tracker Tracker tool you get outputs that can be used in a variety of flexible formats. We will download the “exhaustive” CSV – which has all of the data we’ve found here.

If I open that CSV (in Excel) we can see the site, the scheme, the pattern that was used to find the tracker, the name of the tracker… This is very detailed information. So for these adult sites we see things like Google Analytics, the Porn Ad network, Facebook Connect. So, already, there is analysis you could do with this data. But you could also do further analysis using Gephi.
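
If you prefer to stay in code rather than Excel, the exhaustive CSV can be summarised and turned into a site–tracker network file for Gephi; the column names below are assumptions based on the description above, not the tool’s exact output schema.

```python
import pandas as pd
import networkx as nx

# Column names are assumptions based on the description above.
df = pd.read_csv("tracker_tracker_exhaustive.csv")

# Which trackers appear on the most sites?
print(df.groupby("tracker_name")["site"].nunique()
        .sort_values(ascending=False).head(10))

# Build a bipartite site-tracker network and save it for Gephi
G = nx.Graph()
for _, row in df.iterrows():
    G.add_node(row["site"], kind="site")
    G.add_node(row["tracker_name"], kind="tracker")
    G.add_edge(row["site"], row["tracker_name"])

nx.write_gexf(G, "site_tracker_network.gexf")   # open this in Gephi
```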

Now, we have steps of this procedure in the tutorial that goes with today’s session. So here we’ve coloured the sites in grey, and we’ve highlighted the trackers in different colours. The purple lines/nodes are advertising trackers for instance.

If you want to create this tracker map at home, you have all the steps here. And doing this work we’ve found trackers we’d never seen before – for instance the porn industry ad network DoublePimp (a play on DoubleClick) – and we’ve been able to see regional and geographic differences between trackers, which of course has interesting implications.

So, some more examples… We have taken this approach looking at Jihadi websites, working with e.g. governments to identify the trackers. And we found that they are financially dependent on advertising, including SkimLinks, DoubleClick and Google AdSense.

Carolin: And in almost all networks we encounter DoubleClick, AdSense, etc. And it’s important to know that webmasters enable these trackers, they have picked these services. But there is an issue of who selects you as a client – something journalists collaborating on this work raised with Google.

Anne: The other usage of these trackers has been in historical tracking analysis using the Internet Archive. This enables you to see the website in the context of a techno-commercial configuration, and to analyse it in that context. So for instance looking at New York Times trackers and the website as an ecosystem embedded in the wider context – in this case the number of trackers decreased, but that reflected commercial concentration, with companies buying each other and therefore reducing the range of trackers.
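
One way to approach this kind of historical analysis yourself is to pull one archived copy of a site per year from the Wayback Machine and run signature matching over each snapshot. A rough sketch, assuming the public Wayback CDX API behaves as expected and using a tiny illustrative pattern list:

```python
import re
import requests

PATTERNS = {  # illustrative signatures only, not Ghostery's real library
    "Google Analytics": re.compile(r"google-analytics\.com"),
    "DoubleClick": re.compile(r"doubleclick\.net"),
}

CDX = "http://web.archive.org/cdx/search/cdx"

def yearly_snapshots(url, start=2000, end=2016):
    """Get one archived snapshot per year from the Wayback Machine CDX API."""
    params = {"url": url, "output": "json", "from": str(start), "to": str(end),
              "filter": "statuscode:200", "collapse": "timestamp:4"}
    rows = requests.get(CDX, params=params, timeout=30).json()
    if not rows:
        return []
    header, entries = rows[0], rows[1:]
    ts, orig = header.index("timestamp"), header.index("original")
    return [f"http://web.archive.org/web/{e[ts]}/{e[orig]}" for e in entries]

for snapshot in yearly_snapshots("nytimes.com"):
    html = requests.get(snapshot, timeout=30).text
    print(snapshot, [name for name, pat in PATTERNS.items() if pat.search(html)])
```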

Carolin: We did some work called the Trackers Guide. We wanted to look not only at trackers, but also look at Content Delivery Networks, to visualise on a website how websites are not single items, but collections of data with inflows and outflows. The result became part artwork, part biological field guide. We imagined content and trackers as little biological cell-like clumps on the site, creating a whole booklet of this guide. So the image here shows the content from other spaces, content flowing in and connected…

Anne: We were also interested in what kind of data is being collected by these trackers. And also who owns these trackers. And also the countries these trackers are located in. So, we used this method with Ghostery. And then we dug further into those trackers. For Ghostery you can click on a tracker and see what kind of data it collects. We then looked at privacy policies of trackers to see what it claims to collect… And then we manually looked up ownership – and nationality – of the trackers to understand rules, regulations, etc. – and seeing where your data actually ends up.

Carolin: Working with Ghostery, and repurposing their technology, was helpful but their database is not complete. And it is biased to the English-speaking world – so it is particularly lacking in Chinese contexts for instance. So there are limits here. It is not always clear what data is actually being collected. BUT this work allows us to study invisible participation in data flows – which cannot be found in other ways; to study media concentration and the emergence of specific tracking ecologies. And in doing so it allows us to imagine alternative spatialities of the web – tracker origins and national ecologies. And it provides insights into the invisible infrastructures of the web.

Slides for this presentation: http://www.slideshare.net/cgrltz/aoir-2016-digital-methods-workshop-tracking-the-trackers-66765013

Multiplatform Issue Mapping (Jean Burgess & Ariadna Matamoros Fernandez)

Jean: I’m Jean Burgess and I’m Professor of Digital Media and Director of the DMRC at QUT. Ariadna is one of our excellent PhD students at QUT but she was previously at DMI so she’s a bridge to both organisations. And I wanted to say how lovely it is to have the DMRC and DMI connected like this today.

So we are going to talk about issue mapping, and the idea of using issue mapping to teach digital research methods, particularly with people who may not be interested in social media outside of their specific research area. And about issue mapping as an approach that sits outside the “influencers” narrative that is dominant in the marketing side of social media.

We are in the room with people who have been working in this space for a long time but I just want to raise that we are making connections to ANT and cultural and social studies. So, a few ontological things… Our approach combines digital methods and controversy analysis. We understand controversies to be discrete, acute and often temporary sites of intersectionality, bringing together different issues in new combinations. And drawing on Latour, Callon etc. we see controversies as generative. They can reveal the dynamics of issues, bring them together in new combinations, transform them and move them forward. And we undertake network and content analysis to understand relations among stakeholders, arguments and objects.

There are both very practical applications and more critical-reflexive possibilities of issue mapping. And we bring our own media studies viewpoint to that, with an interest in the vernacular of the space.

So, issue mapping with social media frequently starts with topical Twitter hashtags/hashtag communities. We then have iterative “issue inventories” – actors, hashtags, media objects from one dataset used as seeds on their own. We then undertake some hybrid network/thematic analysis – e.g. associations among hashtags; thematic network clusters. And we inevitably meet the issue of multi-platform/cross-platform engagement. And we’ll talk more about that.

One project we undertook, on #agchatoz – a community in Australia around weekly Twitter chats, but connected to a global community – explored the hashtag as a hybrid community. So here we looked at, for instance, the network of followers/followees in this network. And within that we were able to identify clusters of actors (across: Left-leaning Twitterati (30%); Australian ag, farmers (29%); Media orgs, politicians (13%); International ag, farmers (12%); Foodies (10%); Right-wing Australian politics and others), and this reveals some unexpected alliances or crossovers – e.g. between animal rights campaigners and dairy farmers. That suggests opportunities to bridge communities, to raise challenges, etc.

We have linked, in the files for this session, to various papers. One of these, Burgess and Matamoros-Fernandez (2016), looks at Gamergate and I’m going to show a visualisation of the YouTube video network (Rieder 2015; Gephi), which shows videos mentioned in tweets around that controversy, showing those that were closely related to each other.

Ariadna: My PhD is looking at another controversy, this one concerned with Adam Goodes, an Australian Rules footballer who was a high profile player until he retired last year. He has been a high profile campaigner against racism, and has called out racism on the field. He has been criticised for that by one part of society. And in 2014 he performed an Indigenous war dance on the pitch, which again received booing from the crowd and backlash. So, I start with Twitter, follow the links, and then move to those linked platforms and onwards…

Now I’m focusing on visual material, because the controversy was visual, it was about a gesture. So visual content (images, videos, GIFs) acts as a mediator of race and racism on social media. I have identified key media objects through qualitative analysis – important gestures, different image genres. And the next step has been to reflect on the differences between platform traces – YouTube related videos, the Facebook like network, Twitter filters, notice and take down automatic messages. That gives a sense of the community, the discourse, the context, exploring their specificities and how they contribute to the cultural dynamics of race and racism online.

Jean: And if you want to learn more, there’s a paper later this week!

So, we usually do training on this at DMRC #CCISS16 Workshops. We usually ask participants to think about YouTube and related videos – as a way to encourage people to think about networks other than social networks, and also to get to grips with Gephi.

Ariadna: Usually we split people into small groups and actually it is difficult to identify a current controversy that is visible and active in digital media – we look at YouTube and Tumblr (Twitter really requires prior collection of data). So, we go to YouTube to look for a key term, and we can then filter and find results changing… Usually you don’t reflect that much. So, if you look at “Black Lives Matter”, you get a range of content… And we ask participants to pick out relevant results – and what is relevant will depend on the research question you are asking. That first choice of what to select is important. Once this is done we get participants to use the YouTube Data Tools: https://tools.digitalmethods.net/netvizz/youtube/. This tool enables you to explore the network… You can use a video as a “seed”, or you can use a crawler that finds related videos… And that can be interesting… So if you see an Anti-Islamic video, does YouTube recommend more, or other videos related in other ways?

That seed leads you to related videos, and, depending on the depth you are interested in, videos related to the related videos… You can make selections of what to crawl, what the relevance should be. The crawler runs and outputs a Gephi file. So, this is an undirected network. Here nodes are videos, edges are relationships between videos. We generally use the layout: Force Atlas 2. And we run the Modularity Report to colour code the relationships on thematic or similar basis. Gephi can be confusing at first, but you can configure and use options to explore and better understand your network. You can look at the Data Table – and begin to understand the reasons for connection…
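
For those who want to script this step instead of (or alongside) Gephi, the exported network can also be read into Python; the sketch assumes you have exported the video network as GraphML (the file name is a placeholder), and uses a modularity-based community routine in networkx, which is comparable to, though not identical with, Gephi’s modularity report.

```python
import networkx as nx
from networkx.algorithms import community

# Placeholder file name: a GraphML export of the video network (e.g. from Gephi).
G = nx.read_graphml("video_network.graphml").to_undirected()

# Modularity-based clustering, roughly analogous to Gephi's modularity report
clusters = community.greedy_modularity_communities(G)
for i, cluster in enumerate(list(clusters)[:5]):
    print(f"Cluster {i}: {len(cluster)} videos")

# Inspect why videos are connected by looking at node attributes,
# much like Gephi's Data Table view
print(list(G.nodes(data=True))[:3])
```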

So, I have done this for Adam Goodes videos, to understand the clusters and connections.

So, we have looked at YouTube. Normally we move to Tumblr. But sometimes a controversy does not resonate on a different social media platform… So maybe a controversy on Twitter doesn’t translate to Facebook; or one on YouTube doesn’t resonate on Tumblr… Or keywords will vary greatly. It can be a good way to start to understand the cultures of the platforms. And the role of main actors etc. on response in a given platform.

With Tumblr we start with the interface – e.g. looking at BlackLivesMatter. We look at the interface, functionality, etc. And then, again, we have a tool that can be used: https://tools.digitalmethods.net/netvizz/tumblr/. We usually encourage use of the same timeline across Tumblr and YouTube so that they can be compared.

So we can again go to Gephi, visualise the network. And in this case the nodes and edges can look different. So in this example we see 20 posts that connect 141 nodes, reflecting the particular reposting nature of that space.

Jean: The very specific cultural nature of the different online spaces can make for very interesting stuff when looking at controversies. And those are really useful starting points into further exploration.

And finally, a reminder, we run our summer schools in DMRC in February. When it is summer! And sunny! Apply now at: http://dmrcss.org/!

Analysing and visualising geospatial data (Peta Mitchell)

Normally when I would do this as a workshop I’d give some theoretical and historical background on the emergence of geospatial data, and then move onto the practical workshop on Carto (formerly CartoDB). Today though I’m going to talk about a case study, around the G20 meeting in Brisbane, and then talk about using Carto to create a social media map.

My own background is a field increasingly known as the geohumanities or the spatial humanities. I did a close reading project of novels and films to create a Cultural Atlas of Australia, looking at how locations relate to narrative. For instance, almost all films are made in South Australia regardless of where they are set, so we mapped patterns of representation. We also created a CultureMap – an app that went with a map to alert you to literary or filmic places nearby that related back to that atlas.

I’ll talk about that G20 stuff. I now work on rapid spatial analytics; participatory geovisualisation and crowdsourced data; VGI – Volunteered Geographic Information; placemaking etc. But today I’ll be talking about emerging forms of spatial information/geodata, neogeographical tools etc.

So Gordon and de Souza e Silva (2011) talk about us witnessing the increasing proliferation of geospatial data. And this is sitting alongside a geospatial revolution – GPS enabled devices, geospatial data permeating social media, etc. So GPS emerged in the late ’90s/early 00s, with early social friend-finder functions. But the geospatial web really begins around 2000, the beginning of the end of the idea of the web as a “placeless space”. To an extent this came from a legal case brought by a French individual against Yahoo!, who were allowing Nazi memorabilia to be sold. That was illegal in France; Yahoo! claimed that the internet is global and that filtering by location wasn’t possible. A French judge found in favour of the individual, Yahoo! were told it was both doable and easy, and Yahoo! went on to financially benefit from IP-based location information. As Richard Rogers put it, that case was the “revenge of geography against the idea of cyberspace”.

Then in 2005 Google Maps was described by Jon Udell as a platform with the potential to be a “service factory for the geospatial web”. So in 2005 the “geospatial web” really is there as a term. By 2006 the concept of “Neogeography” was defined by Andrew (?) to describe the kind of non-professional, user-orientated, web 2.0-enabled mapping. There are critiques in cultural geography and in the geospatial literature about this term, and the use of the “neo” part of it. But there are multiple applications here, from the humanities to humanitarianism; from cultural mapping to crisis mapping. An example here is Ushahidi maps, where individuals can send in data and contribute to mapping of a crisis. Now Ushahidi is more of a platform for crisis mapping, and other tools have emerged.

So there are lots of visualisation tools and platforms. There are traditional desktop GIS – ArcGIS, QGIS. There is basic web-mapping (e.g. Google Maps); Online services (E.g. CARTO, Mapbox); Custom map design applications (e.g. MapMill); and there are many more…

Spatial data is not new, but there is a growth in ambient and algorithmic spatial data. So for instance ABC (TV channel in Australia) did some investigation, inviting audiences to find out as much as they could based on their reporter Will Ockenden’s metadata. So, his phone records, for instance, revealed locations, a sensitive data point. And geospatial data is growing too.

We now have a geospatial substratum underpinning all social media networks. So this includes check-in/recommendation platforms: Foursquare, Swarm, Gowalla (now defunct), Yelp; meetup/hookup apps: Tinder, Grindr, Meetup; YikYak; Facebook; Twitter; Instagram; and geospatial gaming: Ingress; Pokemon Go (from which Google has been harvesting improvements for its pedestrian routes).

Geospatial media data is generated from sources ranging from VGI (volunteered geographic information) to AGI (ambient geographic information), where users are not always aware that they are sharing data. That type of data doesn’t feel like crowdsourced data or VGI, hence the potential challenges and ethical complexity of AGI.

So, the promises of geosocial analysis include a focus on real-time dynamics – people working with geospatial data aren’t used to this… And we also see social media as a “sensor network” for crisis events. There is also potential to provide new insights into spatio-temporal spread of ideas and actions; human mobilities and human behaviours.

People do often start with Twitter – because it is easier to gather data from it – but only between 1% and 3% of tweets are located. But when we work at festivals we see around 10% carrying location data – partly the nature of the event, partly because tweets are often coming through Instagram… On Instagram we see between 20% and 30% of images georeferenced, but based on upload location, not where the image was taken.

There is also the challenge of geospatial granularity. On a tweet with Lat Long, that’s fairly clear. When we have a post tagged with a place we essentially have a polygon. And then when you geoparse, what is the granularity – street, city? Then there are issues of privacy and the extent to which people are happy to share that data.

So, in 2014 Brisbane hosted the G20, at a cost of $140 AUS for one highly disruptive weekend. In preceding G20 meetings there had been large scale protests. At the time the premier was former military and he put the whole central business district in lockdown, designated a “declared area” under new laws made for this event. And hotels for G20 world leaders were inside the zone. So, Twitter mapping is usually done during crisis events – but you don’t know where those will happen, where to track them, etc. In this case we knew in advance where to look. So, a Safety and Security Act (2013) was put in place for this event, requiring prior approval for protests; allowing arrests for the duration of the event and on-the-spot strip searches; and banning eggs in the Central Business District, no manure, no kayaks or floatation devices, no remote control cars or reptiles!

So we had these fears of violent protests, given all of these draconian measures. We had elevated terror levels. And we had war threatened after Abbott said he would “shirtfront” Vladimir Putin over MH17. But all that concern made city leaders worried that the city might be a ghost town, when they wanted it marketed as a new world city. They were offering free parking etc. to incentivise people to come in. And tweets reinforced the ghost town trope. So, what geosocial mapping enabled was a close to realtime sensor network of what might be happening during the G20.

So, the map we did was the first close to real time social media map that was public facing, using CartoDB, and it was never more than an hour behind reality. We had clear locations and clear keywords – e.g. G20 – to focus on. There were a very few false matches along the lines of “the meeting will now be held in G20”, but otherwise none. We tracked the data through the meeting… which ran over a weekend and bank holiday. This map parses around 17,000(?) tweets, most of which were not geotagged but geoparsed. Only 10% represent where someone was when they tweeted; the remaining 90% were located by geoparsing the content of the tweets.

Now, even though that declared area isn’t huge, there are over 300 streets there. I had to build a manually constructed gazetteer, using Open Street Map (OSM) data, and then new data. Picking a bounding box that included that area generated a whole range of features – but I wasn’t that excited about fountains, benches etc. I was looking for features people might actually mention in their tweets. So, I had a bounding box, and the declared area… It would have been ideal if the G20 had given me their bounding polygon, but we didn’t especially want to draw attention to what we were doing.

So, at the end we had lat, long, amenity (using OSM terms), name (e.g. Obama was at the Marriott so tweets about that), associated search terms – including local/vernacular versions of names of amenities; status (declared or restricted); and confidence (of location/coordinates – a score of 1 for geospatially tagged tweets, 0.8 for buildings, etc.). We could also create category maps of different data sets. On our map we showed geotagged and geoparsed tweets inside the area, but we only used geotweets outside the declared area. One of my colleagues created a Python script to “read” and parse tweets, and that generated a CSV. That CSV could then be fed into CartoDB. CartoDB has a time dimension, could update directly every half hour, and could use a Dropbox source to do that.
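
To give a flavour of that geoparsing step: the sketch below matches tweet text against a tiny, made-up gazetteer and assigns the kind of confidence scores described above; the place entries, coordinates, column names and file names are all illustrative, not the project’s actual script or data.

```python
import csv

# Tiny illustrative gazetteer in the spirit described above
# (name, lat/long, amenity, search terms, status, confidence).
GAZETTEER = [
    {"name": "Marriott Hotel", "lat": -27.4665, "lon": 153.0312, "amenity": "hotel",
     "terms": ["marriott"], "status": "declared", "confidence": 0.8},
    {"name": "South Bank", "lat": -27.4748, "lon": 153.0200, "amenity": "park",
     "terms": ["south bank", "southbank"], "status": "declared", "confidence": 0.7},
]

def geoparse(tweet):
    """Return (lat, lon, confidence): the native geotag if present,
    otherwise the first gazetteer place mentioned in the tweet text."""
    if tweet.get("lat") and tweet.get("lon"):
        return float(tweet["lat"]), float(tweet["lon"]), 1.0
    text = tweet["text"].lower()
    for place in GAZETTEER:
        if any(term in text for term in place["terms"]):
            return place["lat"], place["lon"], place["confidence"]
    return None

with open("g20_tweets.csv", newline="", encoding="utf-8") as f, \
     open("g20_tweets_located.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["text", "lat", "lon", "confidence"])
    for tweet in csv.DictReader(f):
        located = geoparse(tweet)
        if located:
            writer.writerow([tweet["text"], *located])
```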

So, did we see much disruption? Well, no… It was mostly celebrity spotting – the two most tweeted images were Obama with a koala and Putin with a koala. It was very hot and very secured so little disruption happened. We did see selfies with Angela Merkel, and images of the phallic motorcade. And after the G20 there was a complaint filed to the anti-corruption board about the cooling effect of security on participation, particularly in environmental protests. There was still engagement on social media, but not in-person. Disruption, protest and criticism were replaced by spectacle and distant viewing of the event.

And, with that, we turn to an 11 person panel session to wrap up and answer questions, etc.

Panel Session

Q1) Each of you presented different tools and approaches… Can you comment on how they are connected and how we can take advantage of that.

A1 – Jean) Implicitly or explicitly we’ve talked about possibilities of combining tools together in bigger projects. And tools that Peta and I have been working on are based on DMI tools for instance… It’s sharing tools, shared fundamental techniques for analytics for e.g. a Twitter dataset…

A1 – Richard) We’ve never done this sort of thing together… The fact that so much has been shared has been remarkable. We share quite similar outlooks on digital methods, and also on “to what end” – largely for the study of social issues and mapping social issues. But also other social research opportunities available when looking at a variety of online data, including geodata. It’s online web data analysis using digital methods for issue mapping and also other forms of social research.

A1 – Carolin) All of these projects are using data that hasn’t been generated for research, but which has been created for other purposes… And that pushes the analysis in its own way… And the tools that we combine bring in their own levels and inscriptions… Digital methods use these, but there is also a need to step back and reflect – present in all of the presentations.

Q2) A question especially for Carolin and Anne: what do you think about the study of proprietary algorithms? You talked a bit about the limitations of proprietary algorithms – for mobile applications etc.? I’m having trouble doing that…

A2 – Anne) I think in the case of the tracker tool, it doesn’t try to engage with the algorithm, it looks at presence of trackers. But here we have encountered proprietary issues… So for Ghostery, if you download the Firefox plugin you can access the content. We took the library of trackers from that to use as a database, we took that apart. We did talk to Ghostery, to make them aware… The question of algorithms… Of how you get to the blackbox things… We are developing methods to do this… One way in is to see the outputs, and compare those. Also Christian Sandvig is doing the auditing algorithms work.

A2 – Carolin) There was just a discussion on Twitter about the currency of algorithms and research on them… We’ve tried to ride on them, to implement that… Otherwise it is difficult. One element was on studying mobile applications. We are giving a presentation on this on Friday. Similar approach here, using infrastructures of app distribution and description etc. to look into this… Using existing infrastructures in which apps are built or encountered…

A2 – Anne) We can’t screenscrape and we are moving to this more closed world.

A2 – Richard) One of the best ways to understand algorithms is to save the outputs – e.g. we’ve been saving Google search outputs for years. Trying to save newsfeeds on Facebook, or other sorts of web apps can be quite difficult… You can use the API but you don’t necessarily get what the user has seen. The interface outputs are very different from developer outputs. So people think about recording rather than saving data – an older method in a way… But then you have the problem of only capturing a small sample of data – like analysing TV News. The new digital methods can mean resorting to older media methods… Data outputs aren’t as friendly or obtainable…

A2 – Carolin) This one strand is accessing algorithms via transparency; you can also think of them as situated and in context, seeing them in operation and in action in relation to the data, associated with outputs. I’d recommend Salam Marocca on the impact of big data, which sits in legal studies.

A2 – Jean) One of the ways we approach this is the “App Walkthrough”, a method Ben Light and I have worked on and which will shortly be published in New Media & Society; the idea is to think about those older media approaches, with user studies part of that…

Q3) What is your position as researchers on opening up data, and on doing ethically acceptable research on the other side? Do you take a stance, even a public stance, on these issues?

A3 – Anne) For many of these tools, like the YouTube tool and the Facebook tools, our developer took the conscious decision to anonymise that data.

A3 – Jean) I do have public positions. I’ve published on the political economy of Twitter… One interesting thing is that privacy discourses were used by Twitter to shut down TwapperKeeper at a time it was seeking to monetise… But you can’t just publish an archive of tweets with usernames, I don’t think anyone would find that acceptable…

A3 – Richard) I think it is important to respect or understand contextual privacy. People posting, on Twitter say, don’t have an expectation of its use in commercial or research uses. Awareness of that is important for a researcher, no matter what terms of service the user has signed/consented to, or even if you have paid for that data. You should be aware of and concerned about contextual privacy… Which leads to a number of different steps. And that’s why, for instance, in Netvizz – the Facebook tool – usernames are not available for comments made, even though FacePager does show that. Tools vary in that understanding. Those issues need to be thought about, but are not necessarily uniformly thought about by our field.

A3 – Carolin) But that becomes more difficult in spaces that require you to take part to research them – WhatsApp, for instance – researchers start pretending to be regular users… to generate insights.

Comment (me): on native vs web apps and approaches and potential for applying Ghostery/Tracker Tracker methods to web apps which are essentially pointing to URLs.

Q4) Given that we are beholden to commercial companies, changes to algorithms, APIs etc, and you’ve all spoken about that to an extent, how do you feel about commercial limitations?

A4 – Richard) Part of my idea of digital methods is to deal with ephemerality… And my ideal is to follow the medium… rather than to follow good data prescripts… If you follow that methodology, then you won’t be able to use web data or social media data… unless you either work with the corporation or a corporate data scientist – many issues there of course. We did work with Yahoo! on political insights… categorising search queries around a US election, which was hard to do from outside. But the point is that even on the inside, you don’t have all the insight or full access to all the data… The question arises of what can we still do… What web data work can we still do… We constantly ask ourselves that; I think digital methods is in part an answer to that, otherwise we wouldn’t be able to do any of this.

A4 – Jean) All research has limitations, and describing those is part of the role here… But also when Axel and I started doing this work we got criticism for not having a “representative sample”… And we have people from across the humanities and social sciences who seem to be using the same approaches and techniques but actually we are doing really different things…

Q5) Digital methods in social sciences looks different from anthropology where this is a classical “informant” problem… This is where digital ethnography is there and understood in a way that it isn’t in the social sciences…

Resources from this workshop:

Aug 162016
 

This is a very belated posting of my liveblog notes from the eLearning@Ed/LTW Monthly Meet Up #4 on Learning Design which took place on 25th April 2016. You can find further information on the event, and all of our speakers’ slides, on the eLearning@ed wiki.

Despite the delay in posting these notes, the usual cautionary notes apply, and that all corrections, additions, etc. are very much welcomed. 

Becoming an ELDeR – Fiona Hale, Senior eLearning Advisor, IS

Unfortunately I missed capturing notes for the very beginning of Fiona’s talk but I did catch most of it. As context please be aware that she was talking about a significant and important piece of work on Learning Design, including a scoping report by Fiona, which has been taking place over the last year. My notes start as she addresses the preferred formats for learning design training… 

We found that two-day workshops provided space to think, to collaborate, and had the opportunity to both gain new knowledge and apply it on the same day. And also really useful for academic staff to understand the range of colleagues in the room, knowing who they could and should follow up with.

The scoping report recommended developing reusable and collaborative learning design as a new university service within IS, which positions the learning design framework as a scaffold, support staff as facilitators, etc.

There are many recommendations here but in particular I wanted to talk about the importance of workshops being team based and collaborative in approach – bringing together programme team, course team, admin, LT, peer, student, IAD, IS support librarian, IS EDE, facilitator, all in the room. Also part of staff development, reward and recognition – tying into the UKPSF (HEA) and the Edinburgh Teaching Award. And ensuring this is an embedded process, with connection to processes, language, etc. with registry, board of studies, etc. And also with multiple facilitators.

I looked for frameworks and focused on three to evaluate. These tend to be theoretical, and don’t always work in practice. After trying those all out we found CAIeRO works best: it focuses on designing learning experiences over developing content, and has the structured format of the two-day workshop. And it combines pedagogy, technology and learner experience.

We have developed the CAIeRO into a slightly different form, the ELDeR Framework, with the addition of assessment and feedback.

Finally! Theory and Practice – Ruth McQuillan, Co-Programme Director, Master of Public Health (online)

Prior to the new MPH programme I have been working in online learning since 2011. I am part of a bigger team – Christina Matthews is our learning technologist and we have others who have come on board for our new programme. Because we had a new programme launching we were very keen to be part of it. So I’m going to talk about how this worked, how we felt about it, etc.

We launched the online MPH in September 2015, which involved developing lots of new courses but also modifying lots of existing courses. And we have a lot of new staff so we wanted to give a sense of building a new team – as well as learning for ourselves how to do it all properly.

So, the stages of the workshop we went through should give you a sense of it. I’ve been on lots of courses and workshops where you learn about something but you don’t have the practical application. And then you have a course to prepare in practice, maybe without that support. So having both aspects together was really good and helpful.

The course we were designing was for mid career professionals from across the world. We were split into two teams – with each having a blend of the kinds of people Fiona talked about – programme team and colleagues from IS and elsewhere. Both teams developed programme and course mission statements, then compared them, and happily those were quite close; we reached consensus and that really felt like we were pulling together as a team. And we also checked the course for consistency with the programme.

Next, we looked at the look and feel aspects. We used cards that were relevant for our course, using workshop cards and post it notes, rejecting non relevant cards, using our choice of the cards and some of our own additions.

So, Fiona talked about beginning with the end in mind, and we tried to do that. We started by thinking about what we wanted our students to be able to do at the end of the course. That is important as this is a professional course where we want to build skills and understanding. So, we wanted to focus on what they should know at the end of the course, and only then look at the knowledge they would need. And that was quite a different liberating approach.

And at this point we looked at the SCQF level descriptors to think about learning outcomes, the “On completion of this course you will be able to…” I’m not sure we’d appreciated the value and importance of our learning outcomes before, but actually in the end this was one of the most useful parts of the process. We looked for Sense (are they clear to the learner); Level (are they appropriate to the level of module); Accessibility (are they accessible).

And then we needed to think about assessment and alignment, looking at how we would assess the course, how this fitted into the bigger picture etc.

The next step was to storyboard the course. And by the end of Day One we had a five week course and a sixth week for assessment, we had learning outcomes and how they’d be addressed, assessment, learning activities, concerns, scaffolding. And we thought we’d done a great job! We came back on day two and spent maybe half a day recapping, changing… Even if you can’t do a 2 day workshop at least try to do two half days with a big gap between/overnight, as we found that space away very helpful.

And once finalised we built a prototype online. And we had a reality check from a critical friend, which was very helpful. We reviewed and adjusted and then made a really detailed action plan. That plan was really helpful.

Now, at the outset we were told that we could come into this process at any point. We had quite a significantly complete idea already and that helped us get real value from this process.

So, how did it feel and what did we learn? Well it was great to have a plan, to see the different areas coming together. The struggle was difficult but important, and it was excellent for team building. “To learn and not to do is really not to learn. To do and not to learn is really not to know.” And actually at the end of the day we were really enthusiastic about the process and it was really good to see that process, to put theory into practice, and to do this all in a truly collaborative experience.

How has it changed us? Well we are putting all our new courses through this process. We want to put all our existing courses through this process. We involved more people in the process, in different roles and stages, including students where we can. And we have modified the structure.

Q&A

Q1) Did you go away to do this?

A1) Yes, we went to Dovecot Gallery on Infirmary Street.

A1 – FH) I had some money to do that but I wasn’t kidding that a new space and nice food is important. We are strict on you being there, or not. We expect full-on participation. So for those going forward we are looking at rooms in other places – in Evolution House, or in Moray House, etc. Somewhere away from normal offices etc. It has to be focused. And the value of that is huge, the time up front is really valuable.

A1 – RM) It is also really important for understanding what colleagues are doing, which helps ensure the coherence of the programme, and it is really beneficial to the programme.

Q2) How different do you think your design would have ended up if you hadn’t done this?

A2 – RM) I think one of my colleagues was saying today that she was gently nudged by colleagues to avoid mistakes or pitfalls, to not overload the course, to ensure coherence, etc. I think it’s completely different to how it would have been. And also there were resources and activities – lectures and materials – that could be shared where gaps were recognised.

A2 – FH) If this had been content driven it would be hard as a facilitator. But thinking about the structure, the needs, the learner experience, that can be done, with content and expertise already being brought into that process. It saves time in the long run.

A2 – RM) I know in the past when I’ve been designing courses you can find that you put activities in a particular place without purpose, to make sure there is an activity there… But this process helped keep things clear, coherent and to ensure any activity is clearly linked to a learning outcome, etc.

Q3) Once you’d created the learning outcomes, did you go back and change any of them?

A3 – FH) On Day 2 there was something that wasn’t quite right…

A3 – RM) It was something too big for the course, and we needed to work that through. The course we were working on in February and that will run for the first time in the new academic year. But actually the UoE system dictates that learning outcomes should be published many months/more than a year in advance. So with new courses we did ask the board of studies if we could provide the learning outcomes to them later on, once defined. They were fine.

A3 – FH) That is a major change that we are working on. But not all departments run the same process or timetable.

A3 – RM) Luckily our board of studies were very open to this, it was great.

Q4) Was there any focus on student interaction and engagement in this process?

A4 – FH) It was part of those cards early in the process, it is part of the design work. And that stage of the cards, the consensus building, those are huge collaborative and valuable sessions.

Q5) And how did you support/require that?

A5 – FH) In that storyboard you will see various (yellow) post-its showing assessment and feedback woven in across the course, ensuring the courses you design really do align with that wider University strategy.

Learning Design: Paying It Forward – Christina Matthews

There is a shift across the uni to richer approaches.

I’m going to talk about getting learning technologists involved and why that matters.

The LT can inform the process in useful and creative ways. They can bring insights into particular tools, affordances, and ways to afford or constrain the behaviours of students. They also have a feel for the digital literacy of students, as well as being able to provide some continuity across the course in terms of approaches and tools. And with an LT in the design process, academic staff can feel supported and better able to take risks and do new things. And the LT can help ensure that nothing is lost between the design workshop and the actual online course and implementation.

So, how are we paying this forward? Well we are planning learning design workshops for all our new courses for 2015-16 and 2016-17. We really did feel the benefits of 2 days but we didn’t think it was going to be feasible for all of our teams. We felt that we needed to adapt the workshop to fit into one day, so we will be running these as one day workshops and we have prioritised particular aspects to enable that.

The two day workshop format for CAIeRO follows several stages:

  • Stage 1: Course blueprint (mission, learning outcomes, assessment and feedback)
  • Stage 2: Storyboarding
  • Stage 3: Rapid prototyping in the VLE
  • Stage 4: Critical friend evaluation of VLE prototype
  • Stage 5: adjust and review from feedback
  • Stage 6: Creating an action plan
  • Stage 7: reflecting on the workshop in relation to the UK Professional Standards Framework.

For the one day workshop we felt the blueprint (1), storyboard (2) and action plan (6) stages were essential. The prototyping can be done afterwards and separately, although it is a shame to do that of course.

So, we are reviewing and formalising our 1 day workshop model, which may be useful elsewhere. And we are using these approaches for all the courses on our programme, including new and existing courses. And we are very much looking forward to the ELDeR (Edinburgh Learning Design Roadmap).

Q&A

Q1) When you say “all” programmes, do you mean online or on-campus programmes?

A1) Initially the online courses, but we have a campus programme that we really want to connect up, to make the courses more blended, so I think it will feed into our on campus courses. A lot of our online tutors teach both online and on campus, so that will also lead to some feeding in here.

Q2) How many do you take to the workshop?

A2) You can have quite a few. We’ve had programme director, course leader, learning technologist, critical friends, etc.

A2 – FH) There are no observers in the room for workshops – lots of people want to understand that. There are no observers in the room, and you have to facilitate the learning objectives section very carefully. Too many people is not useful. Everyone has to be trusted, they have to be part of the process. You need a support librarian; the learning technologist has to squarely be part of the design; student, reality checker, QA… I’ve done at most 8 people. In terms of students you need to be able to be open and raw… So, is it OK to have students in the room? Some conversations being had may not be right for that co-creation type idea. Maybe alumni are better in some cases. Some schools don’t have their own learning technologist, so we bring one. Some don’t have a VLE, so we bring one they can play with.

A2 – CM) In the pilot there were 8 in some, but it didn’t feel like too many in the room.

Q3) As a learning technologist have the workshops helped your work?

A3 – CM) Yes, hugely. That action plan really maps out every stage very clearly. Things can come in last minute and all at the same time otherwise, so that is great. And when big things are agreed in the workshop, you can then focus on the details.

A3 – FH) We are trying to show how actually getting this all resolved up front actually saves money and time later on, as everything is agreed.

Q4) Thinking way ahead… People will do great things… So if we have the course all mapped out here, and well agreed, what happens when teams change – how do you capture and communicate this. Should you have a mini reprise of this to revisit it? How does it go over the long term?

A4 – FH) That’s really true. Also if the technologist isn’t the one delivering it, that can also be helpful.

A4 – CM) One thing that comes out of this is a CAIeRO planner that can be edited and shared, but yes, maybe you revisit it for future staff…

A4 – FH) Something about ownership of activities, to give the person coming in a feeling of ownership. And to see how it works before and afterwards. Pointing them to the documents, to the output of the storyboard, to get ownership. That’s key to facilitation too.

Q4) So, you can revisit activities etc. to achieve Learning outcome…

A4 – FH) That identification of learning outcomes is clear in the storyboards and documents.

Q5) How often do you meet and review programmes? Every 2 years, every 5 years?

A5 – FH) You should review every 5 years for PG.

Comment) We have an annual event, see what’s working and what isn’t and that is very very valuable and helpful. But that’s perhaps unusual.

A5 – FH) That’s the issue of last minute or isolated activities. This process is a good structure for looking at programme and course. Clearly programme has assessment across it so even though we are looking at the course here, it has that consistency. With any luck we can get this stuff embedded in board of studies etc.

A5 – RM) For us doing this process also changed us.

A5 – FH) That report is huge but the universities I looked at these processes are mandatory not optional. But mandatory can make things more about box ticking in some ways…

Learning Design: 6 Months on – Meredith Corey, School of Education 

We are developing a pilot UG course in GeoSciences and Education collaboration, Sustainability and Social Responsibility, running 2016/17. We are 2 online learning educators working from August 2015 to April 2016. This is the first online level 8 course for on-campus students. And there are plans to adapt the course for the wider community – including staff, alumni etc.

So in the three months before the CAIeRO session, we had started looking at existing resources, building a course team, investigating VLEs. The programme is on sustainability. We looked into types of resources and activities. And we had started drafting learning outcomes and topic storyboarding, with support from Louise Connelly who was (then) in IAD.

So the workshop was a 2 day event and we began with the blueprinting. We had similar ideas and very different ways to describe them so, what was very useful for us, was finding common language and ways to describe what we were doing. We didn’t drastically change our learning outcomes, but lots of debate about the wording. Trying to ensure the learning outcomes were appropriate for level 8 SCQF levels, trying not to overload them. And this whole process has helped us focus on our priorities, our vocabulary, the justification and clear purpose.

The remainder of the workshop was spent on storyboarding. We thought we were really organised in terms of content, videos, etc. But actually that storyboarding, after that discussion of priorities, was really useful. Our storyboard generated three huge A0 sheets to understand the content, the ways students would achieve the learning outcomes. It is an online course and there are things you don’t think about but need to consider – how do they navigate the course? How do they find what they need? And Fiona and colleagues were great for questioning and probing that.

We did some prototyping but didn’t have time for reality checks – but we have that process lined up for our pilot in the summer. We also took that storyboard and transferred that information to a huge Popplet that allowed us to look at how the feedback and feed forward fits into the course; how we could make that make sense across the course – it’s easy to miss that feedback and feed forward is too late when you are looking week by week.

The key CAIeRO benefits for us were around exploring priorities (and how these may differ for different cohorts); it challenged our assumptions; it formalised our process and this is useful for future projects; focused on all learners and their experience; and really helped us understand our purpose here. And coming soon we shall return to the Popplet to think about the wider community.

Q&A

Q1) I know with one course the head of school was concerned that an online programme might challenge the value of the face to face course, or even replace it, and how that fits together.

A1) The hope with this course is that the strength is that it brings together students from as many different schools as possible, to really deal with timetabling barriers, to mix students between schools. It would be good if both exist and complement each other.

A1 – FH) It’s not intended as a replacement… The mission statement for this course plays up interdisciplinary issues, and that includes use of OERs, reuse, etc. And talking about doing this stuff.

A1) And also the idea is to give students a great online learning experience that means they might go on and do online masters programmes. And hopefully include staff and alumni that also help that mix, that interdisciplinary thing.

Q2) Do you include student expectations in this course? What about student backgrounds?

A2) We have tried to ensure that tutorial groups play to student strengths and interests, making combinations across schools. We are trialling the course with evaluation through very specific questions.

A2 – FH) And there will be assessment that asks students to place that learning into their own context, location, etc.

Course Design and your VLE – Ross Ward

I want to talk quickly about how you translate a storyboard into your VLE, in very general terms. Taking your big ideas and making them a course. One thing I like to talk about a lot is user experience – you only need one bad experience in Learn or Moodle to really put you off. So you really need to think about ensuring the experience of the VLE and the experience of the course all fit together. How you manage or use your VLE is up to you. Once you know what you want to do, you can then pick your technology, fitting your needs. And you’ll need a mix of content, tools, activities, grades, feedback, guidance. If you are an ODL student how you structure that will be very very important; if blended it’s still important. You don’t need your VLE to be a filing cabinet, it can be much more. But it also doesn’t have to be a grand immersive environment, you need it to fit your needs appropriately. And the VLE experience should reflect the overall course experience.

When you have that idea of purpose, you hit the technology and you have kind of a blank canvas. It’s a bit Mona Lisa by numbers… The tools are there but there are easier ways to make your course better. The learning design idea of the storyboard and the user experience of the course context can be very helpful. That is really useful for ensuring students understand what they are doing, creating a digital version of your course, and understanding where you are right now as a student. Arguably a good VLE user experience is one where you could find what you are looking for without any prior knowledge of the course… We get many support calls from those simply looking for information. You may have some pre-requisite stuff, but you need to really make everything easy.

Navigation is key! You need menus. You need context links. You need suggested links. You want to minimise the number of clicks and complexity.

Remember that you should present your material for online, not like a textbook. Use sensible headings. Think about structure. And test it out – ask a colleague, ask a student, ask LTW.

And think about consistency – that will help ensure that you can build familiarity with approach, consistently presenting your programme/school brand and look and feel, perhaps also template.

We know this is all important, and we want to provide more opportunity to support that, with examples and resources to draw upon!

Closing Fiona Hale

Huge thanks to Ross for organising today. Huge thanks to our speakers today!

If you are interested in this work do find me at the end, do come talk to me. We have workshops coming up – and ELDeR workshop evaluations – where we’ll talk about design challenges and concerns. That might be learning analytics, or thinking about pace and workshops. For all of these we are addressing particular design challenges – the workshop can concertina to that. There is no rule about how long things take – whether one day or two days is the right number – and sometimes one won’t be enough.

I would say for students it’s worth thinking about sharing the storyboards, the assessment and feedback and reasons for it, so that they understand it.

We go into service in June and July, with facilitators across the schools. Do email me with questions, to offer yourselves as facilitators.

Thank you to all of our University colleagues who took part in this really interesting session!

You can read much more about the Edinburgh Learning Design roadmap – and read the full scoping report – on the University of Edinburgh Learning Design Service website.

Aug 102016
 
Nicola Osborne presenting the Digital Footprint poster at ECSM2016

It has been a while since I’ve posted something other than a liveblog here but it has been a busy summer so it seems like a good time to share some updates…

A Growing Digital Footprint

Last September I was awarded some University of Edinburgh IS Innovation Fund support to develop a pilot training and consultancy service to build upon the approaches and findings of our recent PTAS-funded Managing Your Digital Footprint research project.

During that University of Edinburgh-wide research and parallel awareness-raising campaign we (my colleague – and Digital Footprint research project PI – Louise Connelly of IAD/Vet School, myself, and colleagues across the University) sought to inform students of the importance of digital tracks and traces in general, particularly around employment and “eProfessionalism”. This included best practice advice around use of social media, personal safety and information security choices, and thoughtful approaches to digital identity and online presences. Throughout the project we were approached by organisations outside of the University for similar training, advice, and consulting around social media best practices and that is how the idea for this pilot service began to take shape.

Over the last few months I have been busy developing the pilot, which has involved getting out and about delivering social media training sessions for clients including NHS Greater Glasgow and Clyde (with Jennifer Jones); for the British HIV Association (BHIVA) with the British Association for Sexual Health and HIV (BASHH) (also with Jennifer Jones); developing a “Making an Impact with your Blog” Know How session for the lovely members of Culture Republic; leading a public engagement session for the very international gang at EuroStemCell, and an “Engaging with the Real World” session for the inspiring postgrads attending the Scottish Graduate School of Social Science Summer School 2016. I have also been commissioned by colleagues in the College of Arts, Humanities and Social Sciences to create an Impact of Social Media session and accompanying resources (the latter of which will continue to develop over time). You can find resources and information from most of these sessions over on my presentations and publications page.

These have been really interesting opportunities and I’m excited to see how this work progresses. If you do have an interest in social media best practice, including advice for your organisation’s social media practice, developing your online profile, or managing your digital footprint, please do get in touch and/or pass on my contact details. I am in the process of writing up the pilot and looking at ways myself and my colleagues can share our expertise and advice in this area.

Adventures in MOOCs and Yik Yak

So, what next?

Well, the Managing Your Digital Footprint team have joined up with colleagues in the Language Technology Group in the School of Informatics for a new project looking at Yik Yak. You can read more about the project, “A Live Pulse: Yik Yak for Understanding Teaching, Learning and Assessment at Edinburgh“, on the Digital Education Research Centre website. We are really excited to explore Yik Yak’s use in more depth as it is one of a range of “anonymous” social networking spaces that appear to be emerging as important alternative spaces for discussion as mainstream social media spaces lose favour/become too well inhabited by extended families, older contacts, etc.

Our core Managing Your Digital Footprint research also continues… I presented a paper, co-written with Louise Connelly, at the European Conference on Social Media 2016 this July on “Students’ Digital Footprints: curation of online presences, privacy and peer support”. This summer we also hosted visiting scholar Rachel Buchanan of University of Newcastle, Australia who has been leading some very interesting work into digital footprints across Australia. We are very much looking forward to collaborating with Rachel in the future – watch this space!

And, more exciting news: my lovely colleague Louise Connelly (University of Edinburgh Vet School) and I have been developing a Digital Footprint MOOC which will go live later this year. The MOOC will complement our ongoing University of Edinburgh service (run by IAD) and external consultancy work (led by us in EDINA). You can find out much more about that in this poster, presented at the European Conference on Social Media 2016, earlier this month…

Preview of Digital Footprint MOOC Poster

Alternatively, you could join me for my Cabaret of Dangerous Ideas 2016 show….

Cabaret of Dangerous Ideas 2016 - If I Googled You, What Would I Find? Poster

The Cabaret of Dangerous Ideas runs throughout the Edinburgh Fringe Festival but every performance is different! Each day academics and researchers share their work by proposing a dangerous idea, a provocative question, or a challenge, and the audience are invited to respond, discuss, ask difficult questions, etc. It’s a really fun show to see and to be part of – I’ve now been fortunate enough to be involved each year since it started in 2013. You can see a short video on #codi2016 here:

In this year’s show I’ll be talking about some of those core ideas around managing your digital footprint, understanding your online tracks and traces, and reflecting on the type of identity you want to portray online. You can find out more about my show, If I Googled You What Would I Find, in my recent “25 Days of CODI” blog post:

25 Days of CoDI: Day 18

You’ll also find a short promo film for the series of data, identity, and surveillance shows at #codi2016 here:

So… A very busy summer of social media, digital footprints, and exciting new opportunities. Do look out for more news on the MOOC, the YikYak work and the Digital Footprint Training and Consultancy service over the coming weeks and months. And, if you are in Edinburgh this summer, I hope to see you on the 21st at the Stand in the Square!

 

Jun 152016
 

Today I’m at the University of Edinburgh Principal’s Teaching Award Scheme Forum 2016: Rethinking Learning and Teaching Together, an event that brings together teaching staff, learning technologists and education researchers to share experience and be inspired to try new things and to embed best practice in their teaching activities.

I’m here partly as my colleague Louise Connelly (Vet School, formerly of IAD) will be presenting our PTAS-funded Managing Your Digital Footprint project this afternoon. We’ll be reporting back on the research, on the campaign, and on upcoming Digital Footprint work including our forthcoming Digital Footprint MOOC (more information to follow) and our recently funded (again by PTAS) project: “A Live Pulse: YikYak for Understanding Teaching, Learning and Assessment at Edinburgh”.

As usual, this is a liveblog so corrections, comments, etc. welcome. 

Velda McCune, Deputy Director of the IAD who heads up the learning and teaching team, is introducing today:

Welcome, it’s great to see you all here today. Many of you will already know about the Principal’s Teaching Award Scheme. We have funding of around £100k from the Development fund every year, since 2007, in order to look at teaching and learning – changing behaviours, understanding how students learn, investigating new education tools and technologies. We are very lucky to have this funding available. We have had over 300 members of staff involved and, increasingly, we have students as partners in PTAS projects. If you haven’t already put a bid in we have rounds coming up in September and March. And we try to encourage people, and will give you feedback and support and you can resubmit after that too. We also have small PTAS grants as well for those who haven’t applied before and want to try it out.

I am very excited to welcome our opening keynote, Paul Ashwin of Lancaster University, to kick off what I think will be a really interesting day!

Why would going to university change anyone? The challenges of capturing the transformative power of undergraduate degrees in comparisons of quality  – Professor Paul Ashwin

What I’m going to talk about is this idea of undergraduate degrees being transformative, and, as we move towards greater analytics, how we might measure that. And whilst metrics are flawed, we can’t just ignore them. This presentation is heavily informed by Lee Shulman’s work on Pedagogical Content Knowledge, which always sees teaching in context, and in the context of particular students and settings.

People often talk about the transformative nature of what their students experience. David Watson was, for a long time, the President for the Society of Higher Education (?) and in his presidential lectures he would talk about the need to be as hard on ourselves as we would be on others, on policy makers, on decision makers… He said that if we are talking about education as educational, we have to ask ourselves how and why this transformation takes place; whether it is a planned transformation; whether higher education is a necessary and/or sufficient condition for such transformations; whether all forms of higher education result in this transformation. We all think of transformation as important… But I haven’t really evidenced that view…

The Yerevan Communique (May 2015) talks about wanting to achieve, by 2020, a European Higher Education Area where there are common goals, where there is automatic recognition of qualifications, and where students and graduates can move easily through – what I would characterise as where Bologna begins. The Communique talks about higher education contributing effectively to build inclusive societies, founded on democratic values and human rights, where educational opportunities are part of European citizenship. And it ends in a statement that should be a “wow!” moment, valuing teaching and learning. But for me there is a tension: the comparability of undergraduate degrees is in conflict with the idea of the transformational potential of undergraduate degrees…

Now, critique is too easy, we have to suggest alternative ways to approach these things. We need to suggest alternatives, to explain the importance of transformation – if that’s what we value – and I’ll be talking a bit about what I think is important.

Working with colleagues at Bath and Nottingham I have been working on a project, the Pedagogic Quality and Inequality Project, looking at Sociology students and the idea of transformation at 2 top ranked (for sociology) and 2 bottom ranked (for sociology) universities, gathering data and information on the students’ experience and change. We found that league tables told you nothing about the actual quality of experience. We found that the transformational nature of undergraduate degrees lies in changes in students’ sense of self through their engagement with disciplinary knowledge. Students relate their personal projects to their disciplines and the world and see themselves implicated in knowledge. But it doesn’t always happen – it requires students to be intellectually engaged with their courses to be transformed by them.

To quote a student: “There is no destination with this discipline… There is always something further and there is no point where you can stop and say ‘I understood, I am a sociologist’… The thing is sociology makes you aware of every decision you make: how that would impact on my life and everything else…” And we found the students all reflecting that this idea of transformation was complex – there were gains but also losses. Now you could say that this is just the nature of sociology…

We looked at a range of disciplines, studies of them, and also how we would define transformation in several ways: the least inclusive account; the “watershed” account – the institutional type of view; and the most inclusive account. Mathematics has the richest studies in this area (Wood et al 2012) where the least inclusive account is “numbers”, the watershed is “models”, and the most inclusive is “approach to life”. Similarly Accountancy moves from routine work to moral work; Law from content to extension of self; Music from instrument to communicating; Geography from general world to interactions; Geoscience from composition of the earth to relations between earth and society. Clearly these are not all the same direction, but they are accents and flavours of the same thing. We are going to do a comparison next year on chemistry and chemical engineering, in the UK and South Africa, and actually this work points at what is particular to Higher Education being about engaging with a system of knowledge. Now, my colleague Monica McLean would ask why that’s limited to Higher Education, couldn’t it apply to all education? And that’s valid but I’m going to ignore it just for now!

Another student commented on transformations of all types, for example from wearing a tracksuit to lectures to no longer presenting themselves that way. Now that has nothing to do with the curriculum, this is about other areas of life. This student almost dropped out but the Afro-Caribbean society supported and enabled her to continue and progress through her degree. I have worked in HE and FE and the way students talk about that transformation is pretty similar.

So, why would going to university change anyone? It’s about exposure to a system of knowledge changing your view of self, and of the world. Many years ago an academic asked what the point of going to university was, given that much of the information students learn will be out of date. And the counter argument there is the engagement with seeing different perspectives, to see the world as a sociologist, to see the world as a geographer, etc.

So, to come back to this tension around the comparability of undergraduate degrees, and the transformational potential of undergraduate degrees. If we are about transformation, how do we measure it? What are the metrics for this? I’m not suggesting those will particularly be helpful… But we can’t leave metrics to what is easy to gather, we have to also look at what is important.

So if we think of the first area of comparability we tend to use rankings. National and international higher education rankings are a dominant way of comparing institutions’ contributions to student success. All universities have a set of figures that show them well. They have huge power as they travel across a number of contexts and audiences – vice chancellors, students, departmental staff. It moves context, it’s portable and durable. It’s nonsense but the strength of these metrics is hard to combat. They tend to involve unrelated and incomparable measures. Their stability reinforces privilege – higher status institutions tend to enrol a much greater proportion of privileged students. You can have some unexpected outcomes but you have to have Oxford, Cambridge, Edinburgh, UCL, Imperial all near the top or your league table is rubbish… Because we already know they are the good universities… Or at least those rankings reinforce the privilege that already exists, the expectations that are set. They tell us nothing about transformation of students. But are skillful performances shaped by generic skills or by students’ understanding of a particular task and their interactions with other people and things?

Now the OECD has put together a ranking concept on graduate outcomes, AHELO, which uses tests for e.g. physics and engineering – not surprising choices as they have quite international consistency, they are measurable. And they then look at generic tests – e.g. a deformed fish is found in a lake; using various press releases and science reports write a memo for policy makers. Is that generic? In what way? Students doing these tests are volunteers, which may not be at all representative. Are the skills generic? Education is about applying a way of thinking in an unstructured space, in a space without context. Now, the students are given context in these tests so it’s not a generic test. But we must be careful about what we measure as what we measure can become an index of quality or success, whether or not that is actually what we’d want to mark up as success. We have strategic students who want to know what counts… And that’s ok as long as the assessment is appropriately designed and set up… The same is true of measures of success and metrics of quality and teaching and learning. That is why I am concerned by AHELO but it keeps coming back again…

Now, I have no issue with the legitimate need for comparison, but I also have a need to understand what comparisons represent, how they distort. Are there ways to take account of students’ transformation in higher education?

I’ve been working with Rachel Sweetman at the University of Oslo on some key characteristics of valid metrics of teaching quality. For us reliability is much much more important than availability. So, we need ways to assess teaching quality that:

  • are measures of the quality of teaching offered by institutions rather than measures of institutional prestige (e.g. entry grades)
  • require improvements in teaching practices in order to improve performance on the measures
  • as a whole form a coherent set of metrics rather than a set of disparate measures
  • are based on established research evidence about high quality teaching and learning in higher education
  • reflect the purposes of higher education.

We have to be very aware of Goodhart’s law: we must be wary of any measure that becomes a performance indicator.

I am not someone with a big issue with the National Student Survey – it is grounded in the right things but the issue is that it is run each year, and the data is used in unhelpful distorted ways – rather than acknowledging and working on feedback it is distorting. Universities feel the need to label engagement as “feedback moments” as they assume a less good score means students just don’t understand when they have that feedback moment.

Now, in England we have the prospect of the Teaching Excellence Framework English White Paper and Technical Consultation. I don’t think it’s that bad as a prospect. It will include students’ views of teaching, assessment and academic support from the National Student Survey, non-completion rates, measures over three years etc. It’s not bad. Some of these measures are about quality, and there is some coherence. But this work is not based on established research evidence… There was great work here at Edinburgh on students’ learning experiences in UK HE; none of that work is reflected in TEF. If you were being cynical you could think they have looked at available evidence and just selected the more robust metrics.

My big issue with Year 2 TEF metrics is how and why these metrics have been selected. You need a proper consultation on measures, rather than using the White Paper and Technical Consultation to do that. The Office for National Statistics looked at the measures and found them robust but noted that the differences between institutions’ scores on the selected metrics tend to be small and not significant. Not robust enough to inform future work according to the ONS. It seems likely that peer review will end up being how we differentiate between institutions.

And there are real issues with TEF Future Metrics… This comes from a place of technical optimism that if you just had the right measures you’d know… This measure ties learner information to tax records for the “Longitudinal Education Outcomes data set” and “teaching intensity”. Teaching intensity is essentially contact hours… that’s game-able… And how on earth is that about transformation? It’s not a useful measure of that. Unused office hours aren’t useful, optional seminars aren’t useful… Keith Chigwell told me about a lecturer he knew who lectured a subject where each week fewer and fewer students came along. The last three lectures had no students there… He still gave them… That’s contact hours that count on paper but aren’t useful. That sort of measure seems to come more from ministerial dinner parties than from evidence.

But there are things that do matter… There is no mechanism outlined for a sector-wide discussion of the development of future metrics. What about expert teaching? What about students’ relations to knowledge? What about the first year experience – we know that that is crucial for student outcomes? Now the measures may not be easy, but they matter. And what we also see is the Learning Gains project, but they decided to work generically, and that also means you don’t understand students’ particular engagement with knowledge. In generic tests the description of what you can do ends up more important than what you actually do. You are asking for claims about what they can do, rather than having them perform those things. You can see why it is attractive, but it’s meaningless, it’s not a good measure of what Higher Education can do.

So, to finish, I’ve tried to put teaching at the centre of what we do. Teaching is a local achievement – it always shifts according to who the students are, what the setting is, and what the knowledge is. But that also always makes it hard to capture and measure. So what you probably need is a lot of different imperfect measures that can be compared and understood as a whole. However, if we don’t try, we allow distorting measures, which reinforce inequalities, to dominate. Sometimes the only thing worse than not being listened to by policy makers is being listened to by them. That’s when we see a Frankenstein’s monster emerge, and that’s why we need to recognise the issues, to ensure we are part of the debate. If we don’t try to develop alternative measures we leave it open to others to define.

Q&A

Q1) I thought that was really interesting. In your discussion of transformation of undergraduate students I was wondering how that relates to less traditional students, particularly mature students, even those who’ve taken a year out, where those transitions into adulthood are going to be in a different place and perhaps where critical thinking etc. skills may be more developed/different.

A1) One of the studies I talked about was at London Metropolitan University, which has a large percentage of mature students… And actually there the interactions with knowledge really did prove transformative… Often students lived at home with family, whether young or mature students. That transformation was very high. And it was unrelated to achievements. So some came in who had quite profound challenges and they had transformation there. But you have to be really careful about not suggesting different measures for different students… That’s dangerous… But that transformation was there. There is lots of research that’s out there… But how do we transform that into something that has purchase… recognising there will be flaws and compromises, but ensuring that voice in the debate. That it isn’t politicians owning that debate, that transformations of students and the real meaning of education is part of that.

Q2) I found the idea of transformation that you started with really interesting. I work in African studies and we work a lot on decolonial issues, and of the need to transform academia to be more representative. And I was concerned about the idea of transformation as a decolonial type issue, of being like us, of dressing like that… As much as we want to challenge students we also need to take on and be aware of the biases inherent in our own ways of doing things as British or Global academics.

A2) I think that’s a really important question. My position is that students come into Higher Education for something. Students in South Africa – and I have several projects there – who have nowhere to live, have very little, who come into Higher Education to gain powerful knowledge. If we don’t have access to a body of knowledge, that we can help students gain access to and to gain further knowledge, then why are we there? Why would students waste time talking to me if I don’t have knowledge. The world exceeds our ability to know it, we have to simplify the world. What we offer undergraduates is powerful simplifications, to enable them to do things. That’s why they come to us and why they see value. They bring their own biographies, contexts, settings. The project I talked about is based in the work of Basil Bernstein who argues that the knowledge we produce in primary research… But when we design curriculum it isn’t that – we engage with colleagues, with peers, with industry… It is transformed, changed… And students also transform that knowledge, they relate it to their situation, to their own work. But we are only a valid part of that process if we have something to offer. And for us I would argue it’s the access to body of knowledge. I think if we only offer process, we are empty.

Q3) You talked about learning analytics, and the issues of AHELO, and the idea of if you see the analytics, you understand it all… And that concept not being true. But I would argue that when we look at teaching quality, and a focus on content and content giving, that positions us as gatekeepers and that is problematic.

A3) I don’t see knowledge as content. It is about ways of thinking… But it always has an object. One of the issues with the debate on teaching and learning in higher education is the loss of the idea of content and context. You don’t foreground the content, but you have to remember it is there, it is the vehicle through which students gain access to powerful ways of thinking.

Q4) I really enjoyed that and I think you may have answered my question.. But coming back to metrics you’ve very much stayed in the discipline-based silos and I just wondered how we can support students to move beyond those silos, how we measure that, and how to make that work.

A4) I’m more course than discipline focused. With the first year of TEF the idea of assessing quality across a whole institution is very problematic, it’s programme level we need to look at. Inter-professional, interdisciplinary work is key… But one of the issues here is that it can be implied that that gives you more… I would argue that it gives you differently… It’s another new way of seeing things. But I am nervous of institutions, funders etc. who want to see interdisciplinary work as key. Sometimes it is the right approach, but it depends on the problem at hand. All approaches are limited and flawed, we need to find the one that works for a given context. So, I sort of agree but worry about the evangelical position that can be taken on interdisciplinary work which is often actually multidisciplinary in nature – working with others, not genuinely working in an interdisciplinary way.

Q5) I think to date we focus on objective academic ideas of what is needed, without asking students what they need. You have also focused on the undergraduate sector, but how applicable to the post graduate sector?

A5) I would entirely agree with your comment. That’s why pedagogic content matters so much. You have to understand your students first, as well as then also understanding this body of knowledge. It isn’t about being student-centered but understanding students and context and that body of knowledge. In terms of your question I think there is a lot of applicability for PGT. For PhD students things are very different – you don’t have a body of knowledge to share in the same way, that is much more about process. Our department is all PhD only and there process is central. That process is quite different at that level… It’s about contributing in an original way to that body of knowledge as its core purpose. That doesn’t mean students at other levels can’t contribute, it just isn’t the core purpose in the same way.

Parallel Sessions from PTAS projects: Social Media – Enhancing Teaching & Building Community? – Sara Dorman, Gareth James, Luke March

Gareth: It was mentioned earlier that there is a difference between the smaller and larger projects funded under this scheme – and this was one of the smaller projects. Our project was looking at whether we could use social media to enhance teaching and community in our programmes but also in wider areas. And we particularly wanted to look at the use of Twitter and Facebook, to engage students in course material but also to strengthen relationships. So we decided to compare the use of Facebook by Luke March in Russian Politics courses, with the use of Twitter and Facebook in African Politics courses that Sara and I run.

So, why were we interested in this project? Social media is becoming a normal area of life for students, in academic practice and increasingly in teaching (Blair 2013; Graham 2014). Twitter increasingly used, Facebook well established. It isn’t clear what the lasting impact of social media would be but Twitter especially is heavily used by politicians, celebrities, by influential people in our fields. 2014 data shows 90% of 18-24 year olds regularly using social media. For lecturers social media can be an easy way to share a link as Twitter is a normal part of academic practice (e.g. the @EdinburghPIR channel is well used), keeping staff and students informed of events, discussion points, etc. Students have also expressed interest in more community, more engagement with the subject area. The NSS also shows some overall student dissatisfaction, particularly within politics. So social media may be a way to build community, but also to engage with the wider subject. And students have expressed preference for social media – such as Facebook groups – compared to formal spaces like Blackboard Learn discussion boards. So, for instance, we have a hashtag #APTD – the name of one of our courses – which staff and students can use to share and explore content, including (when you search through) articles, documents etc. shared since 2013.

So, what questions did we ask? Well we wanted to know:

  • Does social media facilitate student learning and enhance the learning experience?
  • Does social media enable students to stay informed?
  • Does it facilitate participation in debates?
  • Do they feel more included and valued as part of the subject area?
  • Is social media complementary to VLEs like Learn?
  • Which medium works best?
  • And what disadvantages might there be around using these tools?

We collected data through a short questionnaire about awareness, usage, usefulness. We designed just a few questions that were part of student evaluation forms. Students had quite a lot to say on these different areas.

So, our findings… Students all said they were aware of these tools. There were slightly higher levels of awareness among Facebook users, e.g. Russian Politics, for both UG and PG students. Overall 80% said they were aware to some extent. When we looked at usage – meaning access of this space rather than necessarily meaningful engagement – we felt that usage of course materials on Twitter and Facebook does not equal engagement. Other studies have found students lurking more than posting/engaging directly. But, at least amongst our students (n=69), 70% used resources at least once. Daily usage was higher amongst Facebook users, i.e. Russian Politics. Twitter was more than twice as likely to have never been used.

We asked students how useful they found these spaces. Facebook was seen as more useful than Twitter. 60% found Facebook “very” or “somewhat useful”. Only a third described Twitter as “somewhat useful” and none said “very useful”. But there were clear differences between UG and PG students. UG students were generally more positive than PG students. They noted that it was useful and interesting to keep up with news and events, but not always easy to tie that back to the curriculum. Students used the word “interesting” a lot – for instance comparing historical to current events. More mixed responses included that there was plenty of material on Learn, so some didn’t use FB or Twitter. Another commented they wanted everything on Learn, in one place. One commented they don’t use Twitter so don’t want to follow the course there, and would prefer Facebook or Learn. Some commented that too many posts were shared, information overload. Students thought some articles were random, and couldn’t tell what was good and what was not.

A lot of these issues were also raised in focus group discussions. Students do appreciate sharing resources and staying informed, but don’t always see the connection to the course. They recognise the potential for debate and discussion but often it doesn’t happen, and when it does they find it intimidating for that to be in a space with real academics and others; indeed they prefer discussion away from tutors and academics on the course too. Students found Facebook better for network building but also found the social vs academic distinction difficult. Learn was seen as academic and safe, but also too clunky to navigate and engage in discussions. Students were concerned others might feel excluded. Some also commented that not liking or commenting could be hurtful to some. One student commented “it was kind of more like the icing than the cake” – which I think really sums it up.

Students commented that there was too much noise to pick through. And “I didn’t quite have the know-how to get something out of”. “I felt a bit intimidated and wasn’t sure if I should join in”. Others commented that they only use social media for social purposes – that it would be inappropriate to engage with academics there. Some saw Twitter as professional, Facebook as social.

So, some conclusions…

It seems that Facebook is more popular with students than Twitter, and seen as better for building community. Some differences between UG and PG students, with UG more interested. Generally less enthusiasm than anticipated. Students were interested in and aware of the benefits of joining in discussions but also wary of commenting too much in “public”. This suggests that we need to “build community” in order for the “community building” tools to really work.

There is also an issue of lack of integration between FB, Twitter and Learn. Many of our findings reflect others’, for instance Matt Graham in Dundee – who saw potential for HE humanities students. Facebook was more popular with their students than Twitter. He looked more at engagement and saw some students engaging more deeply with wider African knowledge. But one outcome was that student engagement did not occur or sustain without some structure – particular tasks and small nudges connected to Learning Outcomes, clear benefits flagged at the beginning, and students taking a lead in creating groups – which came out of our work too.

There are challenges here: inappropriate use, friending between staff and students for instance. Alastair Blair notes in an article that the utility of Twitter, despite the challenge, cannot be ignored. For academics thinking about impact it is important, but also for students it is important for alignment with wider subject area that moves beyond the classroom.

Our findings suggest that there is no need to rush into social media. But at the same time Sara and I still see benefits for areas like African Studies which is fast moving and poorly covered in the mainstream media. But the idea of students wanting to be engaged in the real world was clearly not carried through. Maybe more support and encouragement is needed for students – and maybe for staff too. And it would be quite interesting to see if and how students’ experiences of different politics and events – #indyref, #euref, etc. – differ. Colleagues are considering using social media in a course on the US presidential election, which might work out differently as students may be more confident to discuss these. The department has also moved forward with more presences for staff and students, also alumni.

Closing words from Matt Graham that encouraging students to question and engage more broadly with their subject is a key skill.

Q&A

Q1) What sort of support was in place, or guidelines, around that personal/academic identity thing?

A1) Actually none. We didn’t really realise this would happen. We know students don’t always engage in Learn. We didn’t really fully appreciate how intimidating students really found this. I don’t think we felt the need to give guidelines…

A1 – SD) We kind of had those channels before the course… It was organic rather than pedagogic…

Q1) We spoke to students who wanted more guidance especially for use in teaching and learning.

A1 – SD) We did put Twitter on the Learn page… to follow up… Maybe as academics we are the worst people to understand what students would do… We thought they would engage…

Q1) Will you develop guidelines for other courses…

A1) And a clearer explanation might encourage students to engage a bit more… Could be utility in doing some of that. University/institution wise there is cautious adoption and you see guidance issued for staff on using these things… But wouldn’t want overbearing guidance there.

Q1) We have some guidance under CC licence that you can use, available from Digital Footprints space.

Q2) Could you have a safer filtered space for students to engage. We do writing courses with international PG students and thought that might be useful to have social media available there… But maybe it will confuse them.

A2) There was a preference for a closed “safer” environment, talking only to students in their own cohort and class. I think Facebook is more suited to that sort of thing, Twitter is an open space. You can create a private Facebook group… One problem with Russian Politics was that they have a closed group… But had previous cohorts and friends of staff…

A2 – SD) We were trying to include students in real academia… Real tensions there over purpose and what students get out of it… The sense of not knowing… Some students might have security concerns but I think it was insecurity in academic knowledge. They didn’t see themselves as co-producers. That needs addressing…

A2) Students being reluctant to engage isn’t new, but we thought we might have more engagement in social media. Now this was the negative side but actually there was positive things here – that wider awareness, even if one directional.

Q3) I just wanted to ask more about the confidence to participate and those comments that suggested that was a bigger issue – not just in social media – for these students, similarly information seeking behaviour

A3) There is work taking place in SPS around study skills, approaching your studies. Might be some room to introduce this stuff earlier on in school-wide or subject-wide courses… Especially if we are to use these tools. I completely agree that by the end of these studies you should have these skills – how to write properly, how to look for information… The other thing that comes to mind having heard our keynote this morning is the issue of transformative process. It’s good to have high expectations of UG students, and they seem to rise to the occasion… But I think that we maybe need to understand the difference between UG and PG students… And in PG years they take that further more fully.

A3 – SD) UG are really big courses – which may be part of the issue. In PG they are much smaller… Some students are from Africa and may know more, some come in knowing very little… That may also play in…

Q4) On the UG/PG thing these spaces move quickly! Which tools you use will change quickly. And actually the type of thing you post really matters – sharing a news article is great, but how you discuss and create follow up afterwards – did you see that, the follow up, the creation, the response…

A4 – SD) Students did sometimes interact… But the people who would have done that with email/Learn were the same that used social media in that way.

A4) Facebook and Twitter are still new technologies… So perhaps students will become more engaged and informed and up for engaging in these spaces. I’m still getting to grips with the etiquette of Twitter. There was more discussion on Facebook Groups than on Twitter… But it can also be very surface level learning… It complements what we are doing but there are challenges to overcome… And we have to think about whether that is worthwhile. Some real positives and real challenges.

Parallel Sessions from PTAS projects: Managing Your Digital Footprint (Research Strand) – Dr Louise Connelly 

This was one of the larger PTAS-funded projects. It is called the “Research Strand” because it ran in parallel to the campaign, which was separately funded.

There is so much I could cover in this presentation so I’ve picked out some areas I think will be practical and applicable to your research. I’m going to start by explaining what we mean by “Digital Footprint” and then talk more about our approach and the impact of the work. Throughout the project and campaign we asked students for quotes and comments that we could share as part of the campaign – you’ll see these throughout the presentation but you can also use these yourself as they are all CC-BY.

The project wouldn’t have been possible without an amazing research team. I was PI for this project – based at IAD but I’m now at the Vet School. We also had Nicola Osborne (EDINA), Professor Sian Bayne (School of Education). We also had two research students – Phil Sheail in Semester 1 and Clare Sowton in Semester 2. But we also had a huge range of people across the Colleges and support services who were involved in the project.

So, I thought I’d show you a short video we made to introduce the project:

YouTube Preview Image

The idea of the video was to explain what we meant by a digital footprint. We clearly defined what we meant because what we wanted to emphasise to students and staff – though students were the focus – was that your footprint is not just what you do but also what other people post about you, or leave behind about you. That can be quite scary to some so we wanted to address how you can have some control over that.

We ran a campaign with lots of resources and materials. You can find loads of materials on the website. That campaign is now a service based in the Institute for Academic Development. But I will be focusing on the research in this presentation. This all fitted together in a strategy. The campaign was to raise awareness and provide practical guidance; the research sought to gain an in-depth understanding of students’ usage and produce resources for schools, and then to feed into learning and teaching on an ongoing basis. Key to the research was a survey we ran during the campaign, which was analysed by the research team.

In terms of the gap and scope of the campaign I’d like to take you back to the Number 8 bus… It was an idea that came out of myself and Nicola – and others – being asked regularly for advice and support. There was a real need here, but also a real digital skills gap. We also saw staff wanting to embed social media in the curriculum and needing support. The brainwave was that social media wasn’t the campaign that was needed, it was about digital footprint and the wider issues. We also wanted to connect to current research. boyd (2014), who works on networked teens, talks about the benefits as well as the risks… as it is unclear how students are engaging with social/digital media and how they are curating their online profiles. We also wanted to look at the idea of eprofessionalism (Chester et al 2013), particularly in courses where students are treated as paraprofessionals – a student nurse, for instance, could be struck off before graduating because of social media behaviours, so there is a very real need to support and raise awareness amongst students.

Our overall research aim was to: work with students across current delivery modes (UG, PGT, ODL, PhD) in order to better understand how they…

In terms of our research objectives we wanted to: conduct research which generates a rich understanding; to develop a workshop template – we ran 35 workshops for over 1000 students in that one year; to critically analyse social media guidelines – it was quite interesting that a lot of it was about why students shouldn’t engage, with little on the benefits; to work in partnership with EUSA – important to engage around e.g. campaign days; to contribute to the wider research agenda; and to effectively disseminate project findings – we engaged with support services, e.g. we worked with Careers on their LinkedIn workshops, which weren’t well attended despite students wanting help with their professional presence, and just rebranding the sessions was valuable. We asked students where they would seek support – many said the Advice Place rather than e.g. IS, so we spoke to them. We spoke to the Counselling service too about cyberbullying, revenge porn, sexting etc.

So we ran two surveys with a total of 1,457 responses. Nicola and I ran two lab-based focus groups. I interviewed 6 individuals over a range of interviews with ethnographic tracing. And we gathered documentary analysis of e.g. social media guidelines. We used mixed methods as we wanted this to be really robust.

Sian and Adam really informed our research methods but Nicola and I really led the publications around this work. We have had various publications and presentations including presentations at the European Conference on Social Media, for the Social Media for Higher Education Teaching and Learning conference. Also working on a Twitter paper. We have other papers coming. Workshops with staff and students have happened and are ongoing, and the Digital Ambassador award (Careers and IS) includes Digital Footprint as a strand. We also created a lot of CC-BY resources – e.g. guidelines and images. Those are available for UoE colleagues, but also for national and international community who have fed into and helped us develop those resources.

I’m going to focus on some of the findings…

The survey was run on Bristol Online Survey. It was sent to around a third of all students, across all cohorts. The central surveys team did the ethics approval and issuing of surveys. Timing had to fit around other surveys – e.g. NSS etc. And we had relatively similar cohorts in both surveys; the second had more responses but that was after the campaign had been running for a while.

So, two key messages from the surveys: (1) Ensure informed consent – crucial for students (also important for staff) – students need to understand the positive and negative implications of using these non-traditional, non-university social media spaces. In terms of what that means – guidance, some of the digital skills gap support etc. Also (2) Don’t assume what students are using and how they are using it. Our data showed age differences in what was used, cohort differences (UG, PGT, ODL, PhD), lack of awareness of e.g. T&Cs, and benefits – some lovely anecdotal evidence, e.g. a UG informatics student approached by employers after sharing code on GitHub. Also the importance of not making assumptions around personal/educational/professional environments – which especially came out of the interviews – and generally the implications of Digital Footprint. One student commented on being made to have a Twitter account for a course and not being happy about not having a choice in that (e.g. through embedding of tweets in Learn for instance).

Thinking about platforms…

Facebook is used by all cohorts but ODL less so (perhaps a geographic issue in part). Most were using it as a “personal space” and for study groups. Challenges included privacy management. Also issues of isolation if not all students were on Facebook.

Twitter is used mainly by PGT and PhD students, and most actively by 31-50 year olds. Lots of talk about how to use this effectively.

One of the surprises for us was that we thought most courses using social media would have guidelines in place for the use of social media in programme handbooks. But students reported them not being there, or not being aware of it. So we created example guidance which is on the website (CC-BY) and also an eprofessionalism guide (CC-BY) which you can also use in your own programme handbooks.

There were also tools we weren’t aware were in usage and that has led to a new YikYak research project which has just been funded by PTAS and will go ahead over the next year with Sian Bayne leading, myself, Nicola and Informatics. The ethnographic tracing and interviews gave us a much richer understanding of the survey data.

So, what next? We have been working with researchers in Ireland, Australia, New Zealand… EDINA has had some funding to develop an external facing consultancy service, providing training and support for NHS, schools, etc. We have the PTAS funded YikYak project. We have the Digital Footprint MOOC coming in August. The survey will be issued again in October. Lots going on, more to come!

We’ve done a lot and we’ve had loads of support and collaboration. We are really open to that collaboration and work in partnership. We will be continuing this project into the next year. I realise this is the tip of the iceberg but it should be food for thought.

Q&A 

Q1) We were interested in the staff capabilities

A1 – LC) We have run a lot of workshops for staff and research students, and done a series at the Vet School. There’s a digital skills issue, research, learning and teaching, and personal strands here.

A1 – NO) There were sessions and training for staff before… And much of the research into social media and digital footprint has been based on very small cohorts in very specific areas.

Comment) I do sessions for academic staff in SPS, but I didn’t know about this project so I’ll certainly work that in.

A1 – LC) We did do a session for fourth year SPS students. I know business school are all over this as part of “Brand You”.

Q2) My background was in medicine and, when I was working in a hospital, a scary colleague told junior doctors to delete their Facebook profiles! She was googling them. I saw an article in the Sun that badly misrepresented doctors – of doctors living the “high life” because there was something sunny.

A2 – LC) You need to be aware people may Google you… And be confident of your privacy and settings. And your professional body guidelines about what you have there. But there are grey areas there… We wanted to emphasise informed choice. You have the Right to be Forgotten law for instance. Many nursing students already knew restrictions but felt Facebook restrictions unfair… A recent article says there are 3.5 degrees of separation on Facebook – that can be risky… In teaching and learning this raises issues of who friends who, what you report… etc. The culture is we do use social media, and in many ways that’s positive.

A2 – NO) Medical bodies have very clear guidance… But just knowing that, for example, profile pictures are always public on Facebook while you can control settings elsewhere… Knowing that means you can make informed decisions.

Q3) What is “Brand You”?

A3) Essentially it's about thinking of yourself as a brand: how your presences are used, what is consistent, how you use your name, your profile images. And how to do that effectively if you choose to. There is a book called "Brand You" which is about effective online presence.

Closing Keynote: Helen Walker, GreyBox Consulting and Bright Tribe Trust

I’m doing my Masters in Digital Education with University of Edinburgh, but my role is around edtech, and technology in schools, so I am going to share some of that work with you. So, to set the scene a wee video: Kids React to Technology: Old Computers:

Watching the kids try to turn on the machine it is clear that many of us are old enough to remember how to work late 1970s/early 1980s computers and their less than intuitive user experience.

So the gaps are maybe not that wide anymore… But there are still gaps. The gap, for instance, between what students experience at school and what they can do at home – and that can be huge. There is also a real gap between EdTech promises and delivery – there are many practitioners who are energised about new technologies and have high expectations. We also have to be aware of the reality of skills – and be very cautious of Prensky's (2001) idea of the "digital native", and how intoxicating and inaccurate that can be.

There is also a real gap between industry and education. There is so much investment in technology, and so many promises of technology. Meanwhile we also see the perspective of some that computers do not benefit pupils. Worse, in September 2015 the OECD reported – and it was widely re-reported – that computers do not improve pupil results, and may in fact be detrimental. That risks a retreat to before technology, or technology being treated as the icing on the cake… And then you read the report:

“Technology can amplify great teaching but great technology cannot replace poor teaching.”

Well of course. Technology has to be pedagogically justified. And that report also encourages students as co-creators. Now if you go to big education technology shows like BETT and SETT you see very big rich technology companies offering expensive technology solutions to quite poor schools.

That reflects the Education Endowment Fund's 2012 report, which found that "it's the pedagogy, not the technology" – technology is a catalyst for change. Glynis Cousins says that technology has to work dynamically with pedagogy.

Now, you have fabulous physical and digital resources here. There is the issue of what schools have. Schools often have machines that are 9-10 years old, but students have much more sophisticated devices and equipment at home – even in poor homes. Their school experience of using old kit to type essays jars with that. And you do see schools trying to innovate with technology – iPads and such in particular… They buy them, they invest thousands… But they don't always use them because the boring, crucial wifi and infrastructure isn't there. It's boring and expensive but it's imperative. You need all of that in order to use these shiny things…

And with that… Helen guides us to gogopp.com and the web app to ask us why she has shown a monkey with its hand in a jar with a coin… We all respond… The adage is that if you wanted to catch a monkey you put an orange or some nuts in a jar; the monkey grabs it and won't let go, so a hunter can just capture it. I deal with a lot of monkeys… A lot of what I work towards is convincing them to let go of that coin, or nut, or orange, or Windows 7, to move on and change and learn.

Another question for us… What does a shot of baseball players in a field have to do with edtech… Well yes, “if you build it, they will come”. A lot of people believe this is how you deal with edtech… Now although a scheme funding technology for schools in England has come to an end, a lot of Free Schools now have this idea. That if you build something, magic will happen…

BTW this gogopp tool is a nice fun free tool – great for small groups…

So, I do a lot of "change management consultation" – it's not a great phrase but a lot of what it's about is pretty straightforward. Many schools don't know what they've got – we audit the kit, the software, the skills. We work on a strategy, then a plan, then a budget. And then we look at changes that make sense… Small-scale pathfinder projects; student-led work with students in positions of responsibility; a lot of TeachMeet sessions – a forum of 45 minutes or so in which staff who've worked on pathfinder projects have two, or at most five, minutes to share their experience – a way to drop golden nuggets into the day (much more effective than inset days!); and I do a lot of work with departmental heads to ensure software and hardware align with needs.

When there is the right strategy and the right pedagogical approach, brilliant things can happen. For instance…

Abdul Chohan, now principal of Bolton Academy, transformed his school with iPads – giving them out and asking them what to do with them. He works with Apple now…

David Mitchell (no, not that one), a Deputy Headteacher in the Northwest, started a project called QuadBlogging for his Year 6 students (Primary 7 in Scotland), whereby four organisations are involved – two schools and two other institutions, like MIT, like the Government – big organisations. Students get real-life, real-world feedback on their writing. They saw significant increases in their writing quality. That is a great benefit of educational technology – your audience can be as big or small as you want. It's a nice, safe, contained forum for children's writing.

Simon Blower had an idea called "Lend me your writing" and crowdfunded Pobble – a site where teachers can share examples of student work.

So those are three examples of pedagogically-driven technology projects and changes.

And now we are going to enter Kahoot.it…

The first question is about a free VLE – Edmodo… It's free except for analytics, which is a paid-for option.

Next up… This is a free behaviour management tool. The "Class Story" function has recently been added… That's Class Dojo.

Next… A wealth of free online courses, primarily aimed at science, maths and computing… Khan Academy. A really famous resource now. It came about when Salman Khan was asked for maths homework help… He made YouTube videos… Very popular and now a global company with a real range of videos from teachers. No adverts. Again free…

And next… an adaptive learning platform with origins in the "School of One" in NYC. That's Knewton. School of One is an interesting school which has done away with traditional one-to-many classroom systems… They use Knewton, which suggests the next class, module, task, etc. This is an "Intelligent Tutoring System", which I am sceptical of, but there is a lot of interest from publishers etc. It is all around personalised learning… But that is all data driven… I have issues with thinking of kids as data-producing units.

Next question… Office 365 tool allows for the creation of individual and class digital notebooks – OneNote. It’s a killer app that Microsoft invest in a lot.

And Patrick is our Kahoot winner (I'm second!). Now, I use Kahoot in training sessions… It's fun once… unless everyone uses it through the day. It's important that students don't just experience the same thing again and again, that you work as a learning community to make sure that you are using tools in a way that stays interesting, that varies, etc.

So, what’s happening now in schools?

  • Mobility: BYOD, contribution, cross-platform agility
  • Office365/Google/iCloud
  • VLE/LMS – PLE/PLN – for staff and students
  • Data and tracking

So with mobility we see a growth in Bring Your Own Device… That brings a whole range of issues around esafety and infrastructure. It's not just students' own devices, but also increasingly a kind of hire-purchase contribution scheme for students and parents. That's a financial pressure – schools are financially pressured and this is just a practical issue. One issue that repeatedly comes up is cross-platform agility – phones, tablets, laptops. And there is discussion of bringing back in keyboards, mice, and traditional set-ups… Keyboard skills are being seen as important again in the primary sector.

The benefit of mobile devices is collaboration, the idea of the main screen allowing everyone to be part of the classroom… You don't need expensive software – you can use e.g. cheap Reflector mirroring software. Apps… Some are brilliant, some are dreadful… Management of apps and mobile device management has become a huge industry… Working with technicians to support getting apps onto devices… How do you do volume purchasing? And a lot of apps are one- or two-hit propositions… You don't want the same app every week for one task… You need to weigh up how useful an app is against the cost of getting it in place and the staff time involved.

We also have the issue of the student journey. Tools like Socrative and Nearpod let you push information to devices. But we are going to look at/try Plickers now… What that does is use one device – the teacher's mobile app – and printed codes I can make up (we've all been given one today) that can be laminated and handed out at the beginning of the year… We hold up a card with the appropriate answer at the top… And the teacher's device is walked around to scan the room for the answers – a nice job for a student to do… So you can then see the responses… And the answer… I can see who got it wrong, and who got it right. I can see the graph of that…

We have a few easy questions to test this: 2+2 = (pick your answer); and how did you get here today? (mostly on foot!).

The idea is that it's a way to get higher-order questioning into a session, otherwise you just hear from the kids who put their hands up all the time. So that's Plickers… Yes, they all have silly names. I used to live in Iceland, where a committee meets to agree new names – the word for computer means "witchcraft machine".
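To make the Plickers mechanism described above concrete at the data level, here is a toy sketch (the real app does the computer-vision scanning; the rotation mapping, card numbers, names and answers below are all made up): each printed card encodes A/B/C/D by which edge is at the top when held up, so a scan reduces to (card, rotation) pairs that can be tallied into a class response chart.

```python
# Toy illustration of the Plickers idea (the real app does the computer-vision
# scanning): each printed card encodes A/B/C/D by which edge is at the top when
# held up, so a scan reduces to (card_number, rotation) pairs.
# The mapping, card numbers, names and answers below are all made up.
from collections import Counter

ROTATION_TO_ANSWER = {0: "A", 90: "B", 180: "C", 270: "D"}  # hypothetical mapping

roster = {1: "Aisha", 2: "Ben", 3: "Cara", 4: "Dan", 5: "Eve"}
scans = [(1, 0), (2, 0), (3, 90), (4, 0), (5, 270)]  # pretend scan results
correct_answer = "A"

answers = {roster[card]: ROTATION_TO_ANSWER[rotation] for card, rotation in scans}
print(Counter(answers.values()))  # class-level chart: Counter({'A': 3, 'B': 1, 'D': 1})
print([name for name, answer in answers.items() if answer != correct_answer])  # who to follow up with
```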

So, thinking about Office365/Google/iCloud… We are seeing a video about a school where pupils help promote, manage, code for and support the use of Office365 in the school. And how that's a way to get people into technology. These are students at Wyndham High in Norfolk – all real students. That school has adopted Office365. Both Office365 and Google offer educational environments. One of the reasons that schools err towards Office365 is the five free copies that students get – which covers the several locations and machines they may use at home.

OneNote is great – you can drag and drop documents, you can annotate… I use it with readings, with feedback from tutors. Why it's useful for students is the facility to create Class Notebooks, where you add classes and add notebooks. You can set up a content library that students can access and use. You can also view all of the students' notebooks in real time. In the schools I work in we no longer have planners; instead we have a shared class notebook – then colleagues can see and understand planning.

Other new functionality is "Classroom", where you can assign classes and assignments… It's a new thing that brings some VLE functionality, but it is limited in that grades must be 0-100. And you can set up forms as well – again in preview right now but coming. Feedback goes into a CSV file in Excel.

The other thing that is new is Planner – a project planning tool to assign tasks, share documents, set up groups.

So, Office 365 is certainly the tool most secondary schools I work with use.

The other thing that is happening in schools right now is the increasing use of data dashboards and tracking tools – especially in secondary schools – and that is concerning as it's fairly uncritical. There is a tool called Office Mix which lets you create tracked content in PowerPoint… Not sure if you have access here, but you can use it at home.

Other data tools in schools include Power BI… Schools are using these for e.g. attainment outcomes. There is a free schools version of this tool (it used to be too expensive). My concern is that it is not looking at what has impact in terms of teaching and learning. It's focused on the summative, not the actual teaching and learning, not on students reporting back to teachers on their own learning. Hattie's work on self-reported grades tells us that students can set expectations and goals, and understand rubrics for self-assessment. There is rich and interesting work to be done on using data in rich and meaningful ways.

In terms of what's coming… This was supposed to be by 2025, then 2020, maybe sooner… The Education Technology Action Group suggests online learning as an entitlement, better measures of performance, new and emerging teaching and learning, wearables, etc.

Emerging EdTech includes Augmented Reality. It's a big thing I do… It's easy but it excites students… It's a digital overlay on reality… So my two-year-old goddaughter has a colouring-in book that is augmented reality – you can then see a 3D virtual dinosaur coloured as per your image. And she asked her dad to send me a picture of her with a dinosaur. Other fun stuff… But where is the learning outcome here? Well, there is a tool called Aurasma… Another free tool… You create a new Aura trigger image – it can be anything – and you choose your overlay… So I said I wanted the words on the paper converted into French. It's dead easy! We get small kids into this and can put loads of hidden AR content around the classroom; you can do it on t-shirts – to show the inner workings of the body for instance. We've had Year 11s bring Year 7 textbooks to life for them – learning at both ends of the spectrum.

The last thing I want to talk about is micro:bit. This is about coding. In England and Wales coding is now a compulsory part of the curriculum. All students are being issued a micro:bit and students are now doing all sorts of creative things. The Young Rewired State project runs every summer, and participants come to London to have their code assessed – the winners were 5 and 6 year olds. So they will come to you with knowledge of coding – but they aren't digital natives, no matter what anyone tells you!

Q&A

Q1 – Me) I wanted to ask about equality of access… How do you ensure students have the devices or internet access at home that they need to participate in these activities and tools – like the Office365 usage at home, for instance? In the RSE Digital Participation Inquiry we found that the reality of internet connectivity in homes really didn't match up to what students will self-report about their own access to technology or internet connections; there is such baggage associated with not having internet access or the latest technologies and tools… So I was wondering how you deal with that, or if you have any comments on that.

A1) With the contribution schemes that schools have for devices… Parents contribute what they can, the school covers the rest… So that can be 50p or £1 per month, it doesn't need to be a lot. Also pupil premium money can be used for this. But, yes, parental engagement is important… Many students have 3G access rather than fixed internet, for instance, and that has cost implications… Some can use dongles supplied by schools, but just supporting students like this can cost £15k/yr for a small to medium sized cohort. There is some interesting stuff taking place in new build schools though… So for instance Gaia in Wales are a technology company doing a lot of the new build hardware/software set up… In many of those schools there is community wifi access… a way around that issue of connectivity… But that's a hard thing to solve.

Q1 – Me) There was a proposal some years ago from Gordon Brown’s government, for all school aged children to have government supported internet access at home but that has long since been dropped.

Q2) My fear with technologies is that by the time I learn one, it's already out of date. And then there are learners who are not motivated to engage with tools they haven't used before… I enjoyed these tools, they're natty…

A2) Those are my "sweet shop" tools… Actually Office365/Google or things like Moodle are the bread and butter tools. These are fun one-off apps… They are pick-up-and-go stuff… but it's getting the big tools working well that matters. Ignore the sweets if you need or want to… The big stuff matters.

And with that Velda is closing with great thanks to our speakers today, to colleagues in IAD, and to Daphne Loads and colleagues. Please do share your feedback and ideas, especially for the next forum!

May 122016
 
Participants networking over lunch at eLearning@ed

Last week I was delighted to be part of the team organising the annual eLearning@ed Conference 2016. The event is one of multiple events and activities run by and for the eLearning@ed Forum, a community of learning technologists, academics, and those working with learning technologies across the University of Edinburgh. I have been Convener of the group since last summer so this was my first conference in this role – usually I’m along as a punter. So, this liveblog is a little later than usual as I was rather busy on the day…

Before going into my notes I do also want to say a huge thank you to all who spoke at the event, all who attended, and an extra special thank you to the eLearning@ed Committee and Vlad, our support at IAD. I was really pleased with how the event went – and feedback has been good – and that is a testament to the wonderful community I have the privilege of working with all year round here at Edinburgh.

Note: Although I have had a chance to edit these notes they were taken live so just let me know if you spot any errors and I will be very happy to make any corrections. 

The day opened with a brief introduction from me. Obviously I didn’t blog this but it was a mixture of practical information, enthusiasm for our programme, and an introduction to our first speaker, Melissa Highton:

Connecting ISG projects for learning and teaching – Melissa Highton (@honeybhighton), Director: Learning, Teaching and Web (LTW), Information Services.

Today is about making connections. And I wanted to make some connections on work that we have been doing.

I was here last year and the year before, sharing updates on what we've been doing. It's been a very good year for LTW. It has been a very busy year for open, inspired by some of the student work seen last year. We have open.ed launched, the new open educational resources policies, we have had the OER conference, we have open media, and we have had some very bold moves by the library, including a move to make digital images from the library open by default. That offers opportunities for others, and for us.

Extract from the Online Learning Consortium’s 2016 Infographic (image copyright OLC 2016)

There is evidence – from the US (referencing the EdTech: a Catalyst for Success section of the Online Learning Consortium 2016 Infographic) – with students reporting increased engagement with course materials, with professors, with fellow students. And there is also a strong interest in digital video. MediaHopper will be fully launched very soon, and we are taking a case to the Knowledge Strategy Committee and Learning and Teaching Committee to invest further in lecture capture, which is heavily used and demanded. And we need to look at how we can use that content, how it is being used. One of the things that struck me at LAK was the amount of research being done on the use of audio visual material, looking at how students learn from video, how videos are used, how they are viewed. Analytics around effective video for learning is quite interesting – and we'll be able to do much more with that when we have these better systems in place. And I've included an image of Grace Hopper, who we named MediaHopper after.

Melissa Highton speaking at eLearning@ed 2016

Talking of Learning Analytics I’m a great fan of the idea that if a thing is worth doing, it’s worth doing a 2×2 matrix. So this is the Learning Analytics Map of Activities, Research and Roll-out (LAMARR – a great mix of Hollywood screen icon, and the inventor of wifi!), and there are a whole range of activities taking place around the university in this area at the moment, and a huge amount of work in the wider sector.

We are also the only university in the UK with a Wikimedian in Residence. Wikipedia is a place entirely curated by those with an interest in the world, and there is a real digital literacy skill for our students, and for us, in understanding how information is created and contested online, how it becomes part of the internet – that's something worth thinking about for our students. I have a picture here of Sophia Jex-Blake; she was part of the inspiration for our first Wikipedia Edit-a-thon, on women in science. Our Wikimedian is with us for just one year, so do make use of him. He's already worked on lots of events and he's very busy, but do get in touch if you want to talk to him about a possible event, about the work being done, or about work that you want to do.

Here for longer than one year we have Lynda.com, an online collection of training videos which the University has signed up to for three years, and which will be available through your University login. Do go and explore it now – you will have Edinburgh University access from September. The content there can be curated into playlists, used via Learn, etc.

So, Wikipedia for a year, Lynda.com for three years, MediaHopper here now, and open increasingly here.

Highlights from recent conferences held in Edinburgh, chaired by Marshall Dozier

Marshall: Conferences are such an opportunity to make a connection between each other, with the wider community, and we hope to fold those three big conferences that have been taking place back into our own practice.

OER16 Open Culture Conference – Lorna Campbell (@lornamcampbell), Open Education Resources Liaison for Open Scotland, LTW.

This was the 7th OER conference, and the first one to take place in Edinburgh. It was chaired by myself and Melissa Highton. Themes included the strategic advantage of open, creating a culture of openness and the reputational challenges of "open-washing"; converging and competing cultures of open knowledge, open source, open content, open practice, open data and open access; hacking, making and sharing; openness and public engagement; and innovative practices in cultural heritage contexts, which I was particularly pleased to see us get good engagement from.

There was originally a sense that OER would die out, but actually it is just getting bigger and bigger. This year's OER conference was the biggest yet, and that's because of support and investment from those who, like the University of Edinburgh, see real value in openness. We had participants from across the world – 29 countries – despite it being essentially a UK-based conference. And we had around a 50/50 gender split – no all-male panels here. There is no external funding around open education right now, so we had to charge, but we did ensure free and open online participation for all – keynotes live-streamed to the ALT channel, Radio #EDUtalk @ OER16 with live streaming of keynotes and interviews with participants and speakers from the conference – those recordings are hugely recommended; and we also had a busy and active Twitter channel. We had a strong Wikimedia presence at OER16, with editing training, demonstrations, and an ask-a-Wikimedian drop-in clinic, and people found real value in that.

Lorna Campbell speaking about OER16 at eLearning@ed 2016

We also had a wide range of keynotes and I’m just going to give a flavour of these. Our first was Catherine Cronin, National University of Ireland, Galway, who explored different definitions of openness, looking at issues of context and who may be excluded. We all negotiate risk when we are sharing, but negotiating that is important for hope, equality, and justice.

In the year of the 400th anniversary of Shakespeare's death we were delighted to have Shakespeare scholar Emma Smith, who had a fantastic title: Free Willy: Shakespeare & OER. In her talk she suggested teaching is an open practice now, that "you have to get over yourself and let people see what you are doing".

John Scally’s keynote talked about the National Library of Scotland’s bold open policy. The NLS’ road to openness has been tricky, with tensions around preservation and access. John argued that the library has to move towards equality, and that open was a big part of that.

Edupunk Jim Groom of Reclaim Hosting has quite a reputation in the sector, and he was giving his very first keynote in the UK. Jim turned our attention from open shared resources towards open tech infrastructure, working at individual scale but making use of cloud, networked resources, which he sees as central to sustainable OER practice.

The final keynote was from Melissa Highton, with her talk Open with Care. She outlined the vision and policy of UoE. One idea introduced by Melissa was “technical and copyright debt”, the costs of not doing licensing, etc. correctly in the first place. IT Directors and CIOs need to be persuaded of the need for investment in OER.

It is difficult to summarise such a diverse conference, but there is growing awareness that openness is a key aspect that underpins good practice. I wanted to quote Stuart Allen's blog. Stuart is a student on the MSc in Digital Education. He did a wonderful summary of the conference.

Next year's conference has the theme of Open and Politics and will be co-chaired by Josie Fraser and Alek Tarkowski, chair of CC in Poland (our first international co-chair).

Learning@Scale 2016 – Amy Woodgate, Project Manager – Distance Education Initiative (DEI) & MOOCs, LTW.

I am coming at this from a different perspective here, as participant rather than organiser. This conference is about the intersection between informatics approaches and education. And I was interested in the degree to which that was informed by informatics, and that really seems to flag a need to interrogate what we do in terms of learning analytics, educational approach. So my presentation is kind of a proposal…

We have understood pedagogy for hundreds of years, we have been doing a huge amount of work on digital pedagogy, and the MSc in Digital Education is leading in this area. We have environments for learning, and we have environments at scale, including MOOCs, which were very evident at L@S. At University of Edinburgh we have lots of digitally based learning environments: ODL; MOOCS; and the emergence of UG credit-bearing online courses. But there is much more opportunity to connect these things for research and application – bringing pedagogy and environments at scale.

The final keynote at L@S was from Ken Koedinger, at Carnegie Mellon University. He suggested that every learning space should be a learning lab. We shouldn't just apply theory, but be building, doing, providing an evidence base, and thinking as part of our practice. He talked about collecting data, testing that data, understanding how to use data for continuous improvement. We are a research-led institution; we have amazing opportunities to blend those things. But perhaps we haven't yet fully embraced that Design, Deploy, Data, Repeat model. And my hope is that we can do something more together. We've done MOOCs for four years now, and there are so many opportunities to use the data, to get messy in the space… We haven't been doing that, but then no-one has been. What was hard about the conference for me was that lots of it was about descriptive stats – we can see that people have clicked a video, but that isn't connected back to anything else. And what was interesting to me was the articulation into physical environments here – picking up your pen many times is not meaningful. So many Learning Analytics data sources are what we can capture, not necessarily what is meaningful.

The keynote had us answer some questions, about knowing when students are learning. You can see when people view or like a video, but there is a very low correlation between liking and learning… And for me that was the most important point of the session. That was really the huge gap, more proactive research, engagement, for meaningful measures of learning – not just what we can measure.

Mike Sharples, OU, was also a keynote at L@S, and he talked about learning at scale, how we can bring pedagogy into those spaces, and the intersection of diversity, opportunity and availability. One of the things FutureLearn is exploring is the notion of citizen inquiry – people bring their own research initiatives (as students) and, almost like Kickstarter, engage the community in those projects. It will be interesting to see what happens, but there is an interesting question of how we utilise the masses, the scale of these spaces. We need you as the community working with us to start questioning how we can get more out of these spaces. Mike's key idea was that we have to rethink our notion of effective pedagogy, and ensure that it is sustainable.

Working backwards then, there were many, many papers submitted, not all of which were accepted, but you can view the videos of keynotes on Media Hopper, and there were posters for those not able to present as well. The winner of the best paper award was "A Civic Mission of MOOCs" – which showed that there was a true diversity of people engaged in political MOOCs, that they weren't all trolly, and that there was a sense of "respectful disagreement". There are a lot of papers we can look at, but none of these findings can be applied without critical reflection – still, there is much that can be done there.

Lorna's comments about gender balance were interesting. At L@S there were great female speakers, but they were only 15% of the whole. That reflected the computer science angle and bias of the event, and it felt like there was a need for the humanities to be there – I think that's an aspiration for the next one, to submit more papers and get those voices as part of the event.

Although perhaps a slightly messy summary of the event, I wanted to leave you with the idea that we should be using what we do here at Edinburgh, with what we have available here, to put out a really exciting diverse range of work for presenting at next year’s third L@S!

So, what do people think about that idea of hacking up our learning spaces more? Thinking more about integrating data analysis etc, and having more of a community of practice around online pedagogies for learning@scale.

Amy Woodgate speaking about Learning@Scale at elearning@ed 2016

Q&A

Q1) I think that issue of measuring what we can measure is a real issue right now. My question here is about adapting the approach for international students – they come in and pay huge fees, and there are employers pushing for MOOCs instead… But then we still want that income… So how does that all work together?

A1) I don’t think learning at scale is the only way to do teaching and learning, but it is an important resource, and offers new and interesting ways of learning. I don’t feel that it would compromise that issue of international students. International students are our students, we are an international community on campus, embracing that diversity is important. It’s not about getting rid of the teacher… There is so much you can do with pedagogies online that are so exciting, so immersive… And there is more we can get out of this in the future. I find it quite awkward to address your point though… MOOCs are an experimentation space I think, for bringing back into core. That works for some things, and some types of content really work at scale – adaptive learning processes for instance – lots of work up front for students then to navigate through. But what do others think about using MOOCs on campus…

Comment, Tim) I think for me we can measure things, but that idea of how those actions actually relate to the things that are not measured… No matter how good your VLE, people will do things beyond it. And we have to figure out how we connect and understand how they connect.

Q2, Ruby) Thank you very much for that. I was just a little bit worried… I know we have to move away from simplistic descriptions of "this measure means this thing". But on one slide there was an implication that learning can be measured through testing. And I don't think that is necessarily true or helpful. Liking CAN be learning. And there is a lot of complexity around test scores.

A2) Yes, that chart was showing that viewing a particular video hadn't resulted in better learning uptake at the end of the course… But absolutely we do need to look at these things carefully…

Q3) At the recent BlackBoard conference there was the discussion of credit bearing MOOCs, is there any plan to do that now?

A3) This is something we can do, of course – we could take a MOOC into a credit-bearing UG course, where the MOOC is about content. What becomes quite exciting is moving out and, say, the kind of thing the MSc DE did with eLearning and Digital Cultures – making connections between the credit-bearing module and the MOOC, in interesting and enriching ways. The future isn't pushing students over to the MOOC, but taking learning from one space to another, and seeing how that can blend. There are some interesting conversations around credit alliances, like a virtual Erasmus, around credit like summer school credit. But then we fall back on universities wanting to do exams, and we have a strong track record of online MScs not relying on written exams, but not all institutions are as progressive right now.

Q4, Nigel) I'm in Informatics, and am involved in getting an introductory machine learning course online, and one of the challenges I'm facing is understanding how students are engaging, and how much. I can ask them what they liked… But it doesn't tell me much. That's one issue. But connecting up what's known about digital learning and how you evaluate learning in the VLEs is good… The other thing is that there is a lot of data I'd like to get out of the VLE which, to my knowledge, we can't access… And we as data scientists don't have access.

Comment, Anne-Marie Scott) We are still learning how to do that best but we do collect data and we are keen to see what we can do. Dragan will talk more about Learning Analytics but there is also a UoE group that you could get involved with.

Q5, Paul) That was fascinating, and I wish I'd been able to make it along… I was a bit puzzled about how we can use this stuff… It seems to me that we imagine almost a single student body out there… In any programme we have enthusiastic students desperate to learn, no matter what; in the middle we have the quite interested, who may need more to stay engaged; and then there are people just there for the certificate, who just want it easy. If we imagine we have to hit all of these audiences with one approach it won't work. We are keen to have those super keen students. In medicine we have patient groups with no medical or educational background, so motivated to learn about their own conditions… But then in other courses we see students who just want the certificate… I think that enormous spectrum gives us enormous challenges.

A5) There is an interesting pilot in GeoSciences on Adaptive Learning, to try to address both the interested and the struggling students. Maths and Physics do a lot with additional resources from external sites – e.g. MOOCs – in a curated list from academics that augments the core, for students who just want the basics and for those that want to learn more… There was an interesting paper on cheating in MOOCs, which did analysis on multiple accounts and IP addresses, and toggling between accounts… There would be a "harvester" and a "master" account; looking at clusters, master accounts showed near-perfect learning, harvester accounts were poorer, and then there were the ones in the middle… The middle is the key part… That's where the energy should be in a MOOC.
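As a rough illustration of that kind of analysis (not the actual method of the paper mentioned above – the dataframe, column names and thresholds below are all hypothetical), the basic move is to group accounts by shared IP address and flag pairs where one account scores near-perfectly and the other poorly:

```python
# Illustrative sketch only, not the method of the paper discussed above: flag
# pairs of accounts sharing an IP address where one account has a near-perfect
# score (a possible "master") and the other a low score (a possible "harvester").
# The dataframe, column names and thresholds are all hypothetical.
from itertools import combinations

import pandas as pd

scores = pd.DataFrame({
    "account_id": ["a1", "a2", "b1", "b2", "c1"],
    "ip_address": ["10.0.0.1", "10.0.0.1", "10.0.0.2", "10.0.0.2", "10.0.0.3"],
    "final_score": [0.98, 0.31, 0.55, 0.60, 0.90],
})

suspicious_pairs = []
for ip, group in scores.groupby("ip_address"):
    # Compare every pair of accounts that used the same IP address.
    for (_, row_a), (_, row_b) in combinations(group.iterrows(), 2):
        high = max(row_a, row_b, key=lambda row: row["final_score"])
        low = min(row_a, row_b, key=lambda row: row["final_score"])
        if high["final_score"] >= 0.95 and low["final_score"] <= 0.40:
            suspicious_pairs.append((ip, high["account_id"], low["account_id"]))

print(suspicious_pairs)  # [('10.0.0.1', 'a1', 'a2')]
```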

Q6) I was intrigued by big data asset work, and getting more involved… What are tensions with making data openly available… Is it competition with other universities…

A6) That's part of a project that Dragan and Jeff Haywood have been leading on Learning Analytics data policy… MOOCs include personally identifiable data; you can strip it, but that requires work. The University has a desire to share data, but we are not there yet in terms of an easy-to-access framework for engaging with the data. It's part of the bigger Learning Analytics process.

LAK’16 Learning Analytics & Knowledge Conference – Professor Dragan Gasevic (@dgasevic), Chair in Learning Analytics and Informatics, Moray House School of Education & School of Informatics

The Learning Analytics and Knowledge Conference, LAK'16, took place in Edinburgh last week. It was in its sixth edition. It started in Canada as a response to several groups of people looking at data collected in different types of digital environments, and also the possibility of merging data from physical spaces, instruments, etc. It attracted a diverse range of people from educational research, machine learning, psychology, sociology, policy making, etc. In terms of organisation we had wonderful support from the wonderful Grace Lynch and two of my PhD students, who did a huge amount. I also had some wonderful support from Sian Bayne and Jeff Haywood in getting this set up! They helped connect us to others, within the University and throughout the conference. But there are many others I'd like to thank, including Amy and her team, who streamed all four parallel sessions throughout the conference.

In terms of programme, the conference has a research stream and a practitioner stream. Our chairs helped ensure we had a great programme – we have three chairs for each stream. They helped us ensure we had a good diversity of papers, audiences, and vendors. We have those streams to attract papers, but we deliberately mix them – practice and research sessions are combined and share slots… And we did break all records this time. This was only the second conference outside North America, and most of our participants are based there, but we had almost double the submissions this year. These issues are increasingly important, and the conference is an opportunity to critically reflect on them. Many of our papers were very high in quality, and we had a great set of workshops proposed – selecting those was a big challenge and only 50% made it in… So, for non computer scientists the acceptance ratio maybe isn't a big deal… But for computer scientists it is a crucial thing. Here we accepted about 30% of papers… Short papers were particularly competitive – this is because the field is maturing, and people want to see more mature work.

Dragan Gasevic speaking about LAK'16 at eLearning@ed 2016.

We had participants from 35 countries, across our 470 participants – 140 from the US, 120 from the UK, and then 40 from Australia. Per capita, Australia was very well represented. But one thing that is a little disappointing is that other European countries only had 3 or 4 people along; that tells us something about institutional adoption of learning analytics, and research there. There is impressive learning analytics work taking place in China right now, but little from Africa. In South America there is one hub of activity that is very good.

Workshops-wise, the kinds of topics addressed included: learning design and feedback at scale; learning analytics for workplace and professional learning – definitely a theme, with lots of data being collected but often private and business-confidential, which is a tension (the EU sees analytics as public data); learning analytics across physical and digital spaces – using broader data and avoiding the "streetlight effect"; temporal learning analytics – trying to see how learning processes unfold, since students are not static black boxes and change decisions, study strategies and approaches based on feedback etc.; an interesting workshop on IMS Caliper; a huge theme and workshop on ethical and privacy issues; another on learning analytics for learners; a focus on video, and on smart environments; and opportunities for educational researchers to engage with data – from data mining skills sessions to open conversations with informaticians. We also had a "Failathon" – to try ideas, talk about failed ideas.

We also had a hackathon with Jisc/Apereo… It issued an Edinburgh Statement for learning analytics interoperability. Do take a look and add your name, to address the critical points…

I just want to highlight a few keynotes: Professor Mireille Hildebrandt talked about the law and learning as a machine, around privacy and data, bringing in issues including the right to be forgotten. The other keynote I wanted to mention was Professor Paul A. Kirschner on learning analytics and policy – a great talk. And the final keynote was from Robert Mislevy, who talked about the psychometric side of learning analytics.

Finally two more highlights, we picked two papers out as the best:

  • Privacy and analytics – it’s a DELICATE issue. A checklist for trusted learning analytics – Hendrik Drachsler and Wolfgang Greller.
  • When should we stop? Towards Universal approach – details of speakers TBC

More information is on the website. And we have more meetings coming up – we had meetings around the conference, and there are more to come: a meeting with QAA on Monday, a session with Blackboard on Tuesday, and a public panel with George Siemens & Mark Milliron the same day.

Q&A

Q1) Higher Education is teaching, learning and research… This is all Learning Analytics… So do we have Teaching Analytics?

A1) Great point… Learning analytics is about learning; we shouldn't be distracted by toys. We have to think about our methods, our teaching knowledge and research. Learning analytics with pretty charts isn't necessarily helpful – sometimes even detrimental – to learners. We have to look at instructional designs, to support our instructors, to use learning analytics to understand the cues we get in physical environments. One size does not fit all!

Marshall) I set a challenge for next year – apply learning analytics to the conference itself!

Student-centred learning session, chaired by Ruby Rennie

EUSA: Using eLearning Tools to Support and Engage Record Numbers of Reps – Tanya Lubicz-Nawrocka (@TanyaLubiczNaw), Academic Engagement Coordinator, EUSA; Rachel Pratt, Academic Representation Assistant, EUSA; Charline Foch (@Woody_sol), EUSA, and Sophie McCallum, Academic Representation Assistant, EUSA.

Tanya opened the presentation with an introduction to what EUSA: the Edinburgh University Students Association is and does, emphasizing the independence of EUSA and its role in supporting students, and supporting student representatives… 

Rachel: We support around 2,238 student reps across campus per year, growing every year (actually 1,592 individuals – some are responsible for several courses), so we have a lot of people to support.

Sophie: Online training is a big deal, so we developed an online training portal within Learn. That allows us to support students on any campus, and our online learners. Students weren’t always sure about what was involved in the role, and so this course is about helping them to understand what their role is, how to engage etc. And in order to capture what they’ve learned we’ve been using Open Badges, for which over to Tanya…

Tanya Lubicz-Nawrocka speaking about EUSA’s use of Learn and Open Badges at elearning@ed 2016

Tanya: I actually heard about open badges at this very conference a couple of years ago. These are flexible, free, digital accreditation. They are full of information (metadata) and can be shared and used elsewhere in the online world. These badges represent skills in key areas: Student Development badges (purple), Research and Communication badges (pink) and ? (yellow).
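To give a sense of what "full of information" means in practice, here is a minimal sketch of the metadata carried by an Open Badges (1.x-style) assertion – who earned it, which badge it is, the evidence behind it, and how to verify it. This is illustrative only, not EUSA's actual badge definition, and all identifiers and URLs are placeholders.

```python
# A minimal sketch of the metadata embedded in an Open Badges (1.x-style)
# assertion. Illustrative only, not EUSA's actual badge definition; all
# identifiers and URLs are placeholders.
badge_assertion = {
    "uid": "rep-online-training-2016-0042",
    "recipient": {"type": "email", "hashed": False, "identity": "student@example.ac.uk"},
    # The BadgeClass URL describes the badge itself: name, description, criteria, issuer.
    "badge": "https://example.org/badges/online-rep-training.json",
    # Evidence links back to the work behind the badge, e.g. a reflective blog post.
    "evidence": "https://example.org/reps/blogs/reflection-on-first-semester",
    "issuedOn": "2016-05-12",
    "verify": {"type": "hosted", "url": "https://example.org/assertions/rep-online-training-2016-0042.json"},
}
```

It is this embedded metadata that lets a badge displayed elsewhere – on a LinkedIn profile, say – be traced back to its issuer, criteria and evidence.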

Tanya shows the EUSA Open Badges at elearning@ed 2016

There have been huge benefits of the badges. There are benefits for students in understanding all aspects of the role, encouraging them to reflect on and document their work and success – and those helped us share their success, to understand school level roles, and to understand what skills they are developing. And we are always looking for new ways to accredit and recognise the work of our student reps, who are all volunteers. It was a great way to recognise work in a digital way that can be used on LinkedIn profiles.

There were several ways to gain badges – many earned an open badge for online training (over 1000 earned); badges were earned for intermediate training – in person (113 earned); and badges were also earned by blogging about their successes and development (168 earned).

And the badges had a qualitative impact around their role and change management, better understanding their skills and relationships with their colleagues.

Sophie McCallum speaking about EUSA’s work on training and Open Badges at elearning@ed 2016

Rachel: Looking at the learning points from this. In terms of using (Blackboard) Learn for online functionality… For all our modules to work the best they can, 500 users is the most we could have. We have two Learn pages – one for CSE (College of Science & Engineering), one for CHSS (College of Humanities and Social Sciences); they are working but we might have to split them further for best functionality. We also had challenges with bulk uploading UUNs (the University personal identifiers) – one wrong UUN in several hundred loses the whole upload. Information Services helped us with that early on! We also found that surveys in Learn are anonymous – helpful for ungraded reflection really.
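That "one wrong UUN sinks the whole upload" problem is the kind of thing a quick pre-flight check can catch. A minimal sketch, assuming the common student pattern of "s" followed by seven digits (the real UUN rules, and the format Learn expects for uploads, may well differ):

```python
# A minimal pre-flight check before a bulk enrolment upload, so one bad UUN
# doesn't sink the whole batch. Assumes the common student pattern of
# 's' followed by seven digits; the real UUN rules, and the format Learn
# expects, may well differ.
import re

UUN_PATTERN = re.compile(r"^s\d{7}$")

def split_valid_uuns(uuns):
    """Return (valid, invalid) lists so the invalid entries can be fixed first."""
    valid, invalid = [], []
    for uun in uuns:
        (valid if UUN_PATTERN.match(uun.strip().lower()) else invalid).append(uun)
    return valid, invalid

valid, invalid = split_valid_uuns(["s1234567", "S7654321 ", "s123456", "1234567s"])
print(valid)    # ['s1234567', 'S7654321 ']
print(invalid)  # ['s123456', '1234567s']
```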

In terms of Open Badges the tie to an email address is a challenge. If earned under a student email address, it’s hard to port over to a personal email address. Not sure how to resolve that but aware of it. And we also found loading of badges from “Backpack” to sites like LinkedIn was a bit tedious – we’ll support that more next year to make that easier. And there are still unknown issues to be resolved, part of the Mozilla Open Badges environment more broadly. There isn’t huge support online yet, but hopefully those issues will be addressed by the bigger community.

Using eLearning tools has helped us to upscale, train and support record numbers of reps in their roles; it has helped us have a strong positive quantitative and qualitative impact in engaging reps; and it has shown the importance of having essential material and training online, with optional in-person intermediate training and events. And it's definitely a system we'll continue to have and develop over the coming years.

Rachel Pratt talks about EUSA’s training approach, working with student representatives across the University, at elearning@ed 2016

Q&A

Q1) Have you had any new feedback from students about this new rep system… I was wondering if you have an idea of whether student data – as discussed earlier – is on the agenda for students?

A1 – Tanya) Students are very well aware of their data being collected and used, we are part of data analytics working groups across the university. It’s about how it is stored, shared, presented – especially the issue of how you present information when they are not doing well… Interested in those conversations about how data is used, but we are also working with reps, and things like the Smart Data Hacks to use data for new things – timetabling things for instance…

Q2) ?

A2) It’s a big deal to volunteer 50 hours of their time per year. They are keen to show that work to future employers etc.

Q3) As usual students and EUSA seem to be way ahead. How do you find out more about the badges?

A3) They can be clicked for more metadata – that’s embedded in it. Feedback has been great, and the blogposts have really helped them reflect on their work and share that.

SLICCs: Student-Led Individually Created Courses – Simon Riley, Senior Lecturer, MRC Centre for Reproductive Health

I’m Simon Riley, from the School of Medicine. I’m on secondment with the IAD and that’s why I’m on this. I’m coming to it from having worked on the student led component in medicine. You would think that medicine would be hugely confined by GMC requirements, but there is space there. But in Edinburgh there is about a year of the five year programme that is student led – spread across that time but very important.

Now, before speaking further I must acknowledge my colleague Gavin McCabe, Employability Consultant who has been so helpful in this process.

SLICCs are essentially a reflective framework, to explore skill acquisition, using an e-portfolio. We give students generic Learning Outcomes (LOs), which allow the students to make choices. Although it’s not clear how much students understand or engage with learning outcomes… We only get four or five per module. But those generic LOs allow students to immediately define their own aims and anticipated learning in their “proposal”. Students can take ownership of their own learning by choosing the LOs to address.

Simon Riley talks about SLICCs at eLearning@ed 2016

The other place that this can raise tensions is the idea of “academic rigor”. We are comfortable at assessing knowledge, and assessments that are knowledge based. And we assume they get those other graduate attributes by osmosis… I think we have to think carefully about how we look at that. Because the SLICCs are reflection on learning, I think there is real rigor there. But there has to be academic content – but it’s how they gain that knowledge. Tanya mentioned the Edinburgh Award – a reflective process that has some similarities but it is different as it is not for credit.

Throughout their learning experience students can make big mistakes, and recover from them. But if you get students to reflect early, and reflect on any issue that is raised, then they have the opportunity to learn from mistakes, to consider resilience, and to understand their own process for making and dealing with mistakes.

The other concern that I get is “oh, that’s a lot of work for our staff”… I was involved in Pilot 1 and I discovered that when giving feedback I was referring students back to the LOs they selected, their brief, the rubric, the key feedback was about solving the problem themselves… It’s relatively light touch and gives ownership.

So, here are three LOs… Around Analysis, Application, Evaluation. This set is Level 8. I think you could give those to any student, and ask them to do some learning, based on that, and reflect on it… And that’s across the University, across colleges… And building links between the colleges and schools, to these LOs.

So, where are we at? We had a pilot with a small number of students. It was for extra credit, totally optional. They could conduct their own learning, capture it in a portfolio, and reflect upon it. And there is a really tight link between the portfolio evidence and the reflective assignment. It was a fascinating set of different experiences… For instance one student went and counted river dolphins in the Amazon, but many were not as exotic… We didn't want to potentially exclude anyone or limit relevance. Any activity can have an academic element to it if structured and reflected upon appropriately. Students who did these at Level 8 in second year (the highest level Senate has approved) have come back to us… They liked the process – the tutor, the discipline, the framework – more than the credit.

So we have just over 100 students signed up this summer. But I'm excited about doing this in existing programmes and courses… What we've done is created SCQF LOs at Levels 7, 8, 10 and 11, with resources for reflection, a marking rubric, and board of studies documents. I am a course organiser – developing is great but often there isn't time to do it… So what I'm trying to do is create all that material and then just let others take and reuse it… Add a little context and run with it. But I want to hold onto the common LOs; as long as you do that we can work between each other… And those LOs include the three already shown, plus LO4 on "Talent" and LO5 on "Mindset", both of which specifically address graduate attributes. We've had graduate attributes for years but they aren't usually in our LOs, just implicit. In this case the LOs are the graduate attributes.

Simon Riley gets very animated talking about Learning Outcomes at eLearning@ed 2016

What might they look like? Embedded in the curriculum, online and on campus. Level 11 on-campus courses are very interested, seems to fit with what they are trying to do. Well suited to projects, to skill acquisition, and using a portfolio is key – evidencing learning is a really useful step in getting engagement. And there is such potential for interdisciplinary work – e.g. Living Lab, Edinburgh CityScope. Summer schools also very interested – a chance for a student to take a holistic view of their learning over that period. We spend a lot of money sending students out to things – study abroad, summer schools, bursaries… When they go we get little back on what they have done. I think we need to use something like this for that sort of experience, that captures what they have learnt and reflected on.

Q&A

Q1) That idea of students needing to be able to fail successfully really chimes for me… Failures can be very damaging… I thought that the idea of embracing failure, and that kind of start up culture too which values amazing failure… Should/could failure be one of your attributes… to be an amazing failure…

A1) I think that’s LO5 – turning it into a talent. But I think you have touched on an important aspect of our experience. Students are risk averse, they don’t want to fail… But as reflective learners we know that failure matters, that’s when we learn, and this framework can help us address this. I look to people like Paul McC… You have students learning in labs… You can set things up so they fail and have to solve problems… Then they have to work out how to get there, that helps…

Q1) In the sporting world you have the idea of being able to crash the kit, to be able to learn – learning how to crash safely is an early stage skills – in skateboarding, surfing etc.

Keynote, supported by the Centre for Research in Digital Education: In search of connected learning: Exploring the pedagogy of the open web – Dr Laura Gogia MD, PhD (@GoogleGuacamole), Research Fellow for the Division of Learning Innovation and Student Success at Virginia Commonwealth University, USA, chaired by Jen Ross

Jen: I am really delighted to welcome Laura Gogia to eLearning@ed – I heard her speak a year or so ago and I just felt that great thing where ideas just gel. Laura has just successfully defended her PhD. She is also @GoogleGuacamole on Twitter and organises a Twitter reading club. And her previous roles have been diverse, most interestingly she worked as an obstetrician.

Laura: Thank you so much for inviting me today. I have been watching Edinburgh all year long, it’s just such an exciting place. To have such big conferences this year, there is so much exciting digital education and digital pedagogy work going on, you guys are at the forefront.

So I’m going to talk about connected learning – a simpler title than originally in your programme – because that’s my PhD title… I tried to get every keyword in my PhD title!

Laura Gogia begins her keynote with great enthusiasm at eLearning@ed 2016

Let me show you an image of my daughter looking at a globe; that look on her face is her being totally absorbed. I look for that look to understand when she is engaged and interested. In the academic context we know that students who are motivated, who see real relevance and benefit in their own work, take more successful approaches. Drawing on Montessori and other progressive approaches, Mimi Ito and colleagues have developed a framework for connected learning that shapes those approaches for an online digital world.

Henry Jenkins and colleagues describe Digital Participatory Culture that is interactive, creative, about sharing/contributing and informal mentoring. So a connected teacher might design learning to particularly use those connections out to the wider world. George Siemens and colleagues talk about digital workflow, where we filter/aggregate; critique; remix; amplify – pushing our work out into a noisy world where we need to catch attention. Therefore connected learners and teachers find ways to embed these skills into learning and teaching experiences…

Now this all sounds good, but much of the literature is on K-12, so what does connected learning mean for Higher Education? In 2014 my institution embarked on an openly networked connected learning project, on learning experiences that draw from web structure and culture to (potentially) support connected learning and student agency, engagement and success. We are only two years in, so it’s not about guaranteed success, but I’ll be talking about some work and opportunities.

So, a quick overview of VCU: we are an interesting, dynamic institution, with the top-rated arts college, we have diverse students, a satellite campus in Qatar, and it’s an interesting place to be. And we also have VCU RamPages, an unlimited resource for creating webpages, which can be networked and extended within and beyond the University. About 16k websites have been created in the last year and a half. Many are student websites, blogs, and eportfolios. RamPages enable a range of experiences and expression but I’ll focus on one, Connected Courses.

Connected Courses are openly networked digital spaces; there are networked participatory activities – some in person – all taught by different teaching staff. And they generate authentic learning products, most of which are visible to the public. Students maintain their own blog sites – usually on RamPages, but they can use existing sites if they want. When they enrol on a new course they know they will be blogging and doing so publicly. They use a tag, which is then aggregated and combined with other students’ posts…

So, this is an example of a standard (WordPress) RamPages blog… Students select the blog template, the header images, etc. Then the student uses the appropriate tag for her course, which takes her post to the course “Bloggregate”… And this is where the magic happens – facilitating the sharing, the commenting, and, from a tutor’s point of view, the assessment.

Laura Gogia shows the VCU RamPages “Bloggregate” at eLearning@ed 2016

The openly networked structure supports student agency and discovery. Students retain control of their learning products during and after the course. And work from LaGuardia found students were more richly engaged in such networked environments. And students can be exposed to work and experience which they would not otherwise encounter – from different sites, from different institutions, from different levels, and from different courses.

Connected learning also facilitates networked participation, including collaboration and crowdsourcing, including social media. These tools support student agency – being interdependent and self-regulated. They may encourage digital fluency. And they support authentic learning products – making joint contributions that lead to enriched work.

A few years ago the UCI bike race was in Virginia and the University, in place of classes, offered a credited course that encouraged students to attend the bike race, collect evidence and share their reflections through the particular lens of their chosen course option. These jointly painted a rich picture, and were combined into authentic work products. Similarly, VCU Field Botany collaboratively generated a digital field guide (the only one) to the James River Park System. This contributes back to the community. Similarly, arts students are generating the RVArts site, on events, with students attending and reflecting, but also benefiting our community who share an interest in these traditionally decentralised events.

Now almost all connected courses involve blogging, which develops multimodal composition for digital fluency and multiple perspectives. Students include images and video, but some lecturers are embedding digital multimodal composition in their tasks. Inspired by DS106 at the University of Mary Washington, our #CuriousCoLab Creative Makes course asks students to process abstract course concepts and enhance their digital fluency. They make a concrete representation of the abstract concept – they put it in their blog with some explanation of why they have chosen to do it in their way. The students loved this… They spent more time, they thought more on these abstract ideas and concepts… They can struggle with those ideas… This course was fully online, with members of the public engaged too – and we saw that both students and these external participants did the creative make, whether or not they did the reflective blogging (optional for outside participants).

In terms of final projects, students are often asked to create a website. These assignments allow the students to work on topics that really talk to their heart… So one module can generate projects on multitasking and the brain, another might talk about the impact of the bombing of Hiroshima.

I’ve talked about connected learning but now I’d like to turn to my research on student blogging and tweeting, and my focus on the idea that if students are engaged in Connected Learning we require the recognition and creation of connections with people, and across concepts, contexts and time. I focused on blogging and tweeting as these are commonly used in connected learning… I asked myself whether there was something about these practices that was special here. So I looked at how we can capture connected learning through student digital annotation… Looking at hyperlinks, mentions, etc. – the things that express digital connection… Are they indicative of pedagogical connections too? I also looked at images and videos, and how students use images in their blog posts…

Because the Twitter API and WordPress allow capture of digital annotations… You can capture those connections in order to describe engagement. So, for the class I looked at there were weekly Twitter chats… And others beyond the course were open participants, very lightly auditing the course… I wanted to see how they interacted… What I saw was that open students were very well integrated with the enrolled students, and interacting… And this has instructional value too. Instructors used a similar social network analysis tool to ask students to reflect on their learning and engagement.
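
A quick aside from me rather than from the talk: the sort of capture Laura describes might, in minimal sketch form, look something like the Python below. It assumes tweets for a course hashtag have already been pulled from the Twitter API into a simple list (the account names are invented) and uses the networkx library to describe the interaction network.

    import networkx as nx

    # Hypothetical pre-collected tweets for a course hashtag: each record holds
    # the author and the accounts they mention (names are invented).
    tweets = [
        {"author": "student_a", "mentions": ["student_b"]},
        {"author": "student_b", "mentions": ["open_participant_1"]},
        {"author": "open_participant_1", "mentions": ["student_a", "student_b"]},
    ]

    # Build a weighted, directed interaction network: author -> mentioned account.
    G = nx.DiGraph()
    for t in tweets:
        for mentioned in t["mentions"]:
            if G.has_edge(t["author"], mentioned):
                G[t["author"]][mentioned]["weight"] += 1
            else:
                G.add_edge(t["author"], mentioned, weight=1)

    # Simple descriptions of engagement: who is mentioned most, and how dense
    # the network of interactions is (e.g. are open participants integrated?).
    most_mentioned = sorted(G.in_degree(weight="weight"), key=lambda x: -x[1])
    print("Most mentioned accounts:", most_mentioned[:3])
    print("Interaction network density:", nx.density(G))

The same approach extends to blog posts, where hyperlinks and @-mentions extracted from the post HTML become the edges of the network instead.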

Laura Gogia speaking about linking and interaction patterns at VCU as part of her eLearning@ed 2016 keynote

Similarly I looked at psychology students and how they shared hyperlinks… You can see also how sources are found directly, and when they access them exclusively through their Twitter timeline… That was useful for discussing student practice with them – because those are two different processes really – whether reading fully, or finding through others’ sharing. And in a course where there is controversy over legitimate sources, you could have a conversation on what sources you are using and why.

I found students using hyperlinks to point to additional resources, traditional citations, embedded definitions, to connect their own work, but also to contextualise their posts – indicating a presumption of an external audience and of shaping content to them… And we saw different styles of linking. We didn’t see too many “For more info see…” blog posts pointing to e.g. NYT, CNN. What we saw more of was text like “Smith (2010) states that verbal and nonverbal communication have an impact” – a traditional citation… But “Smith 2010” and “nonverbal” were both linked. One goes where you expect (the paper), the other is a kind of “embedded description” – linking to more information but not cluttering their style or main narrative. You couldn’t see that in a paper-based essay. You might also see “As part of this course, I have created a framework and design structure for..”… “this course” links to the course – thinking about audience perhaps (more research needed) by talking about context; “framework” pointed to a personal structure etc.

I also saw varying roles of images in blog posts: some were aesthetic, some were illustration, some as extension. Students making self-generated images and videos incorporated their discussion of that making process in their blog posts… I particularly enjoyed when students made their own images and videos.

Laura Gogia talks about the Twitter patterns and hyperlinking practices of her research participants in her eLearning@ed 2016 keynote

In terms of Twitter, students tweeted differently than they blogged. Now we know different platforms support different types of behaviours. What I noticed here was that students tweeted hyperlinks to contribute to the group, or to highlight their own work. So, a hyperlink as contribution could be as simple as a link with the hashtag. Whilst others might say “<hyperlink> just confirms what was said by the speaker last week”… which is different. Or it might be, e.g., “@student might find this on financial aid interesting <hyperlink>” – and the inclusion of a person’s name significantly increases the chances of engagement, being significantly linked to 3+ replies.

And then we’d see hyperlinks as promotion, although we didn’t see many loading tweets with hashtags to target lots of communities.

So, my conclusion on Digital Annotations is that these are nuanced areas for research and discussion. I found that students seldom mentioned peer efforts – and that’s a problem, we need to encourage that. There is a lack of targeted contribution – that can be ok and trigger serendipity, but not always. We have to help students and ourselves to navigate to ensure we get information to the right people. Also almost no images I looked at had proper attribution, and that’s a problem. We tell them to cite sources in the text; we have to do that for images too. And finally, course design and instructor behaviour matters – students perform better when the structure works for them… So we have to find that sweet spot and train and support instructors accordingly.

I want to end with a quote from a VCU undergraduate student. This was a listening tour, not a formal part of research, and I asked students how they learned, how they want to learn… And this student talked about the need for learning to be flexible, connected, portable. Does everyone need an open connected space? No, but some do, and these spaces have great affordances… We need to play more here, to stay relevant and engaged with that wider world, to creatively play with the idea of learning!

Q&A

Q1) It was fantastic to see all that student engagement there, it seems that they really enjoy that. I was wondering about information overload and how students and staff deal with that with all those blogs and tweets!

A1) A fabulous question! I would say that students either love or hate connected courses… They feel strongly. One reason for that is the ability to cope with information overload. The first time we ran these we were all learning; the second time we put in information about how to cope with that early on… Part of the reason for these courses is actually to help students cope with that, to understand how to manage it. It’s a big deal but part of the experience. You have to own that up front, why it’s important to deal with it, and then deal with it. From a Twitter perspective I’m in the process of persuading faculty to grade Twitter… That hasn’t happened yet… Previously it has been uncredited, or has been a credit for participation. I have problems with both models… With the no-credit voluntary version you get some students who are really into it… And they get frustrated with those that don’t contribute. The participation version is more structured… But also frustrating, for the same reasons it can be in class… So we are looking at social network analysis that we can do and embed in grading etc.

Comment – Simon Riley) Just to comment on overload… That’s half of what being a professional or an academic is. I’m a medic and if you search PubMed you get that immediately… Another part of that is dealing with uncertainty… And I agree that we have to embrace this, to show students a way through it… Maybe the lack of structure is where we want to be…

A2) Ironically the people with the least comfort with uncertainty and lack of structure are faculty members – those open participants. They feel that they are missing things… They feel they should know it all, that they should absorb it all. This is where we are at. But I was at a digital experience conference where there were 100s of people, loads of parallel strands… There seems to be a need to see it all, do it all… We have to make a conscious effort at ALT Lab to just help people let it go… This may be the first time in history where we have to be fine that we can’t know it all, and we know that and are comfortable…

Q3) Do you explicitly ask students not to contribute to that overload?

A3) I’m not sure we’re mature enough in practice… I think we need to explain what we are doing and why, to help them develop that meta level of learning. I’m not sure how often that’s happening just now but that’s important.

Q4) You talked a lot about talking in the open web in social media. Given that the largest social networks are engaging in commercial activities, in political activities (e.g. Mark Zuckerberg in China), is that something students need to be aware of?

A4) Absolutely, that needs to be there, alongside understanding privacy, understanding attribution and copyright. We don’t use Facebook. We use WordPress for RamPages – have had no problems with that so far. But we haven’t had problems with Twitter either… It’s a good point that should go on the list…

Q5) Could you imagine connected courses for say Informatics or Mathematics…? What do they look like?

A5) Most of the math courses we have dealt with are applied mathematics. That’s probably as far as I could get without sitting with a subject expert – so give me 15 mins with you and I could tell you.

Q6) So, what is the role of faculty here in carefully selecting things for students which we think are high quality?

A6) The role is as it has ever been, to mark those things out as high quality…

Q6) There is a lot of stuff out there… Linking randomly won’t always find high quality content.

A6) Sure, this is not about linking randomly though, it’s about enabling students to identify content, so they understand high quality content, not just the list given, and that supports them in the future. Typically academic staff do curate content, but (depending on the programme), students also go out there to find quality materials, discussing reasons for choosing, helping them model and understand quality. It’s about intentionality… We are trying to get students to make those decisions intentionally.

Digital Education & Technology Enhanced Learning Panel Session, chaired by Victoria Dishon

Victoria: I am delighted to be able to chair this panel. We have some brilliant academic minds and I am very pleased to be able to introduce some of them to you.

Prof. Sian Bayne (@sbayne), Professor of Digital Education in the School of Education, and Assistant Principal, Digital Education

I have a slight identity crisis today! I am Sian Bayne and I’m Professor of Digital Education but I am also newly Assistant Principal, Digital Education. It’s an incredibly exciting area of work to take forward so I thought I’d talk a bit about digital education at Edinburgh and where we are now… We have reputation and leadership, 2600 PG online students, 67 programmes, 2m MOOC learners, and real strategic support in the University. It’s a good time to be here.

Sian Bayne speaking about her exciting new role, at eLearning@ed 2016

We also have a growing culture of teaching innovation in Schools and a strong understanding of the challenges of academic development for and with DE. Velda McCune, Depute Director of IAD, currently on research leave, talks about complex, multilateral and ever shifting conglomerations of learning.

I want to talk a bit about where things are going… Technology trends seem to be taking us in some particular directions… We have a range of future-gazing reports and updates, but I’m not as sure that we have a strong body of students, of academics, of support with a vision for what we want digital education to look like here. Two years ago we did have Ed2020 trying to look at this. The Stanford 2025 study is also really interesting, with four big ideas emerging around undergraduate education: the open loop university – why 4 years at a set age, why not 6 years across your lifetime; paced education – 6 years of personalised learning, with approaches for the discipline we’re embedded in, putting HE in the world; axis flip; and purpose learning – coming to uni with a mission not a major… So it would be interesting to think of those ideas in this university.

UAL/LSE did a digital online hack event, Digital is not the future, to explore the idea of hacking the institution from the inside, looking at shifting to active work. There is also a great new MIT Future of Digital Education report. And if you have any ideas for processes or approaches to take things forward, please do email or tweet me…

Melissa Highton, Assistant Principal, Online Learning (@honeybhighton)

I am also having quite an identity crisis. Sian and I have inherited quite a broad range of activities from Jeff Haywood, and I have inherited many of the activities that he had as head of IS, particularly thinking about online learning in the institution, number of courses, number of learners, what success would look like, targets – and where they came from – get thrown about… Some are assumptions, some KPI, some reach targets, some pure fantasy! So I’ll be looking at that, with the other Assistant Principals and the teams in ISG.

Melissa Highton talks about her forthcoming new role, at eLearning@ed 2016

What would success look like? That Edinburgh should be THE place to work if you want to work on Digital Education, that it is innovative and fun, and our practice must be research informed, research linked, research connected. Every educator should be able to choose from a range of tools to work with, and have support and understanding of risk around that… Edinburgh would be a place that excellent practitioners come to – and stay. Our online students would give us high satisfaction ratings. And our on-campus learners would see themselves continuing studies online – preferably with us, but maybe with others.

To do that there is a set of more procedural things that must be in place around efficiency, structures, processes and platforms, to allow you to do the teaching and learning activity that we need you to do to maintain our position as a leader in this area. We have to move away from dependence on central funding, and towards sustainable activity in departments and schools. I know it’s sexy to spin stuff up locally, and it’s got us far, but when we work at scale we need common approaches, taking ideas from one part of the institution to others. But hopefully creating a better environment for doing the innovative things you need to do.

Prof. David Reay (@keelincurve); Chair in Carbon Management & Education Assistant Principal, Global Environment & Society

Last year at eLearning@ed I talked about the Sustainability and Social Responsibility course, and today I’ll talk about that, another programme and some other exciting work we are doing all around Global Change and Technology Enhanced Learning.

So with the Online MSc in Carbon Management we have met that fun criterion! We had an on-campus programme, and it went online with students across the world. We tried lots of things, tried lots of tools, and made all sorts of mistakes that we learned from. And it was great fun! One of my favourite students joined the first Google Hangout from a bunker in Syria, during the war, and when she had connectivity issues for the course we had to find a tactic to be able to post content via USB to students with those issues.

David Reay speaks about the new Online “Sustainability & Social Responsibility” MSc at eLearning@ed 2016

So that online course in Sustainability and Social Responsibility is something we’ve put through the new CAIRO process that Fiona Hale is leading on, doing that workshop was hugely useful for trying those ideas, making the mistakes early so we could address them in our design. And this will be live in the autumn, please do all take a look and take it.

And the final thing, which I’m very excited about, is an online “Disaster Risk Reduction” course, which we’ve always wanted to do. This is for post-earthquake, post-flooding, post-fire type situations. We have enormous expertise in this area and we want to look at delivery formats – maybe CPD for rescue workers, MOOCs for the community, maybe Masters for city planners etc. So this is the next year; this is what I’ll speak about next year.

Prof. Chris Sangwin (@c_sangwin), Chair in Technology Enhanced Science Education, School of Mathematics

I’m new to Edinburgh, having joined in July last year, and my interest is in automatic assessment, and specifically online assessment. Assessment is the cornerstone of education; it drives what people do, the actions they undertake. I’ve been influenced by Kluger and DeNisi (1996) who found that “one third of feedback interventions decreased performance”. This study found that specific feedback on the task was effective; feedback that could be seen as a personal attack was not. Which makes sense, but we aren’t always honest about our failures.

Chris Sangwin talks about automated approaches to assessing mathematics, at eLearning@ed 2016

So, I’ve developed an automatic assessment system for mathematics – for some but not all things – which uses the computer algebra system (CAS) Maxima. It generates random structured questions, gives feedback, accommodates multiple approaches, and provides feedback on the parts of the answer which do not address the question. This is a pragmatic tool; there are bigger ideas around adaptive learning but those are huge to scope, to build, to plan out. The idea is that we have a cold hard truth – we need time, we need things marked all the time and reliably – and that contrasts with the much bigger vision of what we want for our students and our education.

You can try it yourself here: http://stack.maths.ed.ac.uk/demo/ and I am happy to add you as a question setter if you would like. We hope it will be in Learn soon too.
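
As an illustration from me rather than from the talk (this is not STACK itself, which is built on the Maxima CAS), a CAS-backed marker of the kind Chris describes might work roughly like the sketch below: randomise the question’s parameters, then test the typed answer for algebraic equivalence so that any correct form is accepted, with targeted feedback for one common slip. The SymPy code and the feedback wording are my own assumptions.

    import random
    import sympy as sp

    x = sp.symbols("x")

    # A randomly structured question: differentiate (a*x + b)**n with random a, b, n.
    a, b, n = random.randint(2, 5), random.randint(1, 9), random.randint(2, 4)
    question = (a * x + b) ** n
    teacher_answer = sp.diff(question, x)

    def mark(student_input: str) -> str:
        """Mark a typed answer, accepting any algebraically equivalent form."""
        try:
            student_answer = sp.sympify(student_input)
        except sp.SympifyError:
            return "Your answer is not a valid expression."
        if sp.simplify(student_answer - teacher_answer) == 0:
            return "Correct."
        # Targeted feedback for one common slip: missing the chain rule factor.
        if sp.simplify(student_answer * a - teacher_answer) == 0:
            return "Close - check the chain rule: a factor seems to be missing."
        return "Incorrect - this does not match the derivative of the question."

    print("Differentiate:", question)
    # An equivalent (but differently written) correct answer is accepted:
    print(mark(f"{n * a}*({a}*x + {b})**{n - 1}"))

The key design point is the equivalence check: because the comparison is done by a computer algebra system rather than by string matching, students are not penalised for writing a correct answer in an unexpected form.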

Prof. Judy Hardy (@judyhardy), Professor of Physics Education, School of Physics and Astronomy.

I want to follow up my talk last year about what we need to focus on: “awareness” knowledge, “how to” knowledge, and “principles” knowledge. Fewer than a quarter of people use approaches in their teaching without modifying them – sometimes that is fine, sometimes it is not. So I want to talk about a few things we’ve done, one that worked, one that did not.

Judy Hardy talks about implementing changes in teaching approaches, at eLearning@ed 2016

We have used PeerWise testing, and use of that correlates with exam performance, even when controlling for other factors. We understand from our evidence how to make it work. We have to move from formative (recommended) to summative (which drives behaviour). We have to drive students’ ownership of this work.

We have also used ACJ – Adaptive Comparative Judgement – to get students to understand what quality looks like, to understand it in comparison to others. They are not bad at doing that… It looks quite good at face value. But when we dug in we found students making judgements on surface features… neatness, length, presence of a diagram… We are not at all confident about their physics knowledge, and how they evidence that decision… For us the evidence wasn’t enough; it wasn’t aligned with what we were trying to do. There were also very high administrative overheads… A detail that is easily overlooked. For a pilot it’s fine; for something that has to work every day, that’s an issue.
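
For readers unfamiliar with ACJ, the underlying idea is that many pairwise “which is better?” judgements are fitted with a statistical model (commonly Bradley-Terry or Rasch) to place every script on a single quality scale. The sketch below is a minimal, hypothetical illustration of that fitting step in Python – the script names and judgements are invented, and this is not the tool Judy’s team used.

    from collections import defaultdict

    # Invented pairwise judgements: (winner, loser) for pairs of student scripts.
    judgements = [
        ("script_a", "script_b"),
        ("script_a", "script_c"),
        ("script_b", "script_c"),
        ("script_c", "script_a"),
    ]

    scripts = {s for pair in judgements for s in pair}
    wins = defaultdict(int)
    comparisons = defaultdict(int)  # total comparisons between each pair (symmetric)
    for winner, loser in judgements:
        wins[winner] += 1
        comparisons[(winner, loser)] += 1
        comparisons[(loser, winner)] += 1

    # Iterative (minorisation-maximisation) fit of Bradley-Terry strengths:
    # the probability that i beats j is strength[i] / (strength[i] + strength[j]).
    strength = {s: 1.0 for s in scripts}
    for _ in range(100):
        new = {}
        for i in scripts:
            denom = sum(
                comparisons[(i, j)] / (strength[i] + strength[j])
                for j in scripts
                if j != i
            )
            new[i] = wins[i] / denom if denom else strength[i]
        total = sum(new.values())
        strength = {s: v / total for s, v in new.items()}  # normalise each round

    # Higher strength = judged better overall; this gives a rank order of scripts.
    for s, v in sorted(strength.items(), key=lambda kv: -kv[1]):
        print(f"{s}: {v:.3f}")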

Implementing change, we have to align the change with the principles – which may also mean challenging underlying beliefs about teaching. It needs to be compatible with the local, often complex, classroom context, and it takes time, and time to embed.

Victoria: A lot of what we do here does involve taking risk so it’s great to hear that comparison of risks that have worked, and those that are less successful.

Dr Michael Seery, Reader, Chemistry Education. (@seerymk)

Like Chris I joined last July… My background has been in chemistry education. One of the first projects I worked on was taking one third of chemistry undergraduate lab reports (about 1,200 reports) and managing and correcting those across about 35 postgraduate demonstrators. Why? Because these reports, often inconsistent in format, can be hard to assess online, and I wanted to seek clarity and consistency of feedback. And the other reason to move online was to reduce the administrative burden.

Michael Seery speaks about moving to online learning (image also shows the previous offline administrative tools), at eLearning@ed 2016

So Turnitin (Grademark) was what I started looking at. But it requires a Start Date, Due Date, and End Date, and our students don’t have those. Instead we needed to retrofit it a bit. So, students submitted to an experimental dropbox, demonstrators filtered submissions and corrected their lab reports, and marks and feedback were returned immediately to students… But we had problems… No deadline was possible, so we couldn’t track turnaround time or impose penalties; “live” correction was visible to the student; and there was a risk of simultaneous marking. And the section rubrics (bands of 20%) were too broad – that generated a great deal of feedback, as you can imagine. BUT demonstrators were being very diligent about feedback – which also confused students, as minor points were mixed with major points.

So going forward we are using groups: students will submit by week so that due dates and turnaround times are clearer, we will use Turnitin assessment by groups with a post date, and grading forms allow direct mark entry. But our challenge has been retrofitting technologies to the assessment and feedback issue, and that bigger issue needs discussion.

The format for this session was that each of our panel gave a 3-5 minute introductory presentation; we then turned to discussion, both amongst the panel and with questions and comments from the audience.

Panel discussion/Q&A

Q1) Thank you for a really interesting range of really diverse presentations. My question is for Melissa, and it’s about continuity of connection… UG, online, maybe pre-arrival, returning as a lifelong learner… Can we keep our matriculation number email forever? We use it at the start but then it all gets complex on graduation… Why can’t we keep that as a consistent point of contact?

A1, Melissa) That sounds like a good idea.

Q2) We’ve had that discussion at Informatics, as students lose a lot of materials etc. by loss of that address. We think an @ed.ac.uk alias is probably the way, especially for those who carry on beyond undergraduate. It was always designed as a mapping tool. But also let them have their own space that they can move work into and out of. Think that should be University policy.

A2, Melissa) Sounds like a good idea too!

Q3) I was really pleased to hear assessment and feedback raised in a lot of these presentations. In my role as Vice Principal Assessment and Feedback I’m keen to understand how we can continue those conversations, how do we join these conversations up? What is the space here? We have teaching networks but what could we be missing?

A3, Michael) We all have agreed LOs, but if you ask 10 different lab demonstrators they will have 10 different ideas of what that looks like. I think assessment on a grade, feedback, but also feed-forward is crucial here. Those structures seem like a sensible place.

A3, Judy) I think part of the problem is that teaching staff are so busy that it is really difficult to do the work needed. I think we should be moving more towards formative assessment; that is very much an ideal, far from where we are in practice, but it’s what I would like to see.

Q4) A lot of you talked about time, time being an issue… One of the issues that students raise all of the time is about timeliness of feedback… Do you think digital tools offer a way to do this?

A4, Judy) For me, the answer is probably no. Almost all student work is handwritten for us… What we’d like to do is sit with a student to talk to them, to understand what is going on in their heads, how their ideas are formed. But time with 300 students is against us. So digital tools don’t help me… Except maybe Chris’ online assessment for mathematics.

A4, Chris) The idea of implementing the system I showed is to free up staff time for that sort of richer feedback, by tackling the limited range of work we can mark automatically. That is a limited range though and it diminishes as the subject progresses.

A4, David) We implemented online submission as default and it really helped with timings, NSS, etc. that really helped us. For some assessment that is hard, but it has helped for some.

A4, Michael) Students do really value that direct feedback from academic staff… You can automate some chemistry marking, but we need that human interaction in there too, that’s important.

A4, Sian) I want to raise a humanities orientated way of raising the time issue… For me time isn’t just about the timeline for feedback, but also exploring different kinds of temporality that you can do online. For our MSc in Digital Education we have students blog and their tutors engage in a long form engaged rich way throughout the course, feedback and assessment is much richer than just grading.

Q5) In terms of incorporation of international students here, they are here for one year only and that’s very short. Sometimes Chinese students meet a real clash of expectations around language proficiency, a communication gap between what assessment and feedback is, and what we practice. In terms of technology, is there a formative model for feedback for students less familiar with this different academic culture, rather than leaving them confused for one semester before they start to understand?

A5, David) It’s such an important point. For all of our students there is a real challenge of understanding what feedback actually is, what it is for. A lot of good feedback isn’t badged properly and doesn’t show up in NSS. I love the idea of less assessment, and of the timing being thought through. So we don’t focus on summative assessment early on, before they know how to play the game.. I agree really.

A5, Judy) One thing we don’t make much use of is exemplars. They can be very valuable. When I think about how we get expertise as markers, it is because of trying to do it. Students don’t get that opportunity; you only see your own work. Exemplars can help there…

The panel listening to questions from the floor at eLearning@ed 2016

Q6) Maybe for the panel, maybe for Fiona… One thing to build in dialogue, and the importance of formative assessment… Are you seeing that in the course design workshops, use of CAIReO (blog post on this coming soon btw), whether you see a difference in the ways people assess….

A6, Fiona) We have queues of people wanting the workshop right now; they have challenges and issues to address, and for some of them it’s assessment, for others it’s delivery or pace. But assessment is always part of that. It comes naturally out of storyboarding of learner activities. But we are not looking at development of content, we are talking about learning activity – that’s where it is different. Plenty to think about though…

Comment, Ross) Metaphor of a blank piece of paper is good. With learning technologies you can start out with that sense of not knowing what you want to achieve… I think exemplars help here too, sharing of ideas and examples. Days like today can be really helpful for seeing what others are doing, but then we go back to desks and have blank sheets of paper.

Q7) As more policies and initiatives appear in the institution, does it matter if we believe that learning is what the student does – rather than the teacher? I think my belief is that learning occurs in the mind of the learner… So terminology such as distance and digital learning can be a bit strange… Distance and digital teaching maybe makes more sense…

A7) I think that replacing the terminology of “teaching” with the terminology of “learning” has been taking place. Biesta talks about the problems of the “learnification of education”: when we do that we instrumentalise education, and that ignores power structures and issues in many ways. My colleagues and I wrote a Manifesto for Teaching Online and we had some flak about that terminology, but we thought that it was important.

Q8) Aspirationally there would be one to one dialogue with students… I agree that that is a good aspiration… And there is that possibility of continuity… But my question was to what extent past, present, and future physical spaces… And to what extent does that enable or challenge good learning or good teaching?

A8, Judy) We use technology in classrooms. First year classes are flipped – and the spaces aren’t very conducive to that. There are issues with that physical space. For group working there are great frustrations that can limit what we can do… In any case this is somewhat inevitable. In terms of online education, I probably have to hand to colleagues…

A8, David) For our institution we have big plans and real estate pressures already. When we are designing teaching spaces, as we are at KB right now, there is a danger of locking ourselves into an estate that is not future proof. And in terms of impinging on innovation, in terms of changing demands of students, that’s a real risk for us… So I suppose my solution to that is that when we do large estate planning, that we as educators and experts in technology do that work, do that horizon scanning, like Sian talked about, and that that feeds into physical space as well as pedagogy.

A8, Sian) For me I want leakier spaces – bringing co-presences into being between on campus and online students. Whole area of digital pedagogical exploration we could be playing with.

A8, Melissa) There is a very good classroom design service within the Learning and Teaching Spaces team in IS. But there is a lag between the spaces we have today, and getting kit in place for current/future needs. It’s an ongoing discussion. Particularly for new-build spaces there is really interesting possibility around being thoughtful. I think we also have to think about shifting time and space… Lecture Capture allows changes; maybe we need fewer big lecture rooms… Does the teaching define the space, or does the space define the teaching? Please do engage with the teams that are there to help.

A8, Michael) One thing that is a danger, is that we chase the next best thing… But those needs change. We need to think about the teaching experience, what is good enough, what is future-proof enough… And where the need is for flexibility.

Victoria: Thanks to all our panel!

eMarking Roll Out at Abertay – Carol Maxwell, Technology Enhanced Learning Support team Leader, Abertay University, chaired by Michael Seery

I am Carol Maxwell from Abertay University and I am based in the Technology Enhanced Learning support team. So, a wee bit about Abertay… We are a very small city centre university, with 4025 students (on campus) and 2091 in partner institutions. We are up 9 places to 86 in the Complete University Guide (2017). And our NSS score for feedback turnaround went up by 12%, which we think has a lot to do with our eMarking roll out.

We have had lots of change – a new Principal and Vice-Chancellor in summer 2012. We have many new appointments, a new Director of Teaching and Learning Enhancement, and we’ve moved towards central services rather than local admin. We get involved in the PGCert programme, and all new members of staff have to go through that process. We have monthly seminars where we get around 70 people coming along. We have lots of online resources, support for HEA accreditation and lots of things taking place – to give you a flavour of what our team does.

Carol Maxwell talks about the work of the Abertay Teaching and Learning Enhancement Team, at eLearning@ed 2016

So the ATLEF project was looking at supporting assessment and feedback practice with technology. This was when our team was part of Information Services, and it was intended to improve the University’s understanding and awareness of the potential benefits, challenges and barriers associated with a more systematic and strategic approach to technology-enhanced assessment and feedback. We wanted to accelerate staff awareness of technological tools for assessment.

So we did a baseline report on practice – we didn’t have tools there, and instead had to interrogate Blackboard data course by course… We found only 50% of those courses using online assessment were using Grademark to do this. We saw some using audio files, some used feedback in Grade Centre, some did tracked changes in Word, and we also saw lots of use of feedback in comments on eportfolios.

We only had 2% online exams. Feedback on that was mixed, and some was to do with how the actual user experience worked – difficulties in scrolling through documents in Blackboard for instance. Some students were concerned that taking exams at home would be distracting. There was also a perception that online exams were for benefit of teaching staff, rather than students.

So we had an idea of what was needed, and we wanted to also review sector practices. We found Ferrell (2013), and in the Heads of eLearning Forum Electronic Management of Assessment Survey Report (2013) we saw that the most common practice was e-submission alongside a hard copy printed by the student… But we wanted to move away from paper. So, we were involved in the Jisc Electronic Marking and Assessment project and cycle… And we were part of a think tank where we discussed issues such as retention and archiving of coursework, and in particular the importance of a University-wide approach.

So we adopted a new Abertay Assessment Strategy. For instance, we now have week 7 as a feedback week. It isn’t for teaching, it is not a reading week; it is specifically for assessment and feedback. The biggest change for our staff was the need to return coursework and feedback within 10 working days before week 13, and within 15 weeks thereafter. That was a big change. We had been trialling things for years, so we were ready to just go for it. But we had some challenges: we have a literal grading policy – A+, A, B+ etc. – which is harder in these tools.

We had senior management, registry, secretariat, teaching staff, and teaching and learning staff discussing and agreeing the policy document. We had EMA champions demonstrating the current process, and we generated loads of supporting materials too. So one of our champions delivered video feedback – albeit with some student feedback to him that he was a little dry; he took it on the chin. One academic uses feedback on PebblePad; we have a lecturer who uses questions a great deal in mathematics courses, letting students attempt questions and only move on after completion. We also have students based in France who were sharing reflections and video content, and feedback on it, alongside their expected work. And we have Turnitin/Grademark, of which the personalised feedback is most valuable. Another champion has been using discussion forums, where students can develop their ideas, see each other’s work etc. We also hold lots of roadshow events, and feedback from these has raised the issue of needing two screens to actually manage marking in these spaces.

Carol Maxwell talks about the support for staff in rolling out eMarking at Abertay, at eLearning@ed 2016

The areas we had difficulty with here were around integration, with workarounds required for Turnitin with Blackboard Grade Centre and literal grading; staff resistance – with roadshows helping; moderation – we used 3 columns not 2 for marking; anonymity; and returning feedback to students, which raised some complexities. There has been some challenging work here but overall the response has been positive. Our new templates also include all the help and support information.

So, where to now… Carry on refining procedures and support; ongoing training is needed – especially for new staff; Blackboard SITS integration; more online exams (some online and some off site); digital literacy etc. And, in conclusion, you need senior management support and a partnership approach with academic staff, students and support services to make a step change in practice.

Q&A

Q1) I’m looking at your array of initiatives, but seeing that we do these things in pockets. The striking thing is how you got the staff on board… I wonder if we have staff on board, but not sure we have students on board… So what did you do to get the students on board?

A1) There was a separate project on feedback with the students, raising student awareness on what feedback was. The student association were an important part of that. Feedback week is intended to make feedback to students very visible and help them understand their importance… And the students all seem to be able to find their feedback online.

Q2, Michael) You made this look quite seamless across spaces, how do you roll this out effectively?

A2) We’ve been working with staff a long time, so individual staff do lots of good things… The same with assessment and feedback… It was just that we had those people there who had great things there… So like the thinking module there is a model with self-enroll wikis… You end up with examples all around. With the roll out of EMA the Principal was keen that we just do this stuff, we have already tested it. But Abertay is a small place, we have monthly meet ups with good attendance as that’s pretty much needed for PGCAP. But it’s easier to spread an idea, because we are quite small.

Q3) For that 10-15 day turnaround how do you measure it, and how do you handle exemptions?

A3) You can have exemptions but you have to start that process early, teams all know that they have to pitch in. But some academic staff have scaled assessment back to the appropriate required level.

At this point we broke for an extended break and poster session, some images of which are included below. 

Amy Burge and Laine Ruus show their posters during the eLearning@ed 2016 Poster Session

Participants explore posters including Simon Fokt's Diversity Reading List poster at eLearning@ed 2016

Ross Ward provides an informal LTW drop in session as part of the eLearning@ed 2016 Poster Session

Taking this forward – Nicola Osborne

Again, I was up and chairing so notes are more minimal from these sessions… 

The best of ILW 2016 – Silje Graffer (@SiljeGrr), ILW/IAD

ILW is in its fifth year… We had over 263 events through the week, and we reached over 2 million people via social media…

How did we get to this year? It has been amazing in the last few years… We wanted to see how we could reach the students and the staff in a better way that was more empowering for them. We went back to basics, we hired a service design company in Glasgow to engage people who had been involved in ILW before… In an event we called Open ILW… We wanted to put people first. We had 2 full time staff, 3 student staff, 20 school coordinators – to handle local arrangements – and created a kind of cool club of a network!

Silje Graffer talks about the Innovative Learning Week team, at eLearning@ed 2016

So we went back to the start… We wanted to provide clarity on the concept… We wanted to highlight innovation already taking place, that innovation doesn’t just happen once a year. And to retain that space to experiment.

We wanted to create a structure to support ideas. We turned feedback into a handbook for organisers. We had meet ups every month for organisers, around ideas, development, event design, sharing ideas, developing process… We also told more stories through social media and the website. We curated the programme around ideas in play. We wanted to focus on people making the events, who go through a valuable process, and have scope to apply that.

Silje Graffer talks about some of the highlight events from ILW16, at eLearning@ed 2016

So I just wanted to flag some work on openness: there was a Wikipedia Editathon on the history of medicine. We had collaboration – looking at meaningful connections between different parts of the university, particularly looking at learners with autism, which was really valuable. Creativity… This wasn’t digital education in itself, but the Board Game Jam was about creating games, all openly licensed, and you can access and use those games in teaching, available as OER. A great example of getting hands dirty and how that translates into the digital. And the iGEM Sandpit and Bio Hackathon are taking ideas forward to a worldwide event. The Smart Data Hack continued again, with more real challenges to meet. Prof Ewan Klein has taken work forward in the new Data, Design and Society course… And in the celebratory mode, we had an online game called Edinburgh is Everywhere, exploring Edinburgh beyond the physical campus! And this was from a student. You can browse all the digital education events that ran on the website, and I can put you in touch with organisers.

Next year it’s happening again, redeveloped and reimagined.

Q1) Is it running again?

A1) Yes! But we will be using some of the redesigning approaches again.

 

CMALT – what’s coming up – Susan Greig (@SusieGreig)

Are you certified… I am based in LTW and I’m really pleased to announce new support for achieving CMALT within the University. And I can say that I am certified!

CMALT is the Certified Member of ALT scheme; it’s recommended for documenting and reflecting on your work, a way to keep pace with technology. It is certified by peers, and you update certification every three years. So, why did I do CMALT? Back when I put my portfolio forward in 2008 I actually wrote down my reasons – I hoped to plan for my future career more effectively; the career path isn’t well defined and I was keen to see where this would take me. And looking back I don’t think that career path has become more clear… So it was still very useful to do.

Susan Greig talking about support for CMALT, at eLearning@ed 2016

So, to do CMALT you need to submit a portfolio. That is around five areas, operational issues; teaching, learning and/or assessment processes; the wider context; communication; and a specialist area. I did this as an individual submission, but there is also an option to do this together. And that is what we will be doing in Information Services. We will provide ongoing support and general cheer-leading, events which will be open to all, and regular short productive cohort meetings. There will also be regular writing retreats with IAD. So, my challenge to you is can we make the University of Edinburgh the organisation with the most accredited CMALT members in the UK?

If you are interested, get in touch. The likely cohort start is August 2016… There will be more presentations from ALT on 3rd June, and a showcase event there in July.

Making Connections all year long: eLearning@ed Monthly meet ups – Ross Ward (@RossWoss), Educational Design

Today has been a lovely chance to get to meet and network with peers… Over the last year in LTW (Learning, Teaching and Web Services) we’ve looked at how we can raise awareness of how we can help people in different schools and colleges achieve what they are trying to do, and how we can support that… And as we’ve gone around we’ve tried to work with them to provide what is needed for their work; we’ve been running roadshows and workshops. Rather than focus on the technologies, we wanted to come from more of a learning and teaching perspective… Around themes of interactive learning and teaching, assessment and feedback, open educational resources, shakers, makers and co-creators, and exploring spaces… From those conversations we’ve realised there is loads of amazing stuff going on… And we wanted to share these more widely…

Ross Ward talks about recent elearning@ed/LTW Monthly MeetUps, at eLearning@ed 2016

Luckily we have a great community already… And we have been working collaboratively between elearning@ed and learning, teaching and web services, and having once a month meetings on one of the themes, sharing experiences and good practices… A way to strengthen networks, a group to share with in physical and digital shared spaces… The aim is that they are open to anyone – academics, learning technologists, support teams… Multiple short presentations, including what is available right now, but not ignoring horizon scanning. It’s a space for discussion – long coffee break, and the pub afterwards. We have a 100% record of going to the pub… And try to encourage discussion afterwards…

So far we’ve looked at Using media in teaching (January); Open Education – including our Wikimedian in residence (February); Things we have/do – well received catch up (March); Learning Design – excellent session from Fiona (April). We put as much as we can on the wiki – notes and materials – and you’ll find upcoming events there too. Which includes: Assessment and Feedback – which will be lively if the sessions here are anything to go by (27th June); CMALT (27th July); Maker Space (August) – do share your ideas and thoughts here.

In the future we are trying to listen to community needs, to use online spaces for some, to stream, to move things around, to raise awareness of the event. All ideas and needs welcomed… Interesting to use new channels… These tend to be on themes so case by case possibilities…

The final part of our day was our wrap up by Prof. Charlie Jeffrey, who came to us fresh from Glasgow where he’d been commenting on the Scottish Parliamentary election results for the BBC… 

Wrap Up – Professor Charlie Jeffrey, Senior Vice Principal.

I’m conscious of being a bit of an imposter here as I’m wrapping up a conference that I have not been able to attend most of. And also of being a bit of an obstacle between you and the end of the day… But I want to join together a few things that colleagues and I have been working on… The unambiguous priority of teaching and learning at Edinburgh, and the work that you do. So, what is the unambiguous priority about? It’s about sharpening the focus of teaching and learning in this university. My hope is that we reach a point in the future that we prize our excellent reputation for learning and teaching as highly as we do our excellent reputation in research. And I’ve been working with a platoon of assistant principals looking at how best to structure these things. One thing to come out of this is the Teaching Matters website which Amy (Burge) so wonderfully edits. And I hope that that is part of that collegiate approach. And Ross, I think if we had blogs and shorter contributions for the website coming out of those meetings, that would be great…

Charlie Jeffrey gives the wrap up at eLearning@ed 2016

I’m also conscious of talking of what we do now… And that what we do in the future will be different. And what we have to do is make sure we are fit for the future… Traditional teaching and learning is being transformed by digital teaching and learning… And I wouldn’t want us to be left behind. That’s a competitive advantage thing… But it is also a pedagogical issue, to do the best we can with the available tools and technologies. I’m confident that we can do that… We have such a strong track record of DEIs, MOOCs, and what Lesley Yellowlees calls the “TESEy chairs”, the Centre for Research in Digital Education, an ISG gripped by organisational priorities, and a strong community that helps us to be at the forefront of digital education. Over the last few weeks we’ve had three of the world’s best conferences in digital education, and that’s a brilliant place to be! And an awful lot of that is due to the animation and leadership of Jeff Haywood, who has now retired, and so we’ve asked Sian and Melissa to help ensure that we stay in that absolutely powerful leading position – no pressure whatsoever – but I am very confident that they will be well supported. It’s pretty rare within an organisation to get 90 people to make time to come together and share experience like you have today.

And with that the day was finished! A huge thank you again to all who were part of the event. If you were there – whether presenting, participating in the poster session or just listening – I would ask that you complete our feedback survey if you haven’t already. If you weren’t there but are interested in next year’s event or the eLearning@ed community in general, you’ll find lots of useful links below. Video of the event will also be online soon (via MediaHopper – I’ll add the link once it is all live) so anyone reading this should be able to re-watch sessions soon.

Related Resources

More about eLearning@ed

If you are interested in learning more about the eLearning@ed Forum the best place to start is our wiki: http://elearningforum.ed.ac.uk/.

If you are based at Edinburgh University – whether staff or student – you can also sign up to the Forum’s mailing list where we share updates, news, events, etc.

You can also join us for our monthly meet ups, co-organised with the Learning, Teaching and Web Services team at Edinburgh University. More information on these and other forthcoming events can be found on our Events page. We are also happy to add others’ events to our calendar, and I send out a regular newsletter to the community in which we are happy to publicise relevant events, reports, etc. If you have something you’d like to share with the eLearning@ed community do just get in touch.

You can also read about some of our previous and more recent eLearning@ed events here on my blog:

 

Feb 102016
 
Wikipedia Editathon Poster for ILW 2016

For the last few years the University of Edinburgh have run an “Innovative Learning Week” in which no traditional lectures or tutorials take place, instead students (and staff) are encouraged to experiment, to engage in new ways, to participate in events and teaching activities beyond their usual discipline or subject areas. It is a really lovely concept and I am always amazed at the range of events and collaborations that take place in that very busy week.

This year Innovative Learning Week runs from Monday 15th to Friday 19th February and I am involved in a few events that I thought I would share here for those based at Edinburgh (do sign up!) and for the interest of others who may be curious about what an ILW event looks like…

History of Medicine Wikipedia Editathon

This event, a follow-up to last year’s very successful editathon, is something I have been involved in planning (and will be baking for), although I’ll only be able to be there on the Thursday. However, a fantastic group of information services colleagues, academics and our Wikimedian in Residence are making this event happen and it should be both fun and really interesting. Great for those wanting to brush up their Wikipedia skills too.

Join the Innovative Learning Week History of Medicine Wikipedia Editathon (open to students, staff, and all others who are interested), where you will have an opportunity to edit Wikipedia and meet our new Wikimedian in Residence, Ewan McAndrew. Join us in re-writing the Wikipedia pages of Edinburgh’s infamous medical figures, including body-snatcher William Burke and the intriguing Dr. James Miranda Barry, or choose to enhance and create content for notable University of Edinburgh alumni (see the list under the “How do I prepare” section: http://bit.ly/ILWEditathonEventPage).

Wikipedia training provides staff with valuable digital skills to support CPD, as well as hands-on experience of using an open access educational repository. No experience is necessary, as each session will offer Wikipedia editing and publishing training and the opportunity to observe online collaboration, public engagement, knowledge exchange, and scholarly communication in action.

Join in for one session, a full day, or all three days (sessions run in David Hume Tower, Teaching Studio LG.07):

  • TUESDAY 16: Session 1, 2pm-5pm
  • WEDNESDAY 17: Session 2, 10am-1pm; Session 3, 2pm-5pm
  • THURSDAY 18: Session 4, 10am-1pm; Session 5, 2pm-5pm

Sign up: http://bit.ly/ILWEditathon2016 and/or follow us and share on Twitter: #ILWEditathon @LTW_UOE. If you are attending, please bring your own laptop or tablet if you are able.

Creating an Effective Online Presence (Engineering)

I will be leading a section of this workshop on managing your digital footprint, developing an effective online presence, and managing social media settings and options, as part of a wider session that looks at what it means to present yourself as a professional engineer and to evidence your skills and experience.

This workshop on Tuesday 16th February (2-5pm), jointly hosted by the School of Engineering, the Careers Service and EDINA, will focus on Digital Footprint Awareness and creating an effective online presence to support summer internship and placement applications.

The session will include:

  • advice on using LinkedIn effectively;
  • an introduction to PebblePad for online portfolios;
  • guidance on managing your digital footprint.

Before attending, make sure you’ve registered for an account on LinkedIn. This is a BYOD session (bring your own device e.g. laptop or tablet).

Sign up (students in the School of Engineering only): http://www.innovativelearning.ed.ac.uk/creating-effective-online-presence-engineering

Communicating science to non-academic audiences – who, what, why and how.

I have been involved in the planning of this session, to which I am contributing some social media, copyright/licensing and science communication expertise and resources.

This science communication workshop explores how critical it is to identify your target audience and to tailor your Open Educational Resource accordingly. The group will identify audiences and explore their specific needs before creating an interactive, web-based Open Educational Resource.

Sign up:

Other events worth noting include… 

The ILW newspaper (below) includes some highlights, or you can search the programme in full here: http://www.innovativelearning.ed.ac.uk/ilw-calendar

And I’ll be sharing some of the resources from the sessions I’m involved with here on my blog (likely on the Publications and Presentations page).

Jan 062016
 

Today I am delighted to be hosting – wearing my eLearning@ed Convener hat – a talk from Martin Hawksey of ALT.

Note: this is a live blog so apologies for any typos, errors etc – corrections always welcome.

I am one of about four members of staff at ALT – the Association for Learning Technology. How many of you are ALT members? (a good chunk of the room are) And how many of you have heard of our conference? (pretty much all). I’m going to talk today about what else ALT does, where there are opportunities to take part, etc.

A key part of what we want to do is improve practice, promote research and influence policy around learning technology. We support learning technologists of course, but our members cover a wide range of roles, reflecting the range of learning technology use. ALT itself was established in 1993 – before the web had really taken off, which is an interesting marker. ALT has 1700+ individual and 180 organisational members at present. ALT works across sectors including Further Education, Higher Education and research, and ALT is also an international community. And, as you are all part of the University of Edinburgh, you can join ALT for free as an associate member. To become a voting member or get involved in governance etc. you do, however, need to apply for full membership.

Before I worked at ALT I didn’t really appreciate that ALT is truly a membership organisation – and governed by its members. And that genuinely drives the organisation.

In terms of the benefits of membership there are three areas that are particularly relevant: keeping pace with technology; developing skills; and recognition for your work. We also have the ALT-MEMBERS list (a Jiscmail list), and that is a really rich resource in terms of people posing questions and receiving feedback on what they are doing. You obviously have eLearning@ed giving you a great insight into your local community; the ALT-MEMBERS list does some of the same things on a wider, global scale. For instance: a discussion on VLE reviews (24 replies); tracking Twitter hashtags (14 replies); a post on appropriate use of social media and advice on inappropriate behaviour (15 replies, which became a blog post drawing resources together); and a review of web conferencing tools (23 replies). So you can see there is huge interaction here, content to draw upon, trends to pick up, information being shared. If you aren’t yet a member of that list then you can sign up – it is a closed list, so you do need to be an ALT member to join.

Do you have any feedback on the mailing list?

Comment: It is just too busy for me, too many emails.

I think it is useful to have that health warning that there is a lot of traffic. You can manage that with filters, by subscribing to the digest, etc. But you need to be aware of the volume. In terms of posting we’d recommend a good subject line – to catch those eyes – and as with any list it’s good to do a bit of research first and share that in your post; that makes it more likely that you will get replies and engagement. Despite all the other technologies we have available, email is still surprisingly important.

ALT also has Member Groups and SIGs (Special Interest Groups) on areas such as games and learning, open education, MOOCs, and FELTAG. The SIGs tend to change as different trends go in and out of popularity – the open education group is especially busy at the moment, for instance. There is also a specific ALT-Scotland group. So, for instance, ALT-Scotland recently held a policy board with funders and policy makers to understand what they are thinking and doing at the moment, which was hugely valuable.

In addition to email we are also using Twitter. For our conferences and events we’ve moved away from specific hashtags for each towards a single hashtag – #altc – and that’s a great way to share your message with the community. We monitor and retweet that hashtag – and we have around 7000 followers. That hashtag can be used for projects, events, blog posts, etc. It’s pretty all-encompassing.

As I mentioned, ALT is your organisation as a member. Our governance model is that we have a board of trustees, including ALT members in Scotland – currently we have a member from Glasgow Caledonian and another from Heriot-Watt. Our current vice-chair is Martin Weller, OU; our chair is ?; and our current president is ?. We also have operational committees – serving on these is a rewarding thing to do, enabling you to engage with the community, and good for your CV of course. And we have editors for the ALT journals as well.

I also mentioned recognition… How many of you have heard of CMALT – Certified Membership of ALT? (pretty much all in the room have) What do you want to know about it? It is a portfolio-based accreditation – you submit electronically and you can do that in whatever electronic format you like. The portfolio is assessed by peers, and you can nominate one of your assessors. And they will give you feedback. There is a cost – about £150 – but if a group of you want to submit there is a reduced group rate.

Because there are a range of roles within ALT, the skills assessed cover a range of core areas (operational issues; teaching, learning and assessment; the wider context; communication) and specialist areas (such as leadership, technical development, administration, research, policy). The key thing is to certify your commitment to learning technology. It can feel like simply saying what you do, but it is also about successes, reflection on success and failure, and working with feedback and support – about being a better learning technologist and making sure you have that professional journey. It isn’t just about the achievement of the certificate.

Question: How long does this take?

Once you are registered you have up to a year to complete and submit your portfolio. Obviously it doesn’t take that long to do – maybe a few hours per area is sufficient, perhaps 20 to 24 hours for the full portfolio. There are examples of submitted portfolios and guidance on the ALT website. We also try to run regular CMALT webinars where you can talk to other candidates about the process and the detail.

Question: What are the benefits of doing CMALT?

Interestingly, CMALT has been running for around 10 years now. We have just passed our 300th CMALT certified member. And we have increasingly seen CMALT sought as a desirable qualification for roles, which is obviously helpful for job prospects. The main benefit, though, is the process itself – the reflection, the capture of that experience, the opportunity to develop your practice.

Additionally CMALT maps to UKPSF and HEA Fellowship. We have mapped the requirements of UKPSF onto CMALT so that if you do either of those you may be able to reuse that work in applying to the other – there is more about this on the website.

We also have the annual Learning Technologist of the Year Awards (#LTAwards), to recognise excellence in the sector. The awards are open internationally, but most applicants are UK based. You can nominate someone else, or yourself. We normally announce these in April, so watch this space. Again, this is a great way to boost your CV, but there is also a cash prize. Last year the winner was working on using Minecraft in teaching.

We have run ALT publications for years – we used to have the ALT Newsletter, which we have now rebranded as the #ALTC Blog. Anyone can contribute to this, and we have editors who are all ALT members. We have around 225 posts and counting, and we look for posts of around 500 words each. Again, a great way to get information out.

We also have Research in Learning Technology (formerly known as ALT-J), which is a great way to get full research publications out there. It is a peer-reviewed open access journal with rolling submissions – although we have the capacity to do special issues. That publishing schedule fits with the roles and schedules of ALT members. There are no submission fees, unlike some other open access journals, so there is little overhead to submitting. And the process can be very useful preparation for submitting elsewhere. We have a bit of a boom at the moment, so we currently have a call out for new editors – if you are interested do take a look. Full details of submission processes can be found on the journal website.

As I mentioned we also have the annual conference, which is a really interesting conference but can melt your brain slightly – 3 very busy days! How many here have gone to the ALT conference? And how do you find it?

Comment: I find every second year works well. I like that you get a broad overview of what is happening in the sector, and a chance to take the temperature of the sector in a fairly unique way.

Even if you can’t make it in person we do livestream a lot of the keynotes and plenary sessions – though we haven’t announced our keynote speakers for this year yet. Last year we had Laura Czerniewicz from the University of Cape Town, South Africa, on the ethics of education, open access, open education, etc. We also had Jonathan Worth from Coventry University, who has experimented with opening up courses to wider audiences and the challenges of informed and implied consent around the use of social media in these. We also had Steve Wheeler. In the plenaries we had Rebecca ? from Oxford University on scaling learning analytics there. The videos of sessions are all available online on the ALT YouTube channel. It’s worth looking back to 2014 as we had some great speakers then, including Audrey Watters, Catherine Cronin and Jeff Haywood.

In terms of other events note that OER16 is in Edinburgh next April – here at University of Edinburgh and co-chaired by Lorna Campbell and Melissa Highton.

Lorna: This year we are focusing on open cultures and making connections to galleries and museums. Submissions are closed at the moment – we are reviewing those right now. In terms of speakers we have Catherine Cronin, NUI Galway; Melissa Highton, University of Edinburgh; John Scally, NLS; Emma Smith, Oxford University, on her open Shakespeare work; and Jim Groom of DS106 – a MOOC or perhaps a cult – who is at the forefront of open higher education. The conference is on 19th and 20th April and registration will open shortly. And it would be great to see a good cross-section of Edinburgh folk there.

Martin: ALT’s work with OER is a more recent thing, in terms of supporting its running. And that is in recognition of the importance of openness. And it’s worth noting that the call for OER17 chairs is now open.

The other thing to be aware of is the ALT Online Winter Conference 2015 – a free conference online, open to anyone to drop into and participate. Presenters all needed to be ALT members. And we hope to run this again this year. The call will go out in September so keep an eye out for that.

Something else ALT does is policy work. So, a big plug here for our ALT Annual Survey – which is our opportunity to understand current and future practice, and to enable us to represent our members’ needs. This information also helps us shape policy responses, for instance on the development of the Digital Learning and Teaching Strategy for Scotland. Currently ALT is preparing a response to the TEF as well.

One of the things I wanted to talk about was… last night I tweeted that I’d be talking here and asked what the benefit of being a member of ALT is… Originally I asked about technology, and I realised there were technologies I wouldn’t have had access to without being part of ALT… For instance, last year we ran an event here at the Informatics Forum where we got to use a real Oculus Rift – certainly at CES, VR is supposed to be the big thing. Also John Kerr at Glasgow Caledonian brought Google Glass along to see how his projects with it worked. There are opportunities to be introduced to new technologies. Back at the 2009 ALT Conference, Joss Winn was experimenting with BuddyPress and finding it useful… Fast forward and we use BuddyPress in ALT activities, online courses, etc. It was that connection and chat that led to that solution… Again, these are part of the benefits of being part of this lovely melting pot of people, contributing to the ALT community… Less about what than who, in many ways.

Other benefits include discounts for the ALT conference (a big one), we also negotiate with other conferences – e.g. Online Educa this year.

Finally… Emerging areas and my advice on this…

This is related to the ALT community/membership thing. Throughout my career I have got the most out of technology by being flexible in what I focus on – but you do need to focus on some things in depth. A benefit of being part of a wider community is that they can filter through those trends a bit, making you aware of them as they do. I have at various times worked on voting systems, peer instruction, Twitter, learning analytics… So, my advice is this: with such a broad field, keep half an eye on what is going on – and the ALT community is great for that – but also pick something to delve into and get lost in…

And with that Martin is done… and we open up for some discussion on emerging areas… this group suggests they include: policy; what an institution is and what its bounds are in the face of online education; teacher presence in various contexts, including the impact of MOOCs on student expectations.

Martin: Expectations are a really interesting area… In peer instruction you move things out of the classroom. Back when we trialled some of those approaches and moved a lecture out, the students resisted… They wanted that lecture, and to be in that room.

Comment: I think that depends on trust in peers… My undergraduate experience involved trusting some but there were also risks of social bullying dynamics and I would have had real concern about that.

Martin: The social aspect of being at an institution is a high priority… Whether an online experience can replicate that is interesting. So is digital identity, and the transition from one form of digital identity to another – the move to professional attributes. Which is why learning technology is never dull!

And with that we broke for lunch and discussion. You can explore Martin’s magic live tweets and Lorna Campbell’s (less automated but no less impressive) live tweets in the Storify below:

You can also view the full story “Martin Hawksey talk on ALT for eLearning@ed (6th Jan 2016)” on Storify.