Oct 072016
 

PS-15: Divides (Chair: Christoph Lutz)

The Empowered Refugee: The Smartphone as a Tool of Resistance on the Journey to Europe – Katja Kaufmann

For those of you from other continents we had a great deal of refugees coming to Europe last year, from Turkey, Syria, etc. who were travelling to Germany, Sweden, and Vienna – where I am from – was also a hub. Some of these refugees had smartphones and that was covered in the (right wing) press about this, criticising this group’s ownership of devices but it was not clear how many had smartphones, how they were being used and that’s what I wanted to look at.

So we undertook interviews with refugees to see if they used them, how they used them. We were researching empowerment by mobile phones, following Svensson and Wamala Larsson (2015) on the role of the mobile phone in transforming capacilities of users. Also with reference to N. Kabeer (1999), A. Sen (1999) etc. on meanings of empowerment in these contexts. Smith, Spend and Rashid (2011) describe mobiles and their networs altering users capability sets, and about phone increasing access to flows of information (Castell 2012).

So, I wanted to identify how smartphones were empowering refugees through: gaining an advantage in knowledge by the experiences of other refugees; sensory information; cross-checking information; and capabilities to opposse actions of others.

In terms of an advantage in knowledge refugees described gaining knowledge from previous refugees on reports, routes, maps, administrative processes, warnings, etc. This was through social networks and Facebook groups in particular. So, a male refugee (age 22) described which people smugglers cannot be trusted, and which can. And another (same age) felt that smart phones were essential to being able to get to Europe – because you find information, plan, check, etc.

So, there was retrospective knowledge here, but also engagement with others during their refugee experience and with those ahead on their journey. This was mainly in WhatsApp. So a male refugee (aged 24) described being in Macedonia and speaking to refugees in Serbia, finding out the situation. This was particularly important last year when approaches were changes, border access changed on an hour by hour basis.

In terms of Applying Sensory Abilities, this was particularly manifested in identifying own GPS position – whilst crossing the Aegean or woods. Finding the road with their GPS, or identifying routes and maps. They also used GPS to find other refugees – friends, family members… Using location based services was also very important as they could share data elsewhere – sending GPS location to family members in Sweden for instance.

In terms of Cross-checking information and actions, refugees were able to track routes whilst in the hand of smugglers. A male Syrian refugee (aged 30) checked information every day whilst with people smugglers, to make sure that they were being taken in the right direction – he wanted to head west. But it wasn’t just routes, it was also weather condiions, also rumous, and cross-checking weather conditions before entering a boat. A female Syrian refugee downloaded an app to check conditions and ensure her smuggler was honest and her trip would be safer.

In terms of opposing actions of others, this was about being capable of opposing actions of others – orders of authorities, potential acts of (police) violence, risks, fraud attempts, etc. Also disobedience by knowledge – the Greek government gave orders about the borders, but smartphones allowed annotated map sharing that allowed orders to be disobeyed. And access to timely information – exchange rates for example – a refugee described negotiating price of changing money down by Google searching for this. And opposition was also about a means to apply pressure – threatening with or publishing photos. A male refugee (aged 25) described holding up phones to threaten to document policy violence, and that was impactful. Also some refugees took pictures of people smugglers as a form of personal protection and information exchange, particularly with publication of images as a threat held in case of mistreatment.

So, in summary the smartphones

Q&A

Q1) Did you have any examples of privacy concerns in your interviews, or was this a concern for later perhaps?

A1) Some mentioned this, some felt some apps and spaces are more scrutinised than others. There was concern that others may have been identified through Facebook – a feeling rather than proof. One said that they do not send their parents any pictures in case she was mistaken by Syrian government as a fighter. But mostly privacy wasn’t an immediate concern, access to information was – and it was very succesful.

Q2) I saw two women in the data here, were there gender differences?

A2) We tried to get more women but there were difficulties there. On the journey they were using smartphones in similar ways – but I did talk to them and they described differences in use before their journey and talked about picture taking and sharing, the hijab effect, etc.

Social media, participation, peer pressure, and the European refugee crisis: a force awakens? – Nils Gustafsson, Lund university, Sweden

My paper is about receiving/host nations. Sweden took in 160,000 refugees during the crisis in 2015. I wanted to look at this as it was a strange time to live in. A lot of people started coming in late summer and early autumn… Numbers were rising. At first response was quite enthusiastic and welcoming in host populations in Germany, Austria, Sweden. But as it became more difficult to cope with larger groups of people, there were changes and organising to address challenge.

And the organisation will remind you of Alexander (??) on the “logic of collective action” – where groups organise around shared ideas that can be joined, ideas, almost a brand, e.g. “refugees welcome”. And there were strange collaborations between government, NGOs, and then these ad hoc networks. But there was also a boom and bust aspect here… In Sweden there were statements about opening hearts, of not shutting borders… But people kept coming through autumn and winter… By December Denmark, Sweden, etc. did a 180 degree turn, closing borders. There were border controls between Denmark and Sweden for the first time in 60 years. And that shift had popular support. And I was intrigued about this. And this work is all part of a longer 3 year project on young people in Sweden and their political engagement – how they choose to engage, how they respond to each other. We draw on Bennett & Segerberg (2013), social participation, social psychology, and the notion of “latent participation” – where people are waiting to engage so just need asking to mobilise.

So, this is work in progress and I don’t know where it will go… But I’ll share what I have so far. And I tried to focus on recruitment – I am interested in when young people are recruited into action by their peers. I am interested in peer pressure here – friends encouraging behaviours, particularly important given that we develop values as young people that have lasting impacts. But also information sharing through young people’s networks…

So, as part of the larger project, we have a survey, so we added some specific questions about the refugee crisis to that. So we asked, “you remember the refugee crisis, did you discuss it with your friends?” – 93.5% had, and this was not surprising as it is a major issue. When we asked if they had discussed it on social media it was around 33.3% – much lower perhaps due to controversy of subject matter, but this number was also similar to those in the 16-25 year old age group.

We also asked whether they did “work” around the refugee crisis – volunteering or work for NGOs, traditional organisations. Around 13.8% had. We also asked about work with non-traditional organisations and 26% said that they had (and in 16-25% age group, it was 29.6%), which seems high – but we have nothing to compare this too.

Colleagues and I looked at Facebook refugee groups in Sweden – those that were open – and I looked at and scraped these (n=67) and I coded these as being either set up as groups by NGOs, churches, mosques, traditional organisations, or whether they were networks… Looking across autumn and winter of 2015 the posts to these groups looked consistent across traditional groups, but there was a major spike from the networks around the crisis.

We have also been conducting interviews in Malmo, with 16-19 and 19-25 year olds. They commented on media coverage, and the degree to which the media influences them, even with social media. Many commented on volunteering at the central station, receiving refugees. Some felt it was inspiring to share stories, but others talked about their peers doing it as part of peer pressure, and critical commenting about “bragging” in Facebook posts. Then as the mood changed, the young people talked about going to the central station being less inviting, on fewer Facebook posts… about feeling that “maybe it’s ok then”. One of our participants was from a refugee background and ;;;***

Q&A

Q1) I think you should focus on where interest drops off – there is a real lack of research there. But on the discussion question, I wasn’t surprised that only 30% discussed the crisis there really.

A1) I wasn’t too surprised either here as people tend to be happier to let others engage in the discussion, and to stand back from posting on social media themselves on these sorts of issues.

Q2) I am from Finland, and we also helped in the crisis, but I am intrigued at the degree of public turnaround as it hasn’t shifted like that in Finland.

A2) Yeah, I don’t know… The middleground changed. Maybe something Swedish about it… But also perhaps to do with the numbers…

Q2) I wonder… There was already a strong anti-immigrant movement from 2008, I wonder if it didn’t shift in the same way.

A2) Yes, I think that probably is fair, but I think how the Finnish media treated the crisis would also have played a role here too.

An interrupted history of digital divides – Bianca Christin Reisdorf, Whisnu Triwibowo, Michael Nelson, William Dutton, Michigan State University, United States of America

I am going to switch gears a bit with some more theoretical work. We have been researching internet use and how it changes over time – from a period where there was very little knowledge of or use of the internet to the present day. And I’ll give some background than talk about survey data – but that is an issue of itself… I’ll be talking about quantitative survey data as it’s hard to find systematic collection of qualitative research instruments that I could use in my work.

So we have been asking about internet use for over 20 years… And right now I have data from Michigan, the UK, and the US… I have also just received further data from South Africa (this week!).

When we think about Digital Inequality the idea of the digital divide emerged in the late 1990s – there was government interest, data collection, academic work. This was largely about the haves vs. have-nots; on vs. off. And we saw a move to digital inequalities (Hargittai) in the early 2000s… Then it went quite aside from work from Neil Selwyn in the UK, from Helsper and Livingstone… But the discussion has moved onto skills…

Policy wise we have also seen a shift… Lots of policies around digital divide up to around 2002, then a real pause as there was an assumption that problems would be solved. Then, in the US at least, Obama refocused on that divide from 2009.

So, I have been looking at data from questionnaires from Michigan State of the State Survey (1997-2016); questionnaires from digital future survey in the US (2000, 2002, 2003, 2014); questionnaires from the Oxford Internet Surveys in the UK (2003, 2005, 2007, 2009, 2013); Hungarian World Internet Project (2009); South African World Internet Project (2012).

Across these data sets we have looked at questionnaires and frequency of use of particular questions here on use, on lack of use, etc. When internet penetration was less high there was a lot of explanation in questions, but we have shifted away from that, so that we assume that people understand that… And we’ve never returned to that. We’ve shifted to devices questions, but we don’t ask other than that. We asked about number of hours online… But that increasingly made less sense, we do that less as it is essentially “all day” – shifting to how frequently they go online though.

Now the State of the State Survey in Michigan is different from the other data here – all the others are World Internet Project surveys but SOSS is not looking at the same areas as not interent researchers neccassarily. In Hungary (2009 data) similar patterns of question use emerged, but particular focus on mobile use. But the South African questionnaire was very different – they ask how many people in the household is using the internet – we ask about the individual but not others in the house, or others coming to the house. South Africa has around 40% penetration of internet connection (at least in 2012 when we have data here), that is a very different context. There they ask for lack of access and use, and the reasons for that. We ask about use/non-use rather than reasons.

So there is this gap in the literature, there is a need for quantitative and qualitative methods here. We also need to understand that we need to consider other factors here, particularly technology itself being a moving target – in South Africa they ask about internet use and also Facebook – people don’t always identify Facebook as internet use. Indeed so many devices are connected – maybe we need

Q&A

Q1) I have a question about the questionnaires – do any ask about costs? I was in Peru and lack of connections, but phones often offer free WhatsApp and free Pokemon Go.

A1) Only the South African one asks that… It’s a great question though…

Q2) You can get Pew questionnaires and also Ofcom questionnaires from their website. And you can contact the World Internet Project directly… And there is an issue with people not knowing if they are on the internet or not – increasingly you ask a battery of questions… and then filtering on that – e.g. if you use email you get counted as an internet user.

A2) I have done that… Trying to locate those questionnaires isn’t always proving that straightforward.

Q3) In terms of instruments – maybe there is a need to developmore nuanced questionnaires there.

A3) Yes.

Levelling the socio-economic playing field with the Internet? A case study in how (not) to help disadvantaged young people thrive online – Huw Crighton Davies, Rebecca Eynon, Sarah Wilkin, Oxford Internet Institute, United Kingdom

This is about a scheme called the “Home Access Scheme” and I’m going to talk about why we could not make it work. The origins here was a city council’s initiative – they came to us. DCLG (2016) data showed 20-30% of the population were below the poverty line, and we new around 7-8% locally had no internet access (known through survey responses). And the players here were researchers, local government, schools, and also an (unnamed) ISP.

The aim of the scheme was to raise attainment in GCSEs, to build confidence, and to improve employability skills. The Schools had a responsibility to identify students in need at school, to procure laptops, memory sticks and software, provide regular, structured in-school pastoral skills and opportunities – not just in computing class. The ISP was to provide set up help, technical support, free internet connections for 2 years.

This scheme has been running two years, so where are we? Well we’ve had successes: preventing arguments and conflict; helped with schoolwork, job hunting; saved money; and improved access to essential services – this is partly as cost cutting by local authorities have moved transactions online like bidding for council housing, repeat prescription etc. There was also some intergenerational bonding as families shared interests. Families commented on the success and opportunities.

We did 25 interiews, 84 1-1 sessions in schools, 3 group workshops, 17 ethnographic visits, plus many more informal meet ups. So we have lots of data about these families, their context, their lives. But…

Only three families had consistent internet access throughout. Only 8 families are still in the programme. It fell apart… Why?

Some schools were so nervous about use that they filtered and locked down their laptops. One school used the scheme money to buy teacher laptops, gave students old laptops instead. Technical support was low priority. Lead teachers left/delegated/didn’t answer emails. Very narrow use of digital technology. No in-house skills training. Very little cross-curriculum integration. Lack of ICT classes after year 11. And no matter how often we asked about it we got no data from schools.

The ISP didn’t set up collections, didn’t support the families, didn’t do what they had agreed to. They tried to bill families and one was threatened with debt collectors!

So, how did this happen? Well maybe these are neoliberalist currents? I use that term cautiously but… We can offer an emergent definition of neoliberalism from this experience.

There is a neoliberalist disfigurement of schools: teachers under intense pressue to meet auditable targets; the scheme’s students subject to a range of targets used to problematise a school’s performance – exclusions, attendance, C grades; the scheme shuffled down priorities; ICT not deemed academic enough under Govian school changes; and learning is stribbed back to narrow range of subjects and focus towards these targets.

There were effects of neoliberalism on the city council: targets and “more for less” culture; scheme disincentivised; erosion of authority of democratic institutional councils – schools beyond authority controls, and high turn over of staff.

There were neoliberalist practices at the ISP: commodifying philanthropy; couldn’t not treat families as customers. And there were dysfunctional mini-markets: they subcontracted delivery and set up; they subcontracted support; they charged for support and charged for internet even if they couldn’t help…

Q&A

Q1) Is the problem digital divides but divides… Any attempt to overcome class separation and marketisation is working against the attempts to fix this issue here.

A1) We have a paper coming and yes, there were big issues here for policy and a need to be holistic… We found parents unable to attend parents evening due to shift work, and nothing in the school processes to accommodate this. And the measure of poverty for children is “free school meals” but many do not want to apply as it is stigmatising, and many don’t qualify even on very low incomes… That leads to children and parents being labelled disengaged or problematic

Q2) Isn’t the whole basis of this work neoliberal though?]

A2) I agree. We didn’t set the terms of this work..

Panel Q&A

Q1/comment) RSE and access

A1 – Huw) Other companies the same

Q2) Did the refugees in your work Katja have access to Sim cards and internet?

A2 – Katja) It was a challenge. Most downloaded maps and resources… And actually they preferred Apple to Android as the GPS is more accurate without an internet connection – that makes a big difference in the Aegean sea for instance. So refugees shared sim cards, used power banks for the energy.

Q3) I had a sort of reflection on Nils’ paper and where to take this next… It occurs to me that you have quite a few different arguements… You have this survey data, the interviews, and then a different sort of participation from the Facebook groups… I have students in Berlin here looking at the boom and bust – and I wondered about that Facebook group work being worth connecting up to that type of work – it seems quite separate to the youth participation section.

A3 – Nils) I wasn’t planning on talking about that, but yes.

Comment) I think there is a really interesting aspect of these campaigns and how they become part of social media and the everyday life online… The way they are becoming engaged… And the latent participation there…

Q3) I can totally see that, though challenging to cover in one article.

Q4) I think it might be interesting to talk to the people who created the surveys to understand motivations…

A4) Absolutely, that is one of the reasons I am so keen to hear about other surveys.

Q5) You said you were struggling to find qualitative data?

A5 – Katja) You can usually download quantitative instruments, but that is harder for qualitative instruments including questions and interview guides…

XP-02: Carnival of Privacy and Security Delights – Jason Edward Archer, Nathanael Edward Bassett, Peter Snyder, University of Illinois at Chicago, United States of America

Note: I’m not quite sure how to write up this session… So these are some notes from the more presentation parts of the session and I’ll add further thoughts and notes later… 

Nathanial: We have prepared three interventions for you today and this is going to be kind of a gallery exploring space. And we are experimenting with wearables…

Fitbits on a Hamster Wheel and Other Oddities, oh my!

Nathanial: I have been wearing a FitBit this week… but these aren’t new ideas… People used to have beads for counting, there are self-training books for wrestling published in the 16th Century. Pedometers were conceived of in Leonardo di Vinci’s drawings… These devices are old, and tie into ideas of posture, and mastering control of physical selves… And we see the pedometer being connected with regimes of fitness – like the Manpo-Meter (“10,000 steps meter) (1965). This narrative takes us to the 1970s running boom and the idea of recreational discipline. And now the world of smart devices… Wearables are taking us to biometric analysis as a mental model (Neff – preprint).

So, these are ways to track, but what happens with insurance companies, with those monitoring you. At Oriel Roberts university students have to track their fitness as part of their role as students. What does that mean? I encourage you all to check out “unfitbit” – interventions to undermine tracking. Or we could, rather than going to the gym with a FitBit, give it to Terry Crews – he’s going anyway! – and he could earn money… Are fitness slaves in our future?

So, use my FitBit – it’s on my account

And so, that’s the first part of our session…

?: Now, you might like to hear about the challenges of running this session… We had to think about how to make things uncomfortable… But then how do you get people to take part… We considered a man-in-the-middle site that was ethically far too problematic! And no-one was comfortable participating in that way… Certainly raising the privacy and security issue… But as we talk of data as a proxy for us… As internet researchers a lot of us are more aware of privacy and security issues than the general population, particularly around metadata. But this would have been one day… I was curious if people might have faked your data for that one day capture…

Nathanial: And the other issue is why we are so much more comfortable sharing information with FitBit, and other sharing platforms, faceless entities versus people you meet at a conference… And we didn’t think about a gender aspect here… We are three white guys here and we are less sensitive to that being publicised rather than privatised. Men talk about how much they can benchpress… but personal metadata can make you feel under scrutiny

Me: I wouldn’t want to share my data and personal data collection tools…

Borrowing laptop vs borrowing phone…

?: In the US there have been a few cases where FitBits have been submitted as evidence in court… But that data is easier to fake… In one case a woman claimed to have been raped, and they used her FitBit to suggest that

Nathanial: You talked about not being comfortable handing someone your phone… It is really this blackbox… Is it a wearable? It has all that stuff, but you wear it on your body…

??: On cellphones there is FOMO – Fear Of Missing Out… What you might mix…

Me: Device as security

Comment: Ableism embedded in devices… I am a cancer survivor and I first used step counts as part of a research project on chemotherapy and activity… When I see a low step day on my phone now… I can feel this stress of those triggers on someone going through that stress…

Nathanial: FitBit’s vibrate when you have/have not done a number of steps… Trying to put you in an ideological state apparatus…

Jh: That nudge… That can be good for able bodied… But if you can’t move that is a very different experience… How does that add to their stress load.

Interperspectival Goggles

Again looking at the condition of virtuality – Hayles 2006(?)

Vision is constructed… Thinking of higher resolution… From small phone to big phone… Lower resolution to higher resolution TV… We have spectacles, quizzing glasses and monocles… And there is the strange idea of training ourselves to see better (William Horation Bates, 1920s)… And emotional state interfering with how you do something… Rgeb we have optomitry and x-rays as a concept of seeing what could not be seen before… And you have special goggles and helmets… LIke the idea of the Image Accumulator in Videodrome (1985?), or the idea of the Memory recorder and playback device in Brainstorm (1983). We see embodied work stations – Da Vinci Surgery Robot (2000) – divorcing what is seen, from what is in front of them…

There are also playful ideas: binocular football; the Decelerator Helmet; Meta-perceptional Helmet (Cleary and Donnelly 2014); and most recently Google Glass – what is there and also extra layers… Finally we have Oculus Rift and VR devices – seeing something else entirely… We can divorce what we see from what we are perceiving… We want to swap people’s vision…

1. Raise awareness about the complexity of electronic privacy and security issues.

2. Identify potential gaps in the research agenda through playful interventions, subversions, and moments of the absurd.

3. Be weird, have fun!

Mathius

“Cell phones are tracking devices that make phonecalls” (Applebaum, 2012)

I am interested in IMSI catcher which masquerades as a wireless base station, prompting phones to communicate with it. They are used by police, law inforcement, etc. They can be small and handheld, or they can be drone mounted. And they can track people, people in crowds, etc. There is always a different way to use it – you can scan for people in crowds. So if you know someone is there you can scan for it in a different way. So, these tools are simple and disruptive and problematic, especially in activism contexts.

But these tools are also capable of caturing transmitted content, and all the data in your phone. These devices are problematic and have raised all sorts of issues about their use, who and how you use them. I’d like to think of this a different way… Is there a right to protest? And to protest anonymously? We do have anti-masking laws in some places – that suggests no right to anonymous protest. But that’s still a different privacy right – covering my face is different from participating at all…

Protests are generally about a minority persuading a majoruty about some sort of change. There is no legal rights to protest anonymously, but there are lots of protected anoymous spaces. So, in the 19th century there was big debate on whether or not the voting ballot should be anonymous – democracy is really the C19th killer app. So there is a lovely quote here about the “The Australian system” by Bernheim (1889) and the introduction of anonymous voting. It wasn’t brought in to preserve privacy. At the time politicians brought votes – buying a keg of beer or whatever – and anonymity was there to stop that, not to preserve individual privacy. But Jill LePore (2008) writes about how our forebears considered casting a “secret ballot” to be “cowardly, underhanded and dispicable”.

So, back to these devices… There can be an idea that “if you have nothing to fear, you have nothing to hide”, but many of us understand that it is not true. And this type of device silences uncomfortable discourse.

Mathias Klang, University of Massachusetts Boston

Q1) How do you think that these devices fit into the move to allow law inforcement to block/”switch off” the camera on protestors/individuals’ phones?

A1) Well people can resist these surveillance efforts, and you will see subversive moves. People can cover cameras, conceal devices etc. But with these devices it may be that the phone becomes unusable, requiring protestors to disable phones or leave phones at home… And phones are really popular and well used for coordinating protests

Bryce Newell, Tilburg Institute for Law, Technology, and Society

I have been working on research in Washington Stat, working with law enforcement on license plate recognition systems and public disclosure law. And looking at what you can tell. So, here is a map of license plate data from Seattle, showing vehicle activity. In Minneapolis similar data being released led to mapping of the governer’s registered vehicles..

The second area is about law enforcement and body cameras. Several years ago peaceful protestors at UC Davis were pepper sprayed. Even in the cropped version of that image you can see a vast number of phones out, recording the event. And indeed there are a range of police surveillance apps that allow you to capture police encounters without that being visible on the phone, including: ACLU Police Tape, Stop and Frisk Watch; OpenWatch; CopRecorder2. And some of these apps upload the recording to the cloud right away to ensure capture. And there have certainly been a number of incidents from Rodney King to Oscar Grant (BART), Eric Garner, Ian Tomlinson, Michael Brown. Of these only the Michael Brown case featured law enforcement with bodycams. There has been a huge call for more cameras on law enforcement… During a training meeting some officers told me “Where’s the direct-to-YouTube button?” and “If citizens can do it, why can’t we also benefit from the ability to record in public places?”. There is a real awareness of control and of citizen videos. I also heard a lot of there being “a witch hunt about to begin…”.

So, I’m in the middle of focused coding on police attitudes to body cameras. Police are concerned that citizen video is edited, out of context, distorting. And they are concerned that it doesn’t show wider contexts – when recording starts, perspective, the wider scene, the fact that provocation occurs before filming usually. But there is also the issue of control, and immediate physical interaction, framing, disclosure, visibility – around their own safety, around how visible they are on the web. They don’t know why it is being recorded, where it will go…

There have been a number of regulatory responses to this challenge: (1) restrict collection – not many, usually budgetary and rarely on privacy; (2) restrict access – going back to the Minneapolis case, within two weeks of the map of governer vehicles being published in the paper they had an exemption to public disclosure law which is now permanent for this sort of data. In the North Carolina protests recently the call was “release the tapes” – and they released only some – then the cry was “release all the tapes”… But on 1st October law changed to again restrict access to this type of data.

But different state provide different access. Some provide access. In Oakland, California, data was released on how many license plates had been scanned. In Seattle data on scans can, because the data for many scans of one licence plates over 90 days is quite specific, you can almost figure out the householder. But granularity varies.

Now, we do see body cameras of sobriety tests, foot chases, and a half hour long interview with prostitute that discloses a lot of data. Washington shares a lot of video to YouTube. We see that in Rotterdam, Netherlands police doing this too.

But one patrol office told me that he would never give his information to an officer with a camera. Another noted that police choose when to start recording with little guidance on when and how to do this.

And we see a “collatoreal visibility” issue for police around these technologies.

Q&A

Q1) Is there any process where police have to disclose that they are filming with a body cam?

A1) Interesting question… Initially they didn’t know. We used to have two party consent process – as for tapings – to ensure consent/implied consent. But the State attorney general described this as outside of that privacy regulation, saying that a conversation with a police officer is a public conversation. But police are starting to have policies that officers should disclose that they have cameras – partly as they hope and sometimes it may reduce violence to police.

Data Privacy in commercial users of municipal location data – Meg Young, University of Washington

My work looks at how companies use Seattle’s location data. I wanted to look at how data privacy is enacted by Seattle municipal government? And I am drawing on the work of Annemarie Mol and John Law (2004), an ethnographer working on health, that focuses on the lived experience. My data is drawing on ethnographic as as well as focus groups, interviews with municipal government and local civic technology communities. I really wanted to present the role of commercial actors in data privacy in city government.

We know that cities collect location data to provide services, and so share it for third parties to do so. In Washinton we have a state freedom of information (FOI) law, which states “The people of this state do not yield their sovereignty to the government…”, making data requestable.

In Seattle the traffic data is collected by a company called Acyclica. The city is growing and the infrastructure is struggling, so they are gathering data to deal with this, to shape traffic signals. This is a large scale longitudinal data collection process. Acyclica are doing that with wi-fi sensors sniff MAC addresses, the location traces sent to Acyclica (MAC salted). The data is aggregated and sent to the city – they don’t see the detailed creepy tracking, but the company does. And this is where the FOI law comes in. The raw data is on the company side here. If the raw data was a public record, it would be requestable. The company becomes a shield for collecting sensitive data – it is proprietizing.

So you can collect data, have service needs met, but without it becoming public to you and I. But analysing the contract the terms do not preclude the resale of data – though a Seattle Dept. of Transport (DOT) worker notes that right now people trust companies more than government. Now I did ask about this data collection – not approved elsewhere – and was told that having wifi settings on in public making you open to data collection – as it is in public space.

My next example is the data from parking meters/pay stations. This shows only the start, end, no credit card #s etc. The DOT is happy to make this available via public records requests. But you can track each individual, and they are using this data to model parking needs.

The third example is the Open Data Portal for Seattle. They pay Socrata to host that public-facing data portal. They also sell access to cleaned, aggregated data to companies through a separate API called the Open Data Network. The Seattle Open Data Manager didn’t see this situation as different from any other reseller. But there is little thought about third party data users – they rarely come up in converations – who may combine this data with other data sets for data analysis.

So, in summary, municipal government data is no less by and for commercial actors as it is the public. Proprietary protections around data are a strategy for protecting sensitive data. Government transfers data to third party

Q&A

Q1) Seattle has a wifi for all programme

A1) Promisingly this data isn’t being held side by side… But the routers that we connect to collect so much data… Seeing an Oracle database of the websites fokls

Q2) What are you policy recommendations based on your work?

A2) We would recommend licensing data with some restrictions on use, so that if the data is used inappropriately their use could be cut off…

Q2) So activists could be blocked by that recommendation?

A2) That is a tension… Activists are keen for no licensing here for that reason… It is challenging, particularly when data brokers can do problematic profiling…

Q2) But that restricts activists from questioning the state as well.

Response – Sandra Braman

I think that these presentations highlight many of the issues that raise questions about values we hold as key as humans. And I want to start from an aggressive position, thinking about how and why you might effectively be an activist in this sort of environment. And I want to say that any concerns about algorithmically driven processes should be evaluated in the same way as we would social process. So, for instance we need think about how the press and media interrogate data and politicians

? “Decoding the social” (coming soon) is looking at social data and analysis of social data in the context of big data. She argues that social life is too big and complex than predicatable data. Everything that people who use big data “do” to understand patterns, are things that activists can do too. We can be just as sophisticated as corporations.

The two things I am thinking about are how to mask the local, and how to use the local… When I talk of masking the local I look back to work I did several years back on local broadcasting. There is mammoth literature on TV as locale, and production and how that is separate, misrepresenting, and the assumptions versus the actual information provided vs actual decision making. My perception is that social activism is that there is some brilliant activity taking place – brilliance at moments, specific apps often. And I think that if you look at the essays that Julian Assange before he founded WikiLeaks, particularly n weak links and how those work… He uses sophisticated social theory in a political manner.

But anonymity is practicably impossible… What can we learn from local broadcast? You can use phones in organised ways – there was training for phone cameras for the Battle of Seattle for instance. You can fight with indistinguishable actions – all doing the same things. Encryption is cat and mouse… Often we have activists presenting themselves as mice, although we did see an app discussed at the plenary on apps to alert you to protest and risk. And I have written before on tactical memory.

In terms of using the local… If you know you will be sensed all the time, there are things you can do as an activist to use that. It is useful to think about how we can conceive of ourselves as activists as part of the network. And I was inspired by US libel laws – if a journalist has transmission/recording devices but are a neutral observer, you are not “repeating” the libel and can share that footage. That goes back to 1970s law, but that can be useful to us.

We are at risk of being censored, but that means that you have choices about what to share, being deliberate in giving signals. We have witnessing, which can be taken as a serious commitment. That can happen with people with phones, you can train witnessing. There are many moments were leakage can be an opportunity – maybe not with volume or content of Snowden, but we can do that. There are also ways to learn and shape learning. But we can also be routers, and be critically engaged in that – what we share, the acceptable error rate. National Security are concerned about where in the stream they should target the misinformation – activists can adopt that too. The server functions – see my strategic memory piece. We certainly have community-based wifi, MESH networks, and that is useful politically and socially. We have responsibilities to build the public that is appropriate, and the networking infrastructure that enables those freedom. We can use more computational power to resolve issues. Information can be an enabler as well as influencing your own activism. Thank you to Anne and her group in Amsterdam for triggering thinking here, but big data we should be engaging critically. If you can’t make decisions in some way, there’s no point to doing it.

I think there needs to be more robustness in managing and working with data. If you go far then you need a very high level of methodological trust. Information has to stand up in court, to respect activist contributions to data. Use as your standard, what would be acceptable in court. And in a Panspectrum (not Panopticon) environment, when data is collected all the times, you absolutely have to ask the right questions.

Panel Q&A

Q1) I was really interested in that idea of witnessing as being part of being a modern digital citizens… Is there more on protections or on that which you can say

A1 – Sandra) We’ve seen all protections for whistle blowing in government disappear under Bush (II)… We still have protections for private sector whistle blowers. But there would be an interesting research project in there…

Q2) I wondered about that idea of cat and mouse use of technology… Isn’t that potentially making access a matter of securitisation…?

A2) I don’t think that “securitisation” makes you a military force… One thing I forgot to say was about network relations… If a system is interacting with another system – the principle of requisite variety – they have to be as complex as the system you are dealing with. You have to be at least as sophisticated as the other guy…

Q3) For Bryce and Meg, there are so many tensions over when data should be public and when it should be private, and tensions there… And police desires to show the good things they do. Also Meg, this idea of privatising data to ensure privacy of data – it’s problematic for us to collect data, but now a third party can do that.

A3 – Bryce) One thing I didn’t explain well enough is that video online comes from police, and from activists – it depends on the video here. Some videos are accessed via public records requests and published to YouTube channel – in fact in Washington you can make requests for free and you can do it anonymously. Police department does public video. Whilst they did a pilot in 2014 they had a hackathon to consider how to deal with redaction issues… detect faces, blur them, etc.. And proactive posting of – only some – video. The narrative of sharing everything, but that isn’t the case. The rhetoric has been about being open, by privacy rights and the new police chief. A lot of it was administrative cost concerns… In the hackathon they asked if posting in a blurred form, it would do away with blanket requests to focus requests. At that time they dealt with all requests for email. They were receiving so many emails and under state law they had to give up all the data and for free. But state law varies, in Charlotte they gave up less data. In some states there is a a differnet approach with press conferences, narratives around the footage as they release parts of videos…

A3 – Meg) The city has worked on how to release data… They have a privacy screening process. They try to provide data in a way that is embedded. They still have a hard core central value that any public record is requestable. Collection limitation is an important and essential part of what cities should be doing… In a way private companies collecting data results in large data sets that will end up insecure in those data sets… Going back to what Bryce was saying, the bodycam initiative was really controversial… There was so much footage and unclear what should be public and when… And the faultlines have been pretty deep. We have the Coalition for Open Government advocates for full access, the ACLU worried that these become surveillance cameras… This was really contentious… They passed a version of a compromise but the bottom line is that the PRA is still a core value for the state.

A3 – Bryce) Much of the ACLU, nationally certainly, was to support bodycams, but individuals and local ACLUs change and vary… They were very pro, then backing off, then local variance… It’s a very different picture hence that variance.

Q4) For Matthias, you talked about anti-masking laws. Are there cases where people have been brought in for jamming signals under that law.

A4 – Matthias) Right now the American cases is looking for keywords – manufacturers of devices, the ways data is discussed. I haven’t seen cases like that, but perhaps it is too new… I am a Swedish lawyer and that jamming would be illegal in protest…

A4 – Sandra) Would that be under antimasking or under jamming law.

A4 – Matthias) It would be under hacking laws…

Q4) If you counter with information… But not if switching phone off…

A4 – Matthias) That’s still allowed right now.

Q5) Do you do work comparing US and UK bodycamera?

A5 – Bryce) I don’t but I have come across the Rotterdam footage. One of my colleagues has looked at this… The impetus for adoption in the Netherlands has been different. In the US it is transparancy, in the Netherlands it was protection of public servants as the narrative. A number of co-authors have just published recently on the use of cameras and how they may increase assault on officers… Seeing some counter-intuitive results… But the why question is interesting.

Comment) Is there any aspect of cameras being used in higher risk areas that makes that more likely perhaps?

A5 – Sandra) It’s the YouTube on-air question – everyone imagines themselves on air.

Q6) Two speakers quoted individuals accused of serious sexual assault… And I was wondering how we account for the fact that activists are not homogenous here… Particularly when tech activists are often white males, they can be problematic…

A6) Techies don’t tend to be the most politically correct people – to generalise a great deal…

A6 – Sandra) I think they are separate issues, if I didn’t engage with people whose behaviour is problematic it would be hard to do any job at all. Those things have to be fought, but as a woman you should also challenge and call those white male activists on their actions.

Q7 – me) I was wondering about the retention of data. In Europe there is a lot of use of CCTV and the model  there is record everything, and retain any incident. In the US CCTV is not in widespread use I think and the bodycam model is record incidents in progress only… So I was wondering about that choice in practice and about the retention of those videos and the data after capture.

A7 – Bryce) The ACLU has looked at retention of data. It is a state based issue. In Washington there are mandatory minimu periods… They are interesting as due to findings in conduct they are under requirements to keep everything for as long as possible so auditors from DOJ can access and audit. Bellingham and Spokane, officers can flag items, and supervisors can… And that is what dictates retention schedule. There are issues there of course. Default when I was there was 2 years. If it is publicly available and hits YouTube then that will be far more long lasting, can pop up again… Perpetual memory there… So actual retention schedule won’t matter.

A7 – Sandra) A small follow up – you may have answered with that metadata… Do they treat bodycam data like other types of police data, or is it a separate class of data?

A7 – Bryce) Generally it is being thought of as data collection… And there is no difference from public disclosure, but they are really worried about public access. And how they share that with prosecutors… They could share on DVD… And wanted to use share function of software… But they didn’t want emails to be publicly disclosable with that link… So being thought about as like email.

Q8 – Sandra) On behalf of colleagues working on visual evidence in course.

Comment – Micheal) There is work on video and how it can be perceived as “truth” without awareness of potential for manipulation.

A8 – Bryce) One of the interesting things in Bellingham was release of that video I showed of a suspect running away… The footage was following a police pick up for suspected drug dealing but the footage showed evasion of arrest and the whole encounter… And in that case, whether or not he was guilty of the drug charge, that video told a story of the encounter. In preparing for the court case the police shared the video with his defence team and almost immediately they entered a guilty plea in response to that… And I think we will see more of that kind of invisible use of footage that never goes to court.

And with that this session ends… 

PA-31:Caught in a feedback loop? Algorithmic personalization and digital traces (Chair: Katrin Weller)

Wiebke Loosen1, Marco T Bastos2, Cornelius Puschmann3, Uwe Hasebrink1, Sascha Hölig1, Lisa Merten1, Jan­-Hinrik Schmidt1, Katharina E Kinder­-Kurlanda4, Katrin Weller4

1Hans Bredow Institute for Media Research; 2; 3Alexander von Humboldt Institute for Internet and Society; 4GESIS Leibniz Institute for the Social Sciences

?? – Marco T Bastos, University of California, Davis  and Cornelius Puschmann, Alexander von Humboldt Institute for Internet and Society

Marco: This is a long-running project that Cornelius and I have been working on. At the time we started, in 2012, it wasn’t clear what impact social media might have on the filtering of news, but they are now huge mediators of news and news content in Western countries.

Since then there is some challenge and conflict between journalists, news editors and audiences and that raises the issue of how to monitor and understand that through digital trace data. We want to think about which topics are emphasized by news editors, and which are most shared by social media, etc.

So we will talk about taking two weeks of content from the NYT and The Guardian across a range of social media sites – that’s work I’ve been doing. And Cornelius has tracked 1.5/4 years worth of content from four German newspapers (Suddeutsche Zeitung, Die Zeit, FAZ, Die Welt).

With the Guardian we accessed data from the API which tells you which articles were published in print, and which have not – that is baseline data for the emphasis editors place on different types of content.

So, I’ll talk about my data from the NY Times and the Guardian, from 2013, though we now have 2014 and 2015 data too. This data from two weeks is about 16k+ articles. The Guardian runs around 800 articles per day, the NYT does around 1000. And we could track the items on Twitter, Facebook, Google+, Delicious, Pinterest and Stumbleupon. We do that by grabbing the unique identifyer for the news article, then use the social media endpoints of social platforms to find sharing. But we had a challenge with Twitter – in 2014 they killed the end point we and others had been using to track sharing of URLs. The other sites are active, but relatively irrelevant in the sharing of news items! And there are considerable differences across the ecosystems, some of these social networks are not immediately identifiable as social networks – will Delicious or Pinterest impact popularity?

This data allows us to contrast the differences in topics identified by news editors and social media users.

So, looking at the NYT there is a lot of world news, local news, opinion. But looking at the range of articles Twitter maps relatively well (higher sharing of national news, opinion and technology news), but Facebook is really different – there is huge sharing of opinion, as people share what lies with their interests etc. We see outliers in every section – some articles skew the data here.

If we look at everything that appeared in print, we can look at a horrible diagram that shows all shares… When you look here you see how big Pinterest is, but in fashion in lifestyle areas. The sharing there doesn’t reflect ratio of articles published really though. Google+ has sharing in science and technology in the Guardian, in environment, jobs, local news, opinion and technology in the NYT.

Interestingly news and sports, which are real staples of newspapers but barely feature here. Economics are even worse. Now the articles are english-speaking but they are available globally… But what about differences in Germany… Over to Cornelius…

Cornelius: So Marcos’ work is ahead of mine – he’s already published some of this work. But I have been applying his approach to German newspapers. I’ve been looking at usage metrics and how that relationship between audiences and publishers, and how that relationship changes over time.

So, I’ve looked at Facebook engagement with articles in four German newspapers. I have compared comments, likes and shares and how contribution varies… Opinion is important for newspapers but not necessarily where the action is. And I don’t think people share stories in some areas less – in economics they like and comment, but they don’t share. So interesting to think about the social perception of sharability.

So, a graph here of Die Zeit here shows articles published and the articles shared on Facebook… You see a real change in 2014 to greater numbers (in both). I have also looked at type of articles and print vs. web versions.

So, some observations: niche social networks (e.g. Pinterest) are more relevant to news sharing than expected. Reliance on FB at Die Zeit grew suddenly in 2014. Social nors of liking, sharing and discussing differ significantly across news desks. Some sections (e.g. sports) see a mismatch of importance and use versus liking and sharing.

In the future we want to look at temporal shifts in social media feedback and newspapers coverage. Monitoring

Q&A

Q1) Have you accounted for the possibility of bots sharing content?

A1 – Marcus) No, we haven’t But we are looking across the board but we cannot account for that with the data we have.

Q2) How did you define or find out that an article was shared from the URLs

A2) Tricky… We wrote a script for parsing shortened URLs to check that.

A2 – Cornelius) Read Marco’s excellent documentation.

Q3) What do you make of how readers are engaging, what they like more, what they share more… and what influences that?

A3 – Cornelius) I think it is hard to judge. There are some indications, and have some idea of some functions that are marketed by the platforms being used in different ways… But wouldn’t want to speculate.

Twitter Friend Reportoires: Inferring sources of information management from digital traces – Jan-Hinri Schmidt; Lisa Merton, Wiebke Loosen, Uwe, Kartin?

Our starting point was to think about shifting the focus of Twitter Research. Many studies are on Twitter – explicitly or implicitly – as a broadcast paradigm, but we want to conceive of it as an information tool, and the concept of “Twitter Friend Reportoires” – using “Friend” in the Twitter terminology – someone I follow. We ware looking for patterns in composition of friend sets.

So we take a user, take their friends list, and compare to list of accounts identified previously. So our index has 7,528 Twitter account of media outlets (20.8%) of organisations (political parties, companies, civil society organisations (53.4%) and of individuals (politicians, celebrities and journalists, 25.8%) – all in Germany. We take our sample, compare with a relational table, and then to our master index. And if the account isn’t found in the master index, we can’t say anything about them yet.

To demonstrate the answers we can find with this approach…. We have looked at five different samples:

  • Audience_TS – sample following PSB TV News
  • Audience_SZ – sample following quality daily newspapers
  • MdB – members of federal parliament
  • BPK – political journalists registerd for the bundespressekonferenz
  • Random – random sample of German Twitter users (via Axel Bruns)

We can look at the friends here, and we can categorise the account catagories. In our random sample 77.8% are not identifiable, 22.2% are in our index (around 13% are individual accounts). That is lower than the percentages of friends in our index for all other audiences – for MdB and BPK a high percentage of their friends are in our index. Across the groups there is less following of organisational accounts (in our index) – with the exception of the MdB and political parties. If we look at the media accounts we can see that with the two audience samples they have more following of media accounts than others, including MdB and BPK… When it comes to individual public figures in our indexes, celebrities are prominent for audiences, much less so for MdB and BPK, but MdB follow other politicians, and journalists tend to follow other politicians. And journalists do follow politicians, and politicians – to a less extent – follow journalists.

In terms of patterns of preference we can construct a model of a fictional user to understand preference between our three categories (organisational accounts, media accounts, individual accounts). And we can use that profile example and compare it with our own data, to see how other users’ behaviours fit that typology. So, in our random sample over a third (37.9%) didn’t follow any organisational accounts. Amongst MdB and BPK there is a clear preference for individual accounts.
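Note: again purely as an illustration, one could derive a simple preference type per user from the category shares computed above; this is a guess at the spirit of the typology, not the authors’ model.

```python
def preference_type(category_shares):
    """Pick the dominant category among identified friends and flag users
    who follow no organisational accounts at all (illustrative only)."""
    identified = {c: s for c, s in category_shares.items() if c != "not_identifiable"}
    note = "follows no organisational accounts" if identified.get("organisation", 0) == 0 else ""
    dominant = max(identified, key=identified.get) if identified else "none"
    return dominant, note

shares = {"media_outlet": 0.10, "organisation": 0.0,
          "individual": 0.55, "not_identifiable": 0.35}
print(preference_type(shares))
# -> ('individual', 'follows no organisational accounts')
```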

So, this is what we are measuring right now… I am still not quite happy with it. It is complex to explain, but also hard to show the detail behind it… We have 20 categories in our master index but only three are shown here… Some frequently asked questions that I will ask and answer based on previous talks…

  1. Around 40% identified accounts is not very much, is it?
    Yes and no! We have increased this over time. But initially we did not include international accounts; if we did that we’d increase the share, especially with celebrities, and also international media outlets. However, there is always a trade-off, and there will always be a long tail… And we are interested in specific categorisations and in public speakers as sources on Twitter.
  2. What does friending mean on Twitter anyway?
    Good question! More qualitative research is needed to understand that – but there is some work on journalists (only). Maybe people friend people for information management reasons, reciprocity norms, public signal of connection, etc. And also how important are algorithmic recommendations in building your set of friends?

Q&A

Q1 – me) I’m glad you raised the issue of recommendation algorithms – the celebrity issue you identified is something Twitter really pushes as a platform now. I was wondering though if you have been looking at how long the people you are looking at have been on Twitter – as behavioural norms…

A1) It would be possible to collect that, but we don’t currently. For journalists and politicians we do gather the list of friends each month to get a longitudinal idea of changes. Over a year, there haven’t been many changes yet…
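Note: for anyone curious what that longitudinal step might look like in practice, a tiny sketch of diffing monthly friend-list snapshots – the dates and IDs below are made up.

```python
# Store a snapshot of an account's friend IDs each month, then diff
# consecutive snapshots to see who was added or removed.
snapshots = {
    "2016-09": {"1001", "1002", "1003"},
    "2016-10": {"1001", "1003", "1004"},
}

def friend_changes(old, new):
    return {"added": new - old, "removed": old - new}

print(friend_changes(snapshots["2016-09"], snapshots["2016-10"]))
# -> {'added': {'1004'}, 'removed': {'1002'}}
```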

Q2) Really interesting talk – could you go further with the repertoire? Could there be a discrepancy between the repertoire and its use in terms of retweeting, replying etc.?

A2) We haven’t so far… We could see which types of tweets accounts are favouriting or retweeting – but we are not there yet.

Q3) A problem here…

A3) I am not completely happy establishing preference based on indexes… But I’m not sure how else to do this, so maybe you can help me with it.

Analysing digital traces: The epistemological dimension of algorithms and (big) internet data – Katharina Kinder-Kurlanda and Katrin Weller

Katharina: We are interested in the epistemological aspects of algorithms, and in how we research these. So, our research subjects are researchers themselves.

So we are seeing a real focus on algorithms in internet research, and we need to understand the (hidden) influence of algorithms on all kinds of research, including on researchers themselves. So we have researchers interested in algorithms… and in platforms, users and data… But all of these aspects are totally intertwined.

So let’s take a Twitter profile… A user of Twitter gets recommendations of who to follow at a given moment in time, and they see newsfeeds at a given moment in time. That user has a context that, as a researcher, I cannot see – so I cannot interpret the impact of that context on the user’s choice of e.g. who they then follow.

So, algorithms observe, count, sort and rank information on the basis of a variety of different data sources – they are highly heterogeneous and transient. Online data can be user-generated content or activity, traces or location data from various internet platforms. That promises new possibilities, but also raises significant challenges, including because of its heterogeneity.

Social media data has uncertain origins – uncertainty about users and their motivations, and often uncertain provenance of the data itself. The “users that we see are not users” but highly structured profiles and the result of careful image-management. And we see renewed discussion of methods and epistemology, particularly within the social sciences; suggestions include “messiness” (Knupf 2014), and ? (Kitchin 2012).

So, what does this mean for algorithms? Algorithms operate on an uncertain basis and present real challenges for internet research. So I’m going to now talk about work that Katrin and I did in a qualitative study of social media researchers (Kinder-Kurlanda and Weller 2014). We conducted interviews at conferences – highly varied – speaking to those working with data obtained from social media. There were 40 interviews in total and we focused on research data management.

We found that researchers developed very individual ways to address epistemological challenges in order to realise the potential of this data for research. And there were three real concerns here: accessibility, methodology, research ethics.

  1. Data access and quality of research

Here there were challenges of data access, restrictions on privacy of social media data, and technical skills; adjusting research questions due to data availability; and the struggle for data access often consumes much effort. Researchers talked about difficulty in finding publication outlets, recognition, and jobs in the disciplinary “mainstream” – it is getting better but it is a big issue. There was also comment on this being a computer science dominated field – with highly formalised review processes and few high-ranking conferences – and this enforces highly strategic planning of resources and research topics. So researchers’ attempts to achieve validity and good research quality are constrained. So, this is really challenging for researchers.

  2. New methodologies for “big data”

Methodologies in this research often defy traditional ways of achieving research validity – through ensuring reproducibility or sharing of data sets (ethically not possible). There is a need to find patterns in large data sets by analysis of keywords, or automated analysis. It is hard for others to understand the process and validate it. Data sets cannot be shared…

  3. Research ethics

There is a lack of users’ informed consent to studies based on online data (Hutton and Henderson 2015). There are ethical complexities. Data cannot really be anonymised…

So, how do algorithms influence our research data and what does this mean for researchers who want to learn something about the users? Algorithms influence what content users interact with, for example: How do we study user networks without knowing the algorithms behind follower/friend suggestions? How do we study populations?

To get back to the question of observing algorithms: well, the problem is that various actors in the most diverse situations react out of different interests to the results of algorithmic calculations, and may even try to influence algorithms. You see that with tactics around trending hashtags as part of protest, for instance. And the results of algorithmic analyses are presented to internet users with little indication of how algorithms took part.

In terms of next steps, researchers need to be aware that online environments are influenced by algorithms, and so are the users and the data they leave behind. It may mean capturing the “look and feel” of the platform as part of research.

Q&A

Q1) One thing I wasn’t sure about… Is your sense when you were interviewing researchers that they were unaware of algorithmic shaping… Or was it about not being sure how to capture that?

A1) “Algorithms” wasn’t the terminology when we started our work… They talked about big data… The framing and terminology is shifting… So we are adding the algorithms now… But we did find varying levels of understanding of platform function – some were very aware of platform dynamics, but some felt that if they have a Twitter dataset, that’s a representation of the real world.

Q1) I would think that if we think about recognising how algorithms and platform function come in as an object… Presumably some working on interfaces were aware, but others looking at, e.g., friendship groups took the data and weren’t thinking about platform function – but that is something they should be thinking about…

A1) Yes.

Q2) What do you mean by the term “algorithm” now, and how is that term different from how it was used previously…

A2) I’m sure there is a messiness to this term. I do believe that looking at the programmes wouldn’t solve that problem. You have the algorithm in itself, gaining attention… from researchers and industry… So you have programmers tweaking algorithms here… as part of different structures and pressures and contexts… But algorithms are part of a lot of peoples’ everyday practice… It makes sense to focus on those.

Q3) You started at the beginning with an illustration of the researcher in the middle, then moved onto the agency of the user… And the changes to the analytical capacities working with this type of data… But how much awareness is there amongst researchers of how the data and the tools they work with are inscribed into the research…

A3) Thank you for making that distinction. The problem in a way is that we saw what we might expect – highly varied awareness… This was determined by disciplinary background – whether an STS researcher in sociology, or a computer scientist, say. We didn’t find too many disciplinary trends, but we looked across many disciplines… There were huge ranges of approach and attitude here – our data was too broad.

Panel Q&A

Q1 – Cornelius) I think that we should say that if you are wondering about “feedback” here, it’s about thinking about metrics and how they then feed back into practice – whether there is a feedback loop… from very different perspectives… I would like to return to that – maybe next year when research has progressed. More qualitative understanding is needed. But a challenge is that stakeholder groups vary greatly… What if one finding doesn’t hold for other groups…

Q2) I am from the Wikimedia Foundation… I’m someone who does data analysis a lot. I am curious whether, in looking at these problems, you have looked at recommender systems research, which has been researching this space for 10 years – work on messy data and cleaning messy data… There are so many tiny differences that can really make a difference. I work on predictive algorithms, but that’s a new bit of turbulence in a turbulent sea… How much of this do you want to bring into this space…

A2 – Katrin) These communities have not come together yet. I know people who work in socio-technical studies who do study interface changes… There is another community that is aware that this exists… and one that is not so closely aware… but sees it as tiny bits of the same puzzle… And it can be harder to understand for historical data… and to get an idea of what factors influenced your data set. In our data sets we have interviewees more like you, and some like people at sessions like this… There is some connection, but not all of those areas are coming together…

A2 – Cornelius) I think that there is a clash between computational social science data work and this stuff here… That predictive aspect screws with big claims about society… Maybe an awareness but not a keenness. There is older computer science research that we are not engaging with, but should be… But often there is a conflict of interests… I saw a presentation that showed changes to the interface changing behaviour… but companies don’t want to disclose that manipulation…

Comment) We’ve gone through a period – and I’m disheartened to see it is still there – where researchers are so excited to trace human activities that they treat hashtags as the political debate… This community helpfully problematises or contextualises this… But I think that these papers are raising the question of people orienting practices towards the platform, and towards machine learning… I find it hard to talk about that… and about how behaviour feeds into machine learning… Our system tips to behaviour, and technology shifts and reacts to that, which is hard.

Q3) I wanted to agree with that idea of the need to document. But I want to push at your implicit position that this is messy and difficult and hard to measure… I think that applies to *any* method… Standards of data removal arise elsewhere, messiness occurs elsewhere… Some of those issues apply across all kinds of research…

A3 – Cornelius) Christian would have had an example on his algorithm audit work that might have been helpful there.

Comment) I wanted to comment on social media research versus traditional social science research… We don’t have much power over our data set – that’s quite different in comparison with those running surveys or undertaking interviews, where I have control of that tool… And I think that argument isn’t just about survey analysis, but other qualitative analysis… Your research design can fit your purposes…

 

Twitter recommendation algorithms, celebrities and noise. Time on Twitter. Overall follower/following counts? Does friend suggestion influence?

Advertisers? and their role in shaping content in news

Time:
Friday, 07/Oct/2016:

4:00pm – 5:30pm

Session Chair:

Location: HU 1.205
Humboldt University of Berlin Dorotheenstr. 24 Building 1, second floor 80 seats

Presentations

Wiebke Loosen1, Marco T Bastos2, Cornelius Puschmann3, Uwe Hasebrink1, Sascha Hölig1, Lisa Merten1, Jan-Hinrik Schmidt1, Katharina E Kinder-Kurlanda4, Katrin Weller4

1Hans Bredow Institute for Media Research; 2University of California, Davis; 3Alexander von Humboldt Institute for Internet and Society; 4GESIS Leibniz Institute for the Social Sciences

Oct 062016
 

Today I am again at the Association of Internet Researchers AoIR 2016 Conference in Berlin. Yesterday we had workshops, today the conference kicks off properly. Follow the tweets at: #aoir2016.

As usual this is a liveblog so all comments and corrections are very much welcomed. 

PA-02 Platform Studies: The Rules of Engagement (Chair: Jean Burgess, QUT)

How affordances arise through relations between platforms, their different types of users, and what they do to the technology – Taina Bucher (University of Copenhagen) and Anne Helmond (University of Amsterdam)

Taina: Hearts on Twitter: In 2015 Twitter moved from stars to hearts, changing the affordances of the platform. They stated that they wanted to make the platform more accessible to new users, but that impacted on existing users.

Today we are going to talk about conceptualising affordances. In its original meaning an affordance is conceived of as a relational property (Gibson). For Norman perceived affordances were more the concern – thinking about how objects can exhibit or constrain particular actions. Affordances are not just the visual clues or possibilities, but can be felt. Gaver talks about these technology affordances. There are also social affordances – talked about by many – mainly about how poor technological affordances have impact on societies. It is mainly about the impact of technology and how it can contain and constrain sociality. And finally we have communicative affordances (Hutchby): how technological affordances impact on communities and communication practices.

So, what about platform changes? If we think about design affordances, we can see that there are different ways to understand this. The official reason given for the design change was about the audience, affording sociality of community and practices.

Affordances continues to play an important role in media and social media research. They tend to be conceptualised as either high-level or low-level affordances, with ontological and epistemological differences:

  • High: affordance in the relation – actions enabled or constrained
  • Low: affordance in the technical features of the user interface – reference to Gibson but they vary in where and when affordances are seen, and what features are supposed to enable or constrain.

Anne: We now want to turn to a platform-sensitive approach, expanding the notion of the user to different types of platform users: end-users, developers, researchers and advertisers – there is a real diversity of users and user needs and experiences here (see Gillespie on platforms). So, in the case of Twitter there are many users and many agendas – and multiple interfaces. Platforms are dynamic environments – and that differentiates social media platforms from the environments Gibson was describing. Computational systems driving media platforms are different: social media platforms adjust interfaces to their users through personalisation, A/B testing, and algorithmic organisation (e.g. Twitter recommending people to follow based on interests and actions).

In order to take a relational view of affordances, and do that justice, we also need to understand what users afford to the platforms – as they contribute, create content, and provide data that enables use, development and income (through advertisers) for the platform. Returning to Twitter… the platform affords different things for different people.

Taking medium-specificity of platforms into account we can revisit earlier conceptions of affordance and critically analyse how they may be employed or translated to platform environments. Platform users are diverse and multiple, and relationships are multidirectional, with users contributing back to the platform. And those different users have different agendas around affordances – and in our Twitter case study, for instance, that includes developers and advertisers, users who are interested in affordances to measure user engagement.

How the social media APIs that scholars so often use for research are—for commercial reasons—skewed positively toward ‘connection’ and thus make it difficult to understand practices of ‘disconnection’ – Nicholas John (Hebrew University of Jerusalem) and Asaf Nissenbaum (Hebrew University of Jerusalem)

Consider this… on Facebook, if you add someone as a friend they are notified. If you unfriend them, they are not. If you post something you see it in your feed; if you delete it, that is not broadcast. Facebook has a page called World of Friends – they don’t have one called World of Enemies. And Facebook does not take kindly to app creators who seek to surface unfriending and removal of content. Facebook is, like other social media platforms, therefore significantly biased towards positive friending and sharing actions. And that has implications for norms and for our research in these spaces.

One of our key questions here is what we can’t know about…

Agnotology is defined as the study of ignorance. Robert Proctor talks about this in three terms: native state – childhood for instance; strategic ploy – e.g. the tobacco industry on health for years; lost realm – the knowledge that we cease to hold, that we lose.

I won’t go into detail on critiques of APIs for social science research, but as an overview the main critiques are:

  1. APIs are restrictive – they can cost money, we are limited to a percentage of the whole – Burgess and Bruns 2015; Bucher 2013; Bruns 2013; Driscoll and Walker
  2. APIs are opaque
  3. APIs can change with little notice (and do)
  4. Omitted data – Baym 2013 – now our point is that these platforms collect this data but do not share it.
  5. Bias to present – boyd and Crawford 2012

Asaf: Our methodology was to look at some of the most popular social media spaces and their APIs. We were looking at connectivity in these spaces – liking, sharing, etc. And we also looked for the opposite traits – unliking, deletion, etc. We found that social media had very little data, if any, on “negative” traits – and we’ll look at this across three areas: other people and their content; me and my content; commercial users and their crowds.

Other people and their content – APIs tend to supply basic connectivity – friends/following, grouping, likes. There is almost no historical content – except on Facebook, which shares when a user has liked a page. It is current state only – disconnections are not accounted for. There may be a reason not to share this data – privacy concerns perhaps – but that doesn’t explain my not being able to find this sort of information about my own profile.

Me and my content – negative traits and actions are hidden even from ourselves. Success is measured – likes and sharing, of you or by you. Decline is not – disconnections are lost connections… except on Twitter where you can see analytics of followers – but no names there, and not in the API. So we are losing who we once were but are not anymore. Social network sites do not see fit to share information over time… Lacking disconnection data is an ideological and commercial issue.

Commercial users and their crowds – these users can see much more of their histories, and of the negative actions online. They have a different regime of access in many cases, with the ups and downs revealed – though you may need to pay for access. Negative feedback receives special attention. Facebook offers the most detailed information on usage – including blocking and unliking information. Customers know more than users do – compare Pages vs. Groups.

Nicholas: So, implications. What Asaf has shared shows the risk for API-based research, where researchers’ work may be shaped by the affordances of the API being used. Any attempt to capture negative actions – unlikes, choices to leave or unfriend – is frustrated. If we can’t use APIs to measure social media phenomena, we have to use other means. So, unfriending is understood through surveys – time consuming and problematic. And that can put you off exploring these spaces – it limits research. The advertiser-friendly user experience distorts the space – it’s like the stock market only reporting the rises, except for a few super wealthy users who get the full picture.

A biography of Twitter (a story told through the intertwined stories of its key features and the social norms that give them meaning, drawing on archival material and oral history interviews with users) – Jean Burgess (Queensland University of Technology) and Nancy Baym (Microsoft Research)

I want to start by talking about what I mean by platforms, and what I mean by biographies. Here platforms are those social media platforms that afford particular possibilities; they enable and shape society – we heard about the platformisation of society last night – but their governance and affordances are shaped by their own economic existence. They are shaping and mediating socio-cultural experience and we need to better understand the values and socio-cultural concerns of the platforms. By platform studies we mean treating social media platforms as spaces to study in their own right: as institutions, as mediating forces in the environment.

So, why “biography” here? First we argue that whilst biographical forms tend to be reserved for individuals (occasionally companies and race horses), they are about putting the subject in the context of relationships and its place in time, and that context shapes the subject. Biographies are always partial though – based on unreliable interviews and information, they quickly go out of date, and just as we cannot get inside the heads of those who are subjects of biographies, we cannot get inside many of the companies at the heart of social media platforms. But (after Richard Rogers) understanding changes helps us to understand the platform.

So, in our forthcoming book, Twitter: A Biography (NYU 2017), we will look at competing and converging desires around e.g. the @, RT, and #. Twitter’s key feature set are key characters in its biography. Each has been a rich site of competing cultures and norms. We drew extensively on the Internet Archive, bloggers, and interviews with a range of users of the platform.

Nancy: When we interviewed people we downloaded their archive with them and talked through their behaviour and how it had changed – and many of those features and changes emerged from that. What came out strongly is that no one knows what Twitter is for – not just amongst users but also amongst the creators – you see that today with Jack Dorsey and Anne Richards. The heart of this issue is whether Twitter is about sociality and fun, or a very important site for sharing important news and events. Users try to negotiate why they need this space, what it is for… They start squabbling, saying “Twitter, you are doing it wrong!”… Changes come with backlash and response, and changed decisions from Twitter… But that is also accompanied by the media coverage of Twitter, and by the third party platforms built on Twitter.

So the “@” is at the heart of Twitter for sociality and Twitter for information distribution. It was imported from other spaces – IRC most obviously – as were other features. One of the earliest things Twitter incorporated was the @ and the links back… Originally you could see everyone’s @ replies, and that led to feed clutter – although some liked seeing unexpected messages like this. So Twitter made a change so you could choose. And then they changed again so that you automatically did not see replies from those you don’t follow. So people worked around that with “.@” – which created conflict between the needs of the users, the ways they make the platform usable, and the way the platform wants to make the space less confusing to new users.

The “RT” gave credit to people for their words, and preserved integrity of words. At first this wasn’t there and so you had huge variance – the RT, the manually spelled out retweet, the hat tip (HT). Technical changes were made, then you saw the number of retweets emerging as a measure of success and changing cultures and practices.

The “#” is hugely disputed – it emerged through hashtags.org: you couldn’t follow them in Twitter at first, but Twitter incorporated the feature to fend off third party tools. Hashtags are beloved by techies, and hated by user experience designers. And they are useful, but they are also easily coopted by trolls – as we’ve seen on our own hashtag.

Insights into the actual uses to which audience data analytics are put by content creators in the new screen ecology (and the limitations of these analytics) – Stuart Cunningham (QUT) and David Craig (USC Annenberg School for Communication and Journalism)

The algorithmic culture is well understood as a part of our culture. There are around 150 items on Tarleton Gillespie and Nick Seaver’s recent reading list and the literature is growing rapidly. We want to bring back a bounded sense of agency in the context of online creatives.

What do I mean by “online creatives”? Well we are looking at social media entertainment – a “new screen ecology” (Cunningham and Silver 2013; 2015) shaped by new online creatives who are professionalising and monetising on platforms like YouTube, as opposed to professional spaces, e.g. Netflix. YouTube has more than 1 billion users, with revenue in 2015 estimated at $4 billion per year. And there are a large number of online creatives earning significant incomes from their content in these spaces.

Previously online creatives were bound up with ideas of democratic participative cultures, but we want to offer an immanent critique of the limits of data analytics/algorithmic culture in shaping SME from within the industry, on both the creator (bottom up) and platform (top down) side. This is an approach to social criticism that exposes the way reality conflicts not with some “transcendent” concept of rationality but with its own avowed norms, drawing on Foucault’s work on power and domination.

We undertook a large number of interviews and from that I’m going to throw some quotes at you… There is talk of information overload – of what one might do as an online creative presented with a wealth of data. Creatives talk about the “non-scalable practices” – the importance and time required to engage with fans and subscribers. Creatives talk about at least half of a working week being spent on high touch work like responding to comments, managing trolls, and dealing with challenging responses (especially with creators whose kids are engaged in their content).

We also see cross-platform engagement – and an associated major scaling in workload. There is a volume issue on Facebook, and the use of Twitter to manage that. There is also a sense of unintended consequences – scale has destroyed value. Income might be $1 or $2 for 100,000s or millions of views. There are inherent limits to algorithmic culture… But people enjoy being part of it and reflect a real entrepreneurial culture.

In one or two sentences, the history of YouTube can be seen as a sort of clash of NorCal and SoCal cultures. Again, no one knows what it is for. And that conflict has been there for ten years. And you also have the MCNs (Multi-Channel Networks) who are caught like the meat in the sandwich here.

Panel Q&A

Q1) I was wondering about user needs and how that factors in. You all drew upon it to an extent… And the dissatisfaction of users around whether needs are listened to or not was evident in some of the case studies here. I wanted to ask about that.

A1 – Nancy) There are lots of users, and users have different needs. When platforms change and users are angry, others are happy. We have different users with very different needs… Both of those perspectives are user needs, they both call for responses to make their needs possible… The conflict and challenges, how platforms respond to those tensions and how efforts to respond raise new tensions… that’s really at the heart here.

A1 – Jean) In our historical work we’ve also seen that some users voices can really overpower others – there are influential users and they sometimes drown out other voices, and I don’t want to stereotype here but often technical voices drown out those more concerned with relationships and intimacy.

Q2) You talked about platforms and how they developed (and I’m afraid I didn’t catch the rest of this question…)

A2 – David) There are multilateral conflicts about what features to include and exclude… And what is interesting is thinking about what ideas fail… With creators you see economic dependence on platforms and affordances – e.g. versus PGC (Professionally Generated Content).

A2 – Nicholas) I don’t know what user needs are in a broader sense, but everyone wants to know who unfriended them, who deleted them… And a dislike button, or an unlike button… The response was strong but “this post makes me sad” doesn’t answer that and there is no “you bastard for posting that!” button.

Q3) Would it be beneficial to expose unfriending/negative traits?

A3 – Nicholas) I can think of a use case for why unfriending would be useful – for instance wouldn’t it be useful to understand unfriending around the US elections. That data is captured – Facebook know – but we cannot access it to research it.

A3 – Stuart) It might be good for researchers, but is it in the public good? In Europe and with the Right to be Forgotten should we limit further the data availability…

A3 – Nancy) I think the challenge is that mismatch of only sharing good things, not sharing and allowing exploration of negative contact and activity.

A3 – Jean) There are business reasons for positivity versus negativity, but it is also about how the platforms imagine their customers and audiences.

Q4) I was intrigued by the idea of the “medium specificity of platforms” – what would that be? I’ve been thinking about devices and interfaces and how they are accessed… We have what we think of as a range but actually we are used to using really one or two platforms – e.g. the Apple iPhone – in terms of design, icons, etc., and what the possibilities of the interface are, and what happens when something is made impossible by the interface.

A4 – Anne) By “medium specificity” we are talking about the platform itself as medium – moving beyond the end user and user experience. We wanted to take into account the role of the user – the platform also has interfaces for developers, for advertisers, etc. – and we wanted to think about those multiple interfaces, where they connect, how they connect, etc.

A4 – Taina) It’s a great point about medium specificity, but for me it’s more about platform specificity.

A4 – Jean) The integration of mobile web means the phone iOS has a major role here…

A4 – Nancy) We did some work with couples who brought in their phones, and when one had an Apple and one had an Android phone we actually found that they often weren’t aware of what was possible in the social media apps as the interfaces are so different between the different mobile operating systems and interfaces.

Q5) Can you talk about algorithmic content and content innovation?

A5 – David) In our work with YouTube we see forms of innovation that are very platform specific, around things like Vine and Instagram. And we also see counter-industrial forms and practices. So, in the US, we see blogging and first person accounts of lives… beauty, unboxing, etc. But if you map content innovation you see (similarly) this taking the form of gaps in mainstream culture – in India that’s stand up comedy for instance. Algorithms are then looking for qualities and connections based on what else is being accessed – creating a virtuous circle…

Q6) Can we think of platforms as unstable, as having not quite such a uniform sense of purpose and direction…

A6 – Stuart) Most platforms are very big in terms of their finance… If you compare that to 20 years ago the big companies knew what they were doing! Things are much more volatile…

A6 – Jean) That’s very common in the sector, except maybe on Facebook… Maybe.

PA-05: Identities (Chair: Tero Jukka Karppi)

The Bot Affair: Ashley Madison and Algorithmic Identities as Cultural Techniques – Tero Karppi, University at Buffalo, USA

As of 2012 Ashley Madison is the biggest online dating site targeted at those already in a committed relationship. Users are asked to share their gender, their sexuality, and to share images. Some aspects are free but message and image exchange are limited to paid accounts.

The site was hacked in 2015, with user data stolen and then shared. Security experts who analysed the data assessed it as real, associated with real payment details etc. The hackers’ intention was to expose cheaters, but my paper is focused on a different aspect of the aftermath. Analysis showed 43 male bots and around 70,000 female bots, and that is the focus of my paper. And I want to think about this space and connectivity by removing the human user from the equation.

The method for me was about thinking about the distinction between human and non-human user, the individual and the bot. Drawing on German media theory, I wanted to use cultural techniques – with materials, symbolic values, rules and places. So I am seeking elements of difference across different materials in the context of the hack and its aftermath.

So, looking at a news item: “Ashley Madison, the dating website for cheaters, has admitted that some women on its site were virtual computer programmes instead of real women.” (CNN Money), which goes on to say that users thought that they were cheating, but they weren’t after all! These bots interacted with users in a variety of ways, from “winking” to messaging. The role of the bot is to engage users in the platform and transform them into paying customers. A blogger talked about the space as all fake – the men are cheaters, the women are bots and only the credit card payments are real!

The fact that the bots are so gender imbalanced tells us something about the difference in how the platform imagines male and female users. In another commentary they note the ways in which fake accounts drew men in – both by implying real women were on the site, and by using real images on fake accounts… The lines between what is real and what is fake have been blurred. Commentators noted the opaqueness of connectivity here, and of the role of the bots. Who knows how many of the 4 million users were real?

The bots are designed to engage users, to appear as human to the extent that we understand human appearance. Santine Olympo talked about bots whilst others looking at algorithmic spaces and what can be imagined and created from our wants and needed. According to Ashley Madison employees the bots – or “angels” – were created to match the needs of users, recycling old images from real user accounts. This case brings together the “angel” and human users. A quote from a commentator imagines this as a science fiction fantasy where real women are replaced by perfect interested bots. We want authenticity in social media sites but bots are part of our mundane everyday existence and part of these spaces.

I want to finish by quoting from Ashley Madison’s terms and conditions, in which users agree that “some of the accounts and users you may encounter on the site may be fiction”.

Facebook algorithm ruins friendship – Taina Bucher, University of Copenhagen

“Rachel”, a Facebook user/informant, states this in a tweet. She has a Facebook account that she doesn’t use much. She posts something and old school friends she has forgotten comment on it. She feels out of control… And what I want to focus on today are the ordinary affects of algorithmic life, taking that idea from ?’s work and Catherine Stewart’s approach to using this in the context of understanding the encounters between people and algorithmic processes. I want to think about the encounter, and how the encounter itself becomes generative.

I think that the fetish could be one place to start in knowing algorithms… And how people become attuned to them. We don’t want to treat algorithms as a fetish. The fetishist doesn’t care about the object, just about how the object makes them feel. And so the algorithm as fetish can be a mood maker, using the “power of engagement”. The power does not reside in the algorithm, but in the types of ways people imagine the algorithm to exist and impact upon them.

So, I have undertaken a study of people’s personal algorithm stories about the Facebook algorithm, monitoring and querying Twitter for comments and stories (through keywords) relating to Facebook algorithms. And a total of 25 interviews were undertaken via email, chat and Skype.

So, when Rachel tweeted about Facebook and friendship, that gave me the starting point to understand stories and the context for these positions through interviews. And what repeatedly arose was the uncanny nature of Facebook algorithms. Take, for instance, Michael, a musician in LA. He shares a post and usually the likes come in rapidly, but this time nothing… He tweets that the algorithm is “super frustrating” and he believes that Facebook only shows paid-for posts. Like others he has developed his own strategy to have his posts show more clearly. He says:

“If the status doesn’t build buzz (likes, comments, shares) within the first 10 minutes or so it immediately starts moving down the news feed and eventually gets lost.”

Adapting behaviour to social media platforms and their operation can be seen as a form of “optimisation”. Users aren’t just updating their profile or hoping to be seen, they are trying to change behaviours to be better seen by the algorithm. And this takes us to the algorithmic imaginary: the ways of thinking about what algorithms are, what they should be, how they function, and what these imaginations in turn make possible. Many of our participants talked about changing behaviours for the platform. When Rachel talks about “clicking every day to change what will show up on her feed”, she is not only using the platform, but thinking and behaving differently in the space. Adverts can also suggest algorithmic intervention and, no matter whether the user is actually profiled or not (e.g. for anti-wrinkle cream), users can feel profiled regardless.

So, people do things to algorithms – disrupting liking practices, commenting more frequently to increase visibility, emphasising positively charged words, etc. These are not just interpreted by the algorithm but also shape that algorithm. Critiquing the algorithm is not enough; people are also part of the algorithm and impact upon its function.

Algorithmic identity – Michael Stevenson, University of Groningen, Netherlands

Michael is starting with a poster of Blade Runner… Algorithmic identity brings to mind cyberpunk and science fiction. But day to day algorithmic identity is often about ads for houses, credit scores… And I’m interested in this connection between this clash of technological cool vs mundane instruments of capitalism.

For critics the “cool” is seen as an ideological cover for the underlying political economy. We can look at the rhetoric around technology – “rupture talk”, digital utopianism as that covering of business models etc. Evgeny Morozov writes entertainingly of this issue. I think this critique is useful but I also think that it can be too easy… We’ve seen Morozov tear into Jeff Jarvis and Tim O’Reilly, describing the latter as a spin doctor for Silicon Valley. I think that’s too easy…

My response is this… an image of Christopher Walken saying “needs more Bourdieu”. I think we need to take seriously the values and cultures, and the effort it takes to create those. Bourdieu talks about the new media field with areas of “web native”, open, participatory, transparent at one end of the spectrum – the “autonomous pole” – and the “heteronomous pole” of mass/traditional media: closed, controlled, opaque. The idea is that actors locate themselves between these poles… There is also competition to be seen as the most open, the most participatory – you may remember a post from a few years back on Google’s idea of open versus that of Facebook. Bourdieu talks of the autonomous pole as being about downplaying income and economic value, whereas the heteronomous pole is much more directly about that…

So, I am looking at “Everything” – a site designed in the 1990s. It was built by the guys behind Slashdot. It was intended as a compendium of knowledge to support that site and accompany it – items of common interest, background knowledge that wasn’t news. If we look at the site we see implicit and explicit forms of impact… voting forms on articles (e.g. “I like this write up”), and soft links at the bottom of the page – generated by these types of feedback and engagement. This was the first version in the 1990s. Then in 1999 Nathan Dussendorf(?) developed Everything2, built with the Everything Development Engine. This is still online. Here techniques of algorithmic identity and datafication of users are very explicitly presented – very much unlike Facebook. Among the geeks here the technology is put on top, showing reputation on the site. And being open source, if you wanted to understand the recommendation engine you could just look it up.

If we think of algorithms as talk makers, and we look back at 1999’s Everything2, you see the tracking and datafication in place, but the statement around it talks about web 2.0/social media type ideas of democracy and meritocracy, and conflations of cultural values and social actions with technologies and techniques. Aspects of this are bottom up, and the documentation also talks about the role of cookies and addresses privacy. And it directly says “the more you participate, the greater the opportunity for you to mold it your way”.

Thinking about Field Theory we can see some symbolic exclusion – of Microsoft, of large organisations – as a way to position Everything2 within the field. This continues throughout the documentation across the site. And within this field “making money is not a sin” – that developers want to do cool stuff, but that can sit alongside making money.

So, I don’t want to suggest this is a utopian space… Everything2 had a business model, but this was of its time for open source software. The idea was to demonstrate capabilities of the development framework, to get them to use it, and to then get them to pay for services… But this was 2001 and the bubble burst… So the developers turned to “real jobs”. But Everything2 is still out there… And you can play with the first version on an archived version if you are curious!

The Algorithmic Listener – Robert Prey, University of Groningen, Netherlands

This is a version of a paper I am working on – feedback appreciated. It was sparked by re-reading Raymond Williams, who wrote that “there are in fact no masses, but only ways of seeing people as masses” (1958/2011). But I think that in the current environment Williams might now say “there are in fact no individuals, but only ways of seeing people as individuals”. And for me, I’m looking at this through the lens of music platforms.

In an increasingly crowded and competitive sector, platforms like Spotify, SoundCloud, Apple Music, Deezer, Pandora and Tidal are increasingly trying to differentiate themselves through recommendation engines. And I’ll go on to talk about recommendation as individualisation.

Pandora internet radio calls itself the “Music Genome Project” and sees music as genes. It seeks to provide recommendations that are outside the distorting impact of cultural information – e.g. you might like “The Colour of My Love” but you might be put off by the fact that Celine Dion is not cool. They market themselves against the crowd. They play on the individual as the part separated from the whole. However…

Many of you will be familiar with Spotify, and will therefore be familiar with Discover Weekly. The core of Spotify is the “taste profile”. Every interaction you have is captured and recorded in real time – selected artists, songs, behaviours, what you listen to and for how long, what you skip. Discover Weekly uses both the taste profile and aspects of collaborative filtering – selecting songs you haven’t discovered that fit your taste profile. So whilst it builds a unique identity for each user, it also relies heavily on other peoples’ taste. Pandora treats other people as distortion; Spotify sees them as more information. Discover Weekly also understands the user based on current and previous behaviours. Ajay Kalia (Spotify) says:

“We believe that it’s important to recognise that a single music listener is usually many listeners… [A] person’s preference will vary by the type of music, by their current activity, by the time of day, and so on. Our goal then is to come up with the right recommendation…”

This treats identity as being in context, as being the sum of our contexts. Previously fixed categories, like gender, are not assigned at the beginning but emerge from behaviours and data. Pagano talks about this, whilst Cheney-Lippold (2011) talks about a “cybernetic relationship to the individual” and the idea of individuation (Simondon). For Simondon we are not individuals; individuals are an effect of individuation, not the cause. A focus on individuation transforms our relationship to recommendation systems… We shouldn’t be asking if they understand who we are, but the extent to which the person is an effect of personalisation. Personalisation is presented as being about you and your needs. From a Simondonian perspective there is no “you” or “want” outside of technology. In taking this perspective we have to acknowledge the political economy of music streaming systems…
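Note: to make the collaborative filtering idea concrete – and emphatically not as a description of Spotify’s actual system – here is a toy user-based collaborative filtering sketch with invented listeners and play counts.

```python
import numpy as np

listeners = ["anna", "ben", "cara"]
songs = ["song_a", "song_b", "song_c", "song_d"]
plays = np.array([
    [12, 0, 3, 0],   # anna
    [10, 2, 4, 8],   # ben
    [0, 9, 0, 1],    # cara
], dtype=float)

def recommend(user_idx, plays, top_n=1):
    target = plays[user_idx]
    # Cosine similarity between the target listener and everyone else.
    norms = np.linalg.norm(plays, axis=1) * np.linalg.norm(target)
    sims = plays @ target / np.where(norms == 0, 1, norms)
    sims[user_idx] = 0
    # Score unheard songs by similarity-weighted plays of other listeners.
    scores = sims @ plays
    scores[target > 0] = -np.inf  # only recommend songs the user hasn't played
    return [songs[i] for i in np.argsort(scores)[::-1][:top_n]]

print(recommend(0, plays))  # e.g. ['song_d'] for anna
```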

And the reality is that streaming services are increasingly important to industry and advertisers, particularly as many users use the free variants. And a developer of Pandora talks about the importance of understanding profiles for advertisers. Pandora boasts that they have 700 audience segments to date: “Whether you want to reach fitness-driven moms in Atlanta or mobile Gen X-ers…”. The Echo Nest, now owned by Spotify, had created highly detailed consumer profiling before it was bought. That idea isn’t new, but the detail is. The range of segments here is highly granular… And this brings us to the point that we need to take seriously what Nick Seaver (2015) says we need to think of: “contextualisation as a practice in its own right”.

This matters as the categories that emerge online have profound impacts on how we discover and encounter our world.

Panel Q&A

Q1) I think this is about music categories but it also has wider relevance… I had an introduction to the NLP process of topic modelling – where you label categories after the factor… The machine sorts without those labels and takes it from the data. Do you have a sense of whether the categorisation is top down, or is it emerging from the data? And if there is similar top down or bottom up categorisation in the other presentations, that would be interesting.

A1 – Robert) I think that’s an interesting question. Many segments are impacted by advertisers, and by identifying groups they want to reach… But they may also…

Michael) You talked about the Ashley Madison bots – did they have categorisation, A/B testing, etc. to find successful bots?

Tero) I don’t know, but I think it would mean looking at how machine learning and machine learning history…

Michael) The idea of content filtering from the bottom to the top was part of the thinking behind Everything…

Q2) I wanted to ask about the feedback loop between the platforms and the users, who are implicated here, in formation of categories and shaping platforms.

A2 – Taina) Not so much in the work I showed, but I have had some in-depth Skype interviews with school children, and they all had awareness of some of these (Facebook algorithm) issues, press coverage and particularly the review-of-the-year type videos… People pick up on this, and the power of the algorithm. One of the participants has emailed me since the study, noting how much she sees writing about the algorithm, and about algorithms in other spaces. Awareness of the algorithms shaping spaces is growing. It is more prominent than it was.

Q3) I wanted to ask Michael about that idea of positioning Everything2 in relation to other sites… And also the idea of the individual being transformed by platforms like Spotify…

A3 – Michael) I guess the Bourdieusian vision is that anyone who wants to position themselves on the spectrum can. With Everything you had this moment during the internet bubble, a form of utopianism… You see it come together somewhat… the gap between Wired – traditional mass media – and smaller players, but then also a coming together around shared interests and common enemies.

A3 – Robert) There were segments that did come from media, from radio and from advertisers, and that’s where the idea of genre came in… That has real effects… When I was at high school there were common groups around particular genres… But right now the move to streaming and online music means there is far more mixed listening, and people self-organise in different ways. There has been debunking of Bourdieu, but his work was done at a really different time.

Q4) I wanted to ask about interactions between humans and non-humans. Taina, did people feel positive impacts of understanding Facebook algorithms… or did you see frustrations with the Twitter algorithms? And Tero, I was wondering how those bots had been shaped by humans.

A4 – Taina) On the human and the non-human, and whether people felt more or less frustrated by understanding the algorithm: even if they felt they knew, it changes all the time; their strategies might help but then become obsolete… And practices of concealment and misinformation were tactics here. But just knowing what is taking place, and trying to figure it out, is something that I get a sense is helpful… But maybe that isn’t the right answer to it. And that notion of a human and a non-human is interesting, particularly for when we see something as human, and when we see things as non-human. In terms of some of the controversies… when is an algorithm blamed versus a human… Well, there is no necessary link/consistency there… So when do we assign humanness and non-humanness to the system, and does it make a difference?

A4 – Tero) I think that’s a really interesting question… Looking at social media now from this perspective helps us to understand that, and the idea of how we understand what is human and what is non-human agency… and what it is to be a human.

Q5) I’m afraid I couldn’t hear this question.

A5 – Robert) Spotify supports what Deleuze wrote about in terms of the individual and how aspects of our personality are highlighted at the points that are convenient. And how does that effect help us regulate? Maybe the individual isn’t the most appropriate unit any more?

A5 – Taina) For users, the sense that they are being manipulated or can be summed up by the algorithm is what can upset or disconcert them… They don’t like to feel summed up by that…

Q6) I really like the idea of the imagined… and perceptions of non-human actors… In the Ashley Madison case we assume that men thought bots were real… but maybe not everyone did. I think the question of how and when people imagine and ascribe human or non-human status matters here. In one way we aren’t concerned by the imaginary… and in another way we might need to consider different imaginaries – the imaginary of the platform creators vs. users for instance.

A6 – Tero) Right now I’m thinking about two imaginaries here… Ashley Madison’s imaginary around the bots, and the users encountering them and how they imagine those bots…

A6 – Taina) A good question… How many imaginaries do you think?! It is about understanding more who you encounter, who you engage with. Imaginaries are tied to how people conceive of their practice in their context, which varies widely, in terms of practices and what you might post…

And with that session finished – and much to think about in terms of algorithmic roles in identity – it’s off to lunch… 

PS-09: Privacy (Chair: Michael Zimmer)

Unconnected: How Privacy Concerns Impact Internet Adoption – Eszter Hargittai, Ashley Walker, University of Zurich

The literature in this area seems to target the usual suspects – age, socio-economic status… But the literature does not tend to talk about privacy. I think one of the reasons may be the idea that you can’t compare users and non-users of the internet on privacy. But we have located a data set that does address this issue.

The U.S. Federal Communications Commission ran a National Consumer Broadband Service Capability Survey in 2009 – when about 24% of Americans were still not yet online. This work is some years old but our interest is in the comparison rather than the numbers/percentages. And it questioned both internet users and non-users.

One of the questions was: “It is too easy for my personal information to be stolen online”, and participants were asked if they strongly agreed, somewhat agreed, somewhat disagreed, or disagreed. We treated that as binary – strongly agreed or not. And analysing that we found that 63.3% of internet users strongly agreed versus 81% of non internet users. Now, we did analyse demographically… It is what you would expect generally – more older people are not online (though interestingly more female respondents are online). But even accounting for demographics, the internet non-users were more likely to strongly agree with that privacy concern question.
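Note: a rough sketch of the kind of bivariate comparison described. The cell counts below are invented purely to reproduce shares similar to the reported 63.3% vs 81%, and the significance test is my addition, not necessarily what the authors ran.

```python
import numpy as np
from scipy.stats import chi2_contingency

#                   strongly agree   not strongly agree
table = np.array([[633, 367],   # internet users (illustrative n=1000)
                  [810, 190]])  # non-users      (illustrative n=1000)

chi2, p, dof, expected = chi2_contingency(table)
users_share = table[0, 0] / table[0].sum()
nonusers_share = table[1, 0] / table[1].sum()
print(f"users: {users_share:.1%}, non-users: {nonusers_share:.1%}, p={p:.3g}")
```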

So, what does that mean? Well, efforts to get people online should address people’s concerns about privacy. There is also a methodological takeaway – there is value in asking non-users internet-related questions, as they may explain their reasons for not being online.

Q&A

Q1) Was it asked whether they had previously been online?

A1) There is data on drop outs, but I don’t know if that was captured here.

Q2) Is there a differentiation in how internet use is done – frequently or not?

A2) No, I think it was use or non-use. But we have a paper coming out on those with disabilities and detailed questions on internet skills and other factors – that is a strength of the dataset.

Q3) Are there security or privacy questions in the dataset?

A3) I don’t think there are, or we would have used them. It’s a big national dataset… There is a lot on type of internet connection and quality of access in there, if that is of interest.

Note, there is more on some of the issues around access, motivations and skills in the Royal Society of Edinburgh Spreading the Benefits of Digital Participation in Scotland Inquiry report (Fourman et al 2014). I was a member of this inquiry so if anyone at AoIR2016 is interested in finding out more, let me know. 

Enhancing online privacy at the user level: the role of internet skills and policy implications – Moritz Büchi, Natascha Just, Michael Latzer, U of Zurich, Switzerland

Natascha: This presentation is connected with a paper we just published and where you can read more if you are interested.

So, why do we care about privacy protection? Well there is increased interest in/availability of personal data. We see big data as a new asset class, we see new methods of value extraction, we see growth potential of data-driven management, and we see platformisation of internet-based markets. Users have to continually balance the benefits with the risks of disclosure. And we see issues of online privacy and digital inequality – those with fewer digital skills are more vulnerable to privacy risks.

We see governance becoming increasingly important and there is an issue of understanding appropriate measures. Market solutions by industry self-regulation is problematic because of a lack of incentives as they benefit from data. At the same time states are not well placed to regulate because of their knowledge and the dynamic nature of the tech sector. There is also a route through users’ self-help. Users self-help can be an effective method to protect privacy – whether opting out, or using privacy enhancing technology. But we are increasingly concerned but we still share our data and engage in behaviour that could threaten our privacy online. And understanding that is crucial to understand what can trigger users towards self-help behaviour. To do that we need evidence, and we have been collecting that through a world internet study.

Moritz: We can imperically address issues of attitudes, concerns and skills. The literature finds these all as important, but usually at most two factors covered in the literature. Our research design and contributions look at general population data, nationally representative so that they can feed into policy. The data was collected in the World Internet Project, though many questions only asked in Switzerland. Participants were approached on landline and mobile phones. And our participants had about 88% internet users – that maps to the approx. population using the internet in Switzerland.

We found a positive effect of privacy attitudes on behaviours – but a small effect. There was a strong effect of privacy breaches and engaging in privacy protection behaviours. And general internet skills also had an effect on privacy protection. Privacy breaches – learning the hard way – do predict privacy self-protection. Caring is not enough – that pro-privacy attitudes do not really predict privacy protection behaviours. But skills are central – and that can mean that digital inequalities may be exacerbated because users with low general internet skills do not tend to engage in privacy protection behaviour.

Q&A

Q1) What do you mean by internet skills?

A1 – Moritz): In this case there were questions that participants were asked, following a model by Alexander von Durnstern and colleagues developed, that asks for agreement or disagreement

Navigating between privacy settings and visibility rules: online self-disclosure in the social web – Manuela Farinosi1,Sakari Taipale2, 1: University of Udine; 2: University of Jyväskylä

Our work is focused on self-disclosure online, and particularly whether young people are concerned about privacy in relation to other internet users, privacy to Facebook, or privacy to others.

Facebook offers complex privacy settings allowing users to adopt a range of strategies in managing their information and sharing online. Waters and Ackerman (2011) talk about the practice of managing privacy settings and factors that play a role including culture, motivation, risk-taking ratio, etc. And other factors are at play here. Fuchs (2012) talks about Facebook as commercial organisation and concerns around that. But only some users are aware of the platform’s access to their data, may believe their content is (relatively) private. And for many users privacy to other people is more crucial than privacy to Facebook.

And there are differences in privacy management… Women are less likely to share their phone number, sexual orientation or book preferences. Men are more likely to share corporate information and political views. Several scholars have found that women are more cautious about sharing their information online. Nosko et al (2010) found no significant difference in information disclosure except for political informaltion (which men still do more of).

Sakari: Manuela conducted an online survey in 2012 in Italy with single and multiple choice questions. It was issued to university students – 1125 responses were collected. We focused on 18-38 year old respondents, and only those using facebook. We have slightly more female than male participants, mainly 18-25 years old. Mostly single (but not all). And most use facebook everyday.

So, a quick reminder of Facebook’s privacy settings… (a screenshot reminder, you’ve seen these if you’ve edited yours).

To the results… We found that the data that are most often kept private and not shared are mobile phone number, postal address or residence, and usernames of instant messaging services. The only data they do share is email address. But disclosure is high of other types of data – birth date for instance. And they were not using friends list to manage data. Our research also confirmed that women are more cautious about sharing their data, and men are more likely to share political views. The only not gender related issues were disclosure of email and date of birth.

Concerns were mainly about other users, rather than Facebook, but it was not substantially different in Italy. We found very consistent gender effects across our study. We also checked factors related to concerns but age, marital status, education, and perceived level of expertise as Facebook user did not have a significant impact. The more time you spend on Facebook, the less likely you are to care about privacy issues. There was also a connection between respondents’ privacy concerns were related to disclosures by others on their wall.

So, conclusions, women are more aware of online privacy protection than men, and protection of private sphere. They take more active self protection there. And we speculate on the reasons… There are practices around sense of security/insecurity, risk perception between men and women, and the more sociological understanding of women as maintainers of social labour – used to taking more care of their material… Future research needed though.

Q&A

Q1) When you asked users about privacy settings on Facebook how did you ask that?

A1) They could go and check, or they could remember.

WHOSE PRIVACY? LOBBYING FOR THE FREE FLOW OF EUROPEAN PERSONAL DATA – Jockum Philip Hildén, University of Helsinki, Finland

My focus is related to political science… And my topic is lobbying for the free flow of European Personal Data – and how the General Data Protection Regulation come into being and which lobbyists influenced the legislators. This is a new piece of regulation coming in next year. It was the subject of a great deal of lobbying – it became visible when the regulation was in parliament, but the lobbying was much earlier than that.

So, a quick description of EU law making. There is the European Commission which proposes legislation and that goes to both the Council of Europe and also to the Parliament. Both draw up regulations based on the proposal and then that becomes final regulation. In this particular case there was public consultation before the final regulation so I looked at a wide range of publicly available position pages. Looking across here I could see 10 types of stakeholders offering replies to the position papers – far more in 2011 than to the first version in 2009. Companies in the US participated to a very high degree – almost as much as those in the UK and France. That’s interesting… And that’s partly to do with the extended scope of this new regulation that covers EU but also service providers in the US and other locations. This idea is not exclusive to this regulation, known as “the Brussels effect”.

In terms of sector I have categorised the stakeholders so I have divided IP and Node communications for instance, to understand their interests. But I am interested in what they are saying, so I draw on Kluver (2013) and the “preference attainment model” to compare policy preferences of interest groups with the Commissions preliminary draft proposal, the Commission’s final proposal, and the final legislative act adopted by the council. So, what interests did the council take into account? Well almost every article changed – which makes those changes hard to pin down. But…

There is an EU Power Struggle. The Commission draft contained 26 different cases where it was empowered to adopt delegated acts. All but one of these articles were removed from the Council’s draft. And there were 48 exceptions for member states, most of them are “in the public interest”… But that could mean anything! And thus the role of nation states comes into question. The idea of European law is to have consistent policy – that amount of variance undermines that.

We also see a degree of User disempowerment. Here we see responses from Digital Europe – a group of organisations doing any sort of surveillance; But we also see the American Chambers of Commerce submitting responses. In these responses both are lobbying for “implicit consent” – the original draft requested explicit consent. And the Commission sort of brought into this, using a concept of unambiguous consent… Which is itself very ambiguous. Looking at the Council vs Free Data Advocates and then compared to Council vs Privacy Advocates. The Free Data Advocates are pro free movement of data, and privacy – as that’s useful to them too, but they are not keen on greater Commission powers. Privacy Advocates are pro privacy and more supportive of Commission powers.

In Search of Safe Harbors – Privacy and Surveillance of Refugees in Europe – Paula Kift, New York University, United States of America

Over 2015 a million refugees and migrants arrived at the borders of Europe. One of the ways in which the EU attempted to manage this influx was to gather information on these peoples. In particular satellite surveillance and data collection on individuals on arrival.   
The EU does acknowledge that biometric data does raise privacy issues, but that satellites and drones is not personally identifiable or an issue here. I will argue that the right to privacy does not require presence of Personally Identifiable Information.
As background there are two pieces of legislation, Eurosur – regulations to gather and share satelite and drone data across Member States. Although the EU justifies this on the basis of helping refugees in distress, it isn’t written into the regulation. Refugee and human rights organisations say that this surveillance is likely to enable turning back of migrants before they enter EU waters.
If they do reach the EU, according to Eurodac (2000) refugees must give fingerprints (if over 14 years old) and can only apply for asylum status in one country. But in 2013 this regulation has been updated so that fingerprinting can be used in law enforcement – that goes again EU human rights act and Data Protection law. It is also demeaning and suggests that migrants are more likely to be criminal, something not backed up by evidence. They have also proposed photography and fingerprinting be extended to everyone over 6 years old. There are legitimate reasons for this… Refugees come into Southern Europe where opportunities are not as good, so some have burned off fingerprints to avoid registration there, so some of these are attempts to register migrants, and to avoid losing children once in the EU.
The EU does not dispute that biometric data is private data. But with Eurodac and Eurosur the right to data protection does not apply – they monitor boats not individuals. But I argue that the Right to Private Life is jeapodised here, through prejudice, reachability and classifiability… The bigger issue may actually be the lack of personal data being collected… The EU should approach boats and identify those with asylum claim, and manage others differently, but that is not what is done.
So, how is big data relevant? Well big data can turn non personally identifiable information into PII through aggregation and combination. And classifying individuals also has implications for the design of Data Protection Laws. Data protection is a procedural right, but privacy is a substantive right, less dependent on personally identifiable information. Ultimately the right to privacy protects the person, rather than the integrity of the data.
Q&A
Q1) In your research have you encountered any examples of when policy makers have engaged with research here?
A1 – Paula) I have not conducted any on the ground interviews or ethnographic work with policy makers but I would suggest that the increasing focus on national security is driving this activity, whereas data protection is shrinking in priority.
A1 – Jockum) It’s fairly clear that the Council of Europe engaged with digital rights groups, and that the Commission did too. But then for every one of those groups, there are 10 lobby groups. So you have Privacy International and European Digital Rights who have some traction at European level, but little traction at national level. My understanding is that researchers weren’t significantly consulted, but there was a position paper submitted by a research group at Oxford, submitted by lawers, but their interest was more aligned with national rather than digital rights issues.
Q2) You talked about the ? being embedded in the new legislation… You talk about information and big data… But is there any hope? We’ve negotiated for 4 years, won’t be in force until 2018…
A2 – Paula) I totally agree… You spend years trying to come up with a framework, but it all rests on PII…. And so how do we create Data Protection Act that respects personal privacy without being dependent on PII? Maybe the question is not about privacy but about profiles and discrimination.
A2 – Jockum) I looked at all the different sectors to look at surveillance logic, to understand why surveillance is related to regulation. The problem with Data Protection regulation is inherently problematic as it has opposing goals – to protect individuals and to enable the sharing of data… So, in that sense, surveillance logic is informing this here.
Q3) Could you outline again the threats here beyond PII?
A3 – Paula) Refugees who are aware of these issues don’t take their phones – but that reduces chance of identification but also stops potential help calls and rescues. But the risk is also about profiling… High ranking job offers are more likely to be made to women than men… Google thinks I am between 60 and 80 years old and Jewish, I’m neither, they detect who I am… And that’s where the risk is here… profiling… e.g. transactions being blocked through proposals.
Q4) Interesting mixture of papers here… Many people are concerned about social side of privacy… But know little of institutional privacy concerns. Some become more cynical… But how can we improve literacy… How can we influence people here about Data Protection laws, and privacy measures…
A4 – Esther) It varies by context. In the US the concern is with government surveillance, the EU it’s more about corporate surveillance… You may need to target differently. Myself and a colleague wrote a paper on apathy of privacy… There are issues of trust, but also work on skills. There are bigger conversations, not just with users, to be had. There are conversations to have generally with the population… Where do you infuse that, I don’t know… How do you reach adults, I don’t know?
A4 – Natascha) Not enough to strengthen awareness and rights… Skills are important here too… That you really need to ensure that skills are developed to adapt to policies and changes. Skills are key.
Q5) You talked about exclusion and registration,,, And I was wondering how exclusion to and exclusion of registration (e.g. the dead are not registered).
A5 – Paula) They collect how many are registered… But that can lead to threat inflation and very flawed data. In terms of data that is excluded there is a capacity issue… That may be the issue with deaths. The EU isn’t responsible for saving lives, but doesn’t want to be seen as responsible for those deaths either.
Q6) I wanted to come back to what you see as the problematic implications of the boat surveillance.
A6 – Paula) For many data collection is fine until something happens to you… But if you know it takes place it can have an impact on your behaviours… So there is work to be done to understand if refugees are aware of that surveillance. But the other issue here is about the use of drone surveillance to turn people back then that has clear impact on private lives, particularly as EU states have bilateral agreements with nations that have not all ratified refugee law – meaning turned back boats may result in significantly different rights and opportunities.
RT-07: IR (Chair: Victoria Nash)

The Politics of Internet Research: Reflecting on the challenges and responsibilities of policy engagement

Victoria Nash (University of Oxford, United Kingdom), Wolfgang Schulz (Hans-Bredow-Institut für Medienforschung, Germany), Juan-Carlos De Martin (Politecnico di Torino, Italy), Ivan Klimov, New Economic School, Russia (not attending), Bianca C. Reisdorf (representing Bill Dutton, Quello Center, Michigan Statue University), Kate Coyer, Central European University, Hungary (not attending)

Victoria: I am Vicky Nash and I have convened a round table of members of the international network of internet research centres.

Juan-Carlos: I am director of the Nexa Center for Internet and Society in Italy and we are mainly computer scientists like myself, and lawers. We are ten years old.

Wolfgang: I am associated with two centres, in Humboldt primarily and our interest is in governance and surveillance primarily. We are celebrating our five birthday this year. I also work with the Hans-Bredow-Institut a traditional media institute, multidisciplinary, and we increasingly focus on the internet and internet studies as part of our work.

Bianca: I am representing Bill Dutton. I am Assistant Director of the Quello Center at Michigan State University centre. We were more focused on traditional media but have moved towards internet policy in the last few years as Bill moved to join us. There are three of us right now, but we are currently recruiting for a policy post-doc.

Victoria: Thanks for that, I should talk about the department I am representing… We are in a very traditional institution but our focus has explicitly always been involvement in policy and real world impact.

Victoria: So, over the last five or so years, it does feel like there are particular challenges arising now, especially working with politicians. And I was wondering if other types of researchers are facing those same challenges – is it about politics, or is it specific to internet studies. So, can I kick off and ask you to give me an example of a policy your centre has engaged in, how you were involved, and the experience of that.

Juan-Carlos: There are several examples. One with the regional government in our region of Italy. We were aware of data and participatory information issues in Europe. We reached out and asked if they were aware. We wanted to make them aware of opportunities to open up data, and build on OECD work, but we were also doing some research ourselves. Everybody agreed in the technical infrastructure and on political level… We assisted them in creating the first open data portal in Italy, and one of the first in Europe. And that was great, it was satisfying at the time. Nothing was controversial, we were following a path in Europe… But with a change of regional government that portal has somewhat been neglected so that is frustrating…

Victoria: What motivated that approach you made?

JC: We had a chance to do something new and exciting. We had the know-how and the way it could be, at least in Italy, and that seemed like a great opportunity.

Wolfgang: My centres, I’m kind of an outsider in political governance as I’m concerned with media. But in internet governance it feels like this is our space and we are invested in how it is governed – more so than in other areas. The example I have is from more traditional media work… And that’s from the Hans-Bredow-Institute. We were asked to investigate for a report on usage patterns changes, technology changes, and puts strain on governance structures in Germany… And where there is a need for solutions to make federal and state law in Germany more convergent and able to cope with those changes. But you have to be careful when providing options, because of course you can make some options more appealing than others… So you have to be clear about whether you will be and present it as neutral, or whether you prefer an option and present it differently. And that’s interesting and challenging as an academic and with the role of an academic and institution.

Victoria: So did you consciously present options you did not support?

Wolfgang: Yes, we did. And there were two reasons for this… They were convinced we would come up with a suggestion and basis to start working with… And they accepted that we would not be specifically taking a side – for the federal or local government. And also they were confident we wouldn’t attempt to mess up the system… We didn’t present the ideal but we understood other dependencies and factors and trusted us to only put in suggestions to enhance and practically work, not replace the whole thing…

Victoria: And did they use your options?

Wolfgang: They ignored some suggestions, but where they acted they did take our options.

Bianca: I’ll talk about a semi-successful project. We were looking at detailed postcode level data on internet access and quality and reasons for that. We submitted to the National Science Foundation, it was rejected, then two weeks later we were invited to an event on just that topic by the NPIA. So we are collectively drafting suggestions from the NPIA and from a wide range of many research centres, and we are drafting that now. It was nice to be invited by policy makers… and interesting to see that idea picked up through that process in some way…

Victoria: That’s maybe an unintended consequences aspect there… And that suggestion to work with others was right for you?

Bianca: We were already keen to work with other research centres but actually we also now have policy makers and other stakeholders around the table and that’s really useful.

Victoria: those were all very positive… Maybe you could reflect on more problematic examples…

JC: Ministers often want to show that they are consulting on policy but often that is a gesture, a political move to listen but then policy made an entirely different way… After a while you get used to that. And then you have to calculate whether you participate or not – there is a time aspect there.

Victoria: And for conflict of interest reasons you pay those costs of participating…

JC: Absolutely, the costs are on you.

Wolfgang: We have had contact from ministeries in Germany but then discovered they are interested in the process as a public relations tool rather than as a genuine interest in the outcome. So now we assess that interest and engage – or don’t – accordingly. We try to say at the beginning “no, please speak to someone else” when needed. At Humboldt is reluctant to engage in policy making, and that’s a historical thing, but people expect us to get involved. We are one of the few places that can deliver monitoring on the internet, and there is an expectation to do that… And when ministeries design new programmes, we are often asked to be engaged and we have learned to be cautious about when we engage. Experience helps but you see different ways to approach academia – can be PR, sometimes you want support for your position or support politically, or you can actually be engaged in research to learn and have expertise and information. If you can see what approach it is, you can handle it appropriately.

Victoria: I think as a general piece of advice – to always question “why am I being approached” in the framing of “what are their motivations?”, that is very useful.

Wolfgang: I think starting in terms of research questions and programmes that you are concerned with gives you a counterpoint in your own thinking to dealing with requests. Then when good opportunities come up you can take it and make use of it… But academic value can be limited of some approaches so you need a good reason to engage in those projects and they have to align with your own priorities.

Bianca: My bad example is related to that. The Net Neutrality debate is a big part of our work… There are a lot of partisan opinions on that, and not a lot of neutral research there. We wanted to do a big project there but when we try to get funding for that we have been steered to stay away. We’ve been steered that talking about policy with policy makers is very negative, it is taken poorly. This debate has been bouncing around for 10 years, we want to see where Net Neutrality is imposed if we see changes in investment… But we need funding to do that… And funders don’t want to do it and are usually very cosy with policy makers…

Victoria: This is absolutely an issue, these concerns are in the minds of policy makers as well and that’s important.

Wolfgang: When we talk about research in our field and policy makers, it’s not just about when policy makers approach you to do something… You have a term like Net Neutrality at the centre that requires you to be either neutral or not neutral, that really shapes how you handle that as an academic… You can become, without wanting it, someone promoting one side sometimes. On a minor protection issue we did some work on co-regulation with Australia that seemed to solve a problem… But then after this debate in Germany and started drafting the inter-state treaty on media regulation, the policy makers were interested… And then we felt that we should support it… and I entered the stage but it’s not my question anymore… So you have opinion about how you want something done…

JC: As a coordinator of a European project there was a call that included a topic of “Net Neutrality” – we made a proposal but what happened afterwards clearly proved that that whole area was topic. It was in the call… But we should have framed it differently. Again at European level you see the Commission funds research, you see the outcomes, and then they put out a call that entirely contradicts the work that they funded for political reasons. There is such a drive for evidence-based policy making that it is important that they frame that way… It is evidence-based when it fits their agenda, not when it doesn’t.

Victoria: I did some work with the Department of Media, Culture and Sport last year, again on minor protection, and we were told at the offset to assume porn caused harm to minors. And the frames of reference was shaped to be technical – about access etc. They did bring in a range of academic expertise but the terms of reference really constrained the contribution that was possible. So, there are real bear traps out there!

Wolfgang: A few years back the European Commission asked researchers to look at broadcasters and interruptions to broadcasts and the role of advertising, even though we need money we do not do that, it isn’t answering interesting research questions for us.

Victoria: I raised a question earlier about the specific stakes that academia has in the internet, it isn’t just what we study. Do you want to say more about that.

Wolfgang: Yes, at the pre-conference we had an STS stream… People said “of course we engage with policy” and I was wondering why that is the main position… But the internet comes from academia and there is a long standing tradition of engagement in policy making. Academics do engage with media policy, but they would’t class it as “our domain”, but they were not there are part of the beginning – academia was part of that beginning of the internet.

Q&A

Q1) I wonder if you are mistaking the “of-ness” with the fact that the internet is still being formed, still in the making. Broadcast is established, the internet is in constant construction.

A1 – Wolfgang) I see that

Q1) I don’t know about Europe but in the US since the 1970s there have been deliberate efforts to reduce the power of decision makers and policy makers to work with researchers…

A1 – Bianca) The Federal Communications Commission is mainly made of economists…

Q1) Requirements and roles constrain activities. The assumption of evidence-based decisions is no longer there.

Q2) I think that there is also the issue of shifting governance. Internet governance is changing and so many academics are researching the governance of the internet, we reflect greatly on that. The internet and also the governance structure are still in the making.

Victoria: Do you feel like if you were sick of the process tomorrow, you’d still want to engage with policy making?

A2 – Phoebe) We are a publicly funded university and we are focused on digital inequalities… We feel real responsibility to get involved, to offer advice and opinions based on our advice. On other topics we’d feel less responsible, depending on the impact it would have. It is a public interest thing.

A2 – Wolfgang) When we look at our mission at the Hans-Bredow-Institute we have a vague and normative mission – we think a functioning public sphere is important for democracy… Our tradition is research into public spheres… We have a responsibility there. But we also have a responsibility that the evaluation of academic research becomes more and more important but there is no mechanism to ensure researchers answer the problems that society has… We have a completely divided set of research councils and their yardsticks are academic excellence. State broadcasters do research but with no peer review at all… There are some calls from the Ministry of Science that are problem-orientated but on the whole there isn’t that focus on social issues and relevance in the reward process, in the understanding of prestige.

Victoria: In the UK we have a bizarre dichotomy where research is measured against two measures: impact – where policy impact has real value – and that applies in all fields; but there is also regulation that you cannot use project funds to “lobby” government – which means you potentially cannot communicate research to politicians who disagree. This happened because a research organisation (not a university) opposed government policy with research funded by them… Implications for universities is currently uncleared.

JC: Italy is implementing a similar system to the UK. Often there is no actual mandate on a topic, so individuals come up with ideas without numbers and plans… We think there is a gap – but it is government and ministries work. We are funded to work in the national interest… But we need resources to help there. We are filling gaps in a way that is not sustainable in the long term really – you are evaluated on other criteria.

Q3) I wanted to ask about policy research… I was wondering if there is policy research we do not want to engage in. In Europe, and elsewhere, there is increasing need to attract research… What are the guidelines or principles around what we do or do not go for funding wise.

A3 – Bianca) We are small so we go for what interests us… But we have an advisory board that guides us.

A3 – Wolfgang) I’m not sure that there are overarching guidelines – there may be for other types of special centres – but it’s an interesting thing to have a more formalised exchange like we have right now…

A3 – JC) No, no blockers for us.

A3 – Victoria) Academic freedom is vigorously held up at Oxford but that can mean we have radically different research agendas in the same centre.

Q4) With that lack of guidance, isn’t there a need for academics to show that they have trust, especially in the public sphere, especially when getting funding from, say, Google or Microsoft. And how can you embed that trust?

A4 – Wolfgang) I think peer review as a system functions to support that trust. But we have to think about other institutional settings, and that there is enough oversight… And many associations, like Liebneiz, requires an institutional review board, to look over the research agenda and ensure some outside scrutiny. I wouldn’t say every organisation or research centre needs that – it can be helpful but costly in terms of time in particular. And you cannot trust the general public to do that, you need it to be peers. An interesting question though, especially as Humboldt has national funding from Google… In this network academics play a role, and organisations play a role, and you have to understand the networks and relationships of partners you work with, and their interests.

A4 – Bianca) That’s a question that we’ve faced recently… That concern that corporate funding may sway result and the best way to face that is to publish methodology, questionnaires, process… to ensure the work is understood in that context that enables trust in the work.
A4 – JC) We spent years trying to deal with the issue of independence and it is very important as academia has responsibility to provide research that is independent and unbiased by funding etc. And not just about the work itself, but also perceptions of the work… It is quite a local/contextual issue. So, getting money from Google is perceived differently in different countries, and at different times…
Victoria: This is something we have to have more conversations about this. In medicine there is far more conversation about codes of conduct around funding. I am also concerned that PhD funding is now requiring something like a third of PhDs to be co-funded by industry, without any understanding from UK Government about what that means and what that means for peer review… That’s something we need to think about far more stringently.
Q5) For companies there are requirements to review outputs before publications to check for proprietary information and ensure it is not released. That makes industry the final arbiter here. In Canada our funding is also increasingly coming from industry and there that means that proprietary data gives them final say…
A5 – Bianca) Sometimes it has to be about negotiating contracts and being clear what is and is not acceptable.
Victoria) That’s my concern with new PhD funding models, and also with use of industry data. It will be non-negotiable that the research is not compromised but how you make that process clear is important.
Q6) What are your models here – are you academic or outside academia?
A6 – JC) Academic and policy are part of the work we are funded to do.
A6 – Bianca) We are 99% Endowment funded, hence having a lot of freedom but also advisory board guidance.
A6 – Wolfgang) Our success is assessed by academic publication. The Humboldt Institute is funded largely by private companies but a range of them, but also from grants. The Hans-Bredow-Institute is mainly directly funded by the Hamburg Ministry of Science but we’d like to be funded from other funders across Germany.
A6 – Victoria) Our income is research income, teaching income from masters degrees… We are a department of the university. Our projects are usually policy related, but not always government related.
Q7) I was wondering if others in the room have been funded for policy work – my experience has been that policy makers had expectations and an idea of how much control they wanted… By contrast money from Google comes with a “research something on the internet” type freedom. This is not what I would have expected so I just wondered how others experiences compared.
Comment) I was asked to do work across Europe with public sector broadcasters… I don’t know how well my report was seen by policy makers but it was well received by the public sector broadcaster organisations.
Comment) I’ve had public sector funding, foundation funding… But I’ve never had corporate money… My cynical take is that corporations maybe are doing this as PR, hence not minding what you work on!
Comment) I receive money from funding agencies, I did a joint project that I proposed to a think tank… Which was orientated to government… But a real push for impact… Numbers needed to be in the title. I had to be an objective researcher but present it the right way… And that worked with impact… And then the government offered me a contract to continue the research – working for them not against them. The funding was coming from a position close to my own idea… I felt it was a bit instrumentalised in this way…
A7 – Wolfgang) I think that it is hard to generalise… Companies as funders do sometimes make demands and expect control of publishing of results… And whether it is published or not. We don’t do that – our work is always public domain. It’s case by case… But there is one aspect we haven’t talked about and that is the relationship between the individual researcher and their political engagement (or not) and how that impacts upon the neutrality of the organisation. As a lawyer I’m very aware of that… For instance if giving expert evidence in court, the importance of being an individual not the organisation. Especially if partners/funders before or in the future are on the opposite side. I was an expert for Germany in a court case, with private broadcasters on the other side, and you have to be careful there…
A7 – JC) There is so little money for research in Italy… Regarding corporations… We got some money from Google to write an open source library, it’s out there, it’s public… There was no conflict there. But money from companies for policy work is really difficult. But lots of case by case issues in-between.
Q8) But companies often fund social science work that isn’t about policy but has impact on policy.
A8 – JC) We don’t do social science research so we don’t face that issue.
A8 – Victoria) Finding ways to make that work that guarantees independence is often the best way forward – you cannot and often do not want to say no… But you work with codes of conduct, with advisory board, with processes to ensure appropriate freedoms.
JC: A question to the audience… A controversial topic arises, one side owns the debate and a private company approaches to support your voice… Do you take their funding?
Comment) I was asked to do that and I kind of stalled so that I didn’t have to refuse or take part, but in that case I didn’t feel
Comment) If having your voice in the public triggers the conversation, you do make it visible and participate, to progress the issue…
Comment) Maybe this comes down to personal versus institutional points of view. And I would need to talk to colleagues to help me make that decision, to decide if this would be important or not… Then I would say yes… Better solution is to say “no, I’m talking in a private capacity”.
JC) I think that the point of separating individual and centres here is important. Generally centres like ours do not take a position… And there is an added element that if a corporation wants to be involved, a track record of past behaviour makes it less troublesome. Saying something for 10 years gives you credibility in a way that suddenly engaging does not.
Wolfgang) In Germany it is general practice that if your arguments are not being heard, then you engage expertise – it is general practice in German legal academic practice. It is ok I think.
Comment) In the Bundestag they bring in experts… But of course the choice of expert reflects values and opinions made in articles. So you have a range of academics supporting politics… If I am invited to talk to parliament, I say what I always say “this is not a problem”.
Victoria: And I think that nicely reminds us why this is the politics of internet research! Thank you.
Plenary Panel: Who Rules the Internet? Kate Crawford (Microsoft Research NYC), Fieke Jansen (Tactical Tech), Carolin Gerlitz (University of Siegen) – Chair: Cornelius Puschmann
Jennifer Stromer-Galley, President of the Association of Internet Researchers: For those of you who are new to the AoIR, this is our 17th conference and we are an international organisation that looks at issues around the internet – now including those things that have come out of the internet including mobile apps. And our panel today we will be focusing on governance issues. Before that I would like to acknowledge this marvellous city of Berlin, and to thank all of my colleagues in Germany who have taken such care, and to Humboldt University for hosting us in this beautiful venue. And now, I’d like to handover to Herr Matthias Graf von Kielmansegg, representing Professor Dr Elizabeth Wacker, Federal Minister of Labour and Social Affairs.
Matthias Graf von Kielmansegg: is here representing Professor Wacker, who takes a great interest in internet and society, including the issues that you are looking at here this week. If you are not familiar with our digitisation policy, the German government published a digital agenda for the first time two years ago, covering all areas of government operation. In terms of activities it concentrates on the term 2013-2017, and it needs to be extended, and it reaches strategically far into the next decade. Additionally we have a regular summit bringing together the private sector, unions, government and the academic world looking at key issues.
You all know that digital is a fundamental gamechanger, in the way goods and services are used, the ways we communicate and collaborate, and digital loosens our ties to time and place… And we aren’t at the end but at the middle of this process. Wikipedia was founded 16 years ago, the iPhone launched 9 years ago, and now we talk about Blockchain… So we do not know where we will be in 10 or 20 years time. And good education and research are key to that. And we need to engage proactively. In Germany we are incorporating Internet of Things into our industries. In Germany we used to have a technology-driven view of these things, but now we look at economic and cultural contexts or ecosystems to understand digital systems.
Research is one driver, the other is that science, education, and research are users in their own right. Let me focus first on education… Here we must answer some major issues – what will drive change here, technology or pedagogy? Who will be the change agents? And what of the role of teachers and schools? They must take the lead in change and secure the dominance of pedagogy, using digital tools to support our key education goals – and not vice versa. And that means digital education must offer more opportunities, flexibilities, and better preparation for tomorrow’s world of work. With this in mind we plan to launch a digital education campaign to help young people find their place in an ever changing digital world, and to be ready to adapt to the changes that arise. How education can support our economic model and higher education. And we will need to address issues of technical infrastructure, governance – and for us how this plays out with our 60 federal states. Closer to your world is the world of science. Digital tools create huge amounts of new data and big data. The challenges organisations face is not just infrastructure but how to access and use this data. We call our approach Securing the Life Cycle of Data, concerned with aceess, use, reuse, interoperability. And how will be decide what we save, and what we delete? And who will decide how third parties use this data. And big data goes alongside other aspects such as high powered computing. We plan to launch an initiative of action in this area next year. To oversee this we have a Scientific Oversight Body with stakeholders. We are also keen to embrace Open Data and the resources to support that. We have added new conditions to our own funding conditions – any publication based on research funded by us, must be published open access.
We need to know more about internet and society need to be known, and there is research to be done. So, the federal government has decided to establish a German Internet Institute. It will address a number of areas of importance: access and use of the digital world; work and value creation and our democracy. We want an interdisciplinary team of social scientists, economists, and information scientists. The competitive selection process is just underway, and we expect the winner to be announced next spring. There is readiness to spend up to €15M over the first five years. And this highlights the importance of the digital world in Germany.
Let me just make one comment. The overall title of this conference is Internet Rules! It is still up to us to be the fool or the wise… We need to understand what might happen is politics, economics and society do not find the answers to the challenges we face. And so hopefully we will find that it’s not the internet that rules, but that democracy rules!
Kate Crawford
When Cornelius asked me to look at the idea of “Who rules the internet?” I looked up at my bookshelf, and found lots of books written by people in this community, many of you in this room, looking at just this question. And we have moved from the ’90s utopianism to the world of infrastructure, socio-technical aspects, the Internet of Things layer – and zombie web cams being coopted by hackers. So many of you have enhanced my understanding of this issue.
Right now we see machine learning and AI being rapidly build into our world without implications being fully understood… I am talking narrowly about AI here… Sometimes they have lovely feminine names: Siri, Alexa, etc… But these systems are embedded in our phones, we have AI analysing images on Facebook. It will never be separate from humans, but it is distinct and significant, and we see AI beyond the internet and into systems – on who gets released from jail, on hospital stays, etc. I am sure all of us were surprised by the fact that Facebook, last month, censored a Pulitzer Prize winning image of a girl being napalmed in Vietnam… We don’t know the processes that triggered this, though an image of a nude girl likely triggers these processes… Now that had attention, the Government of Norway accused Facebook or erasing our shared history. The image was restored but this is the tip of the iceberg – and most images and actions are not so apparent to us…
This lack of visibility is important but it isn’t new… There are many organisational and procedural aspects that are opaque… I think we are having a moment around AI where we don’t know what is taking place… So what do we do?
We could make them transparent… But this doesn’t seem likely to work. A colleague and I have written about the history of transparency and that process and availability code does not necessarily tell you exactly what is happening and how this is used. Y Combinator has installed a system, called HAL 9000 brilliantly, and have boasted that they don’t know how it filters applications, only the system could do that. That’s fine until that system causes issues, denies you rights, gets in your way…
So we need to understand these algorithms from the outside… We have to poke them… And I think of Christian Salmand(?)’s work on algorithmic auditing. Christian couldn’t be here this evening and my thoughts are with him. But he is also part of a group who are trying to pursue legal rights to enable this type of research.
And there are people that say that AI can fix this system… This is something that the finance sector talks about. They have an environment of predatory machine learning hunting each other – Terry Cary has written about this. It’s tempting to create a “police AI” to watch these… I’ve been going back to the 1970s books on AI, and the work of Joseph Weizenbaum who created ELIZA. And he suggested that if we continue to ascribe AI to human acting systems it might be a slow acting poison. It is a reminder to not be seduced by these new forms of AI.
Carolin Gerlitz, University of Siegen
I think after the last few days the answer to the question of “who rules the internet?”, I think the answer is “platforms”!
Their rules of who users are, what they can do, can seem very rigid. Before Facebook introduced the emotions, the Like button was used in a range of ways. With the introduction of emotions they have rigidly defined responses, creating discreet data points to be advertiser ready and available to be recombined.
There are also rules around programmability, that dictate what data can be extracted, how, by whom, in what ways… And platforms also like to keep the interpretation of data in control, and adjust the rules of APIs. Some of you have been working to extract data from platforms where things are changing rapidly – Twitter API changes, Facebook API and Research changes, Instagram API changes, all increasingly restricting access, all dictating who can participate. And limiting the opportunity to hold platforms to account, as my colleague Anne Helmond argues.
Increasingly platforms are accessed indirectly through intermediaries which create their own rules, a cascade of rules for users to engage with. Platforms don’t just extend to platforms but also to apps… As many of you have been writing about in regard to platforms and apps… And Christian, if he were here today, would talk about the increasing role of platforms in this way…
And platforms reach out not only to users but also non-users. They these spaces are also contextual – with place, temporality and the role of commercial content all important here.
These rules can be characterised in different ways… There is a dichotomy of openness and closedness. Much of what takes place is hidden and dictated by cascading sets of rule sets. And then there is the issue of evaluation – what counts, for whom, and in what way? Tailorism refers to the mass production of small tasks – and platforms work in these fine grained algorithmed way. But platforms don’t earn money from users’ repetitive actions… Or from use of platform data by third parties. They “put life to work” (Lazlo) by using data points raising questions of who counts and what counts.
Fieke Jansen, Tactical Tech
I work at an NGO, on the ground in real world scenarios. And we are concerned with the Big Five: Apple, Amazon, Google, Microsoft and Facebook. How did we get like this? People we work with are uncomfortable with this. When we ask activists and ask them to draw the internet, they mostly draw a cloud. We asked at a session “what happens if the government bans Facebook” and they cannot imagine it – and if Facebook is beyond government then where are we at here? And I work with an open source company who use Google Apps for Business – and that seems like an odd situation to me…
But I’ll leave the Big Five for now and turn to BitNik… They used the dark net shopper and brought random stuff for $50… And then placed them in a gallery… They did
Iced T watch… After Wikileaks an activist in Berlin found all the NSA services spying on this and worked out who was working for the secret service… But that triggers a real debate… There was real discussion of being anti-patriotic, and puts people in data… But the data he used, from LinkedIn, is sold every day…. He just used it in a way that raised debate. We allow that selling use… But this coder’s work was not… Isn’t that debate needed.
So, back to the Big Five. In 2014 Google (now Alphabet) was the second biggest company in the world – with equivalent GDP bigger than Austria. We choose to use many of their services every day… But many of their services are less in our face. In the sensor world we have fewer choices about data… And with the big companies it is political too… In Brussels you have to register lobbists – there are 9 for Google, 7 used to work for the European Parliament… There is a revolving door here.
There is also an issue of skill… Google has wealth and power and knowledge that are very large to counter. Facebook have, around 400m active users a month, 300m likes a day, they are worth $190m… And here we miss the political influence. They have an enormous drive to conquer the global south… They want to roll out Facebook Sero as “the internet”…
So, who rules the internet? It’s the 1% of the 1%… It is the Big Five, but also the venture capitalists who back them… Sequoia and Kleiner Perkins Caufield & Byers, and you have Peter Thiel… It is very few people behind many of the biggest companies including some of the Big Five…
People use these services that work well, work easily… I only use open source… Yes, it is harder… Why are so few questioning and critiquing that? We feed the beast on an every day basis… It is our universities – also moving to decentralised Big Five platforms in preference to their own, it is our government… and if we are not critical what happens?
Panel Discussion
Cornelius: Many here study internet governance… So I want to ask, Kate, does AI rule the internet?
Kate: I think it is really hard to think about who rules the internet. The interesting thing about automated decision making networks have been with us for a while… It’s less about ruling, and who… And it’s more about the entanglements, fragmentation and governance. We talk about the Big Five… I would probably say there are Seven companies here, deciding how we get into university, healthcare, housing, filtering far beyond the internet… And governments do have a role to play.
Cornelius: How do we govern what we don’t understand?
Kate: That’s a hard question… That keeps me up at night that question… Governments look to us academics, technology sectors, NGOs, trying to work out what to do. We need really strong research groups to look at this – we tried to do this with AI Now. Interdisciplinary is crucial – these issues cannot be solved by computer science alone or social science alone… This is the biggest challenge of the next 50 years.
Cornelius: What about how national governments can legislate for Facebook, say? (I’m simplifying a longer question that I didn’t catch in time here, correction welcome!)
Carolyn: I’m not sure about Facebook but in our digital methods workshop we talked about how on Twitter content can be deleted, that can then be exposed in other locations via the API. And it is also the case that these services are specific and localised… We expect national governments to have some governance, when what you understand and how you access information varies by location… Increasing that uncanny notion. I also wanted to comment on something you asked Kate – thinking about the actors here, they all require engagement of users – something Fieke pointed to. Those actors involved in rulers are dependent on actions of other actors.
Cornelius: So how else we be running these things? The Chinese option, the Russion options, are there better options?
Carolyn: I think I cannot answer – I’d want to put it to these 570 smart people for the next two days. My answer would be to acknowledge distributedness to which we have to respond and react… We cannot understand algorithms and AI without understanding context…
Carolyn: Fieke, what you talked about… Being extreme… Are we whining because as Europeans we are being colonised by other areas of the world, even as we use and are obsessed by our devices and tools – complaining then checking our iPhones. I’m serious… If we did care that much, maybe actions would change… You said people have the power here, maybe it’s not a big enough issue…
Fieke: Is it Europeans concerned about Americans from a libertarian point of view? Yes. I work mainly in non-European parts of the world and particularly in the North America… For many the internet is seen as magical and neutral – but those who research it we know it is not. But when you ask why people use tools, it’s their friends or community. If you ask them who owns it, that raises questions that are framed in a relevant way. The framing has to fit people’s reality. In South America talk of Facebook Sero as the new colonialism, you will have a political conversation… But we also don’t always know why we are uncomfortable… It can feel abstract, distant, and the concern is momentary. Outside of this field, people don’t think about it.
Kate: Your provocation that we could just step away, and move to open source. The reality includes opportunity costs to employment, to friends and family… But even if you do none of those things then you walk down the streets and you are tracked by sensors, by other devices…
Fieke: I absolutely agree. All the data collected beyond our control is the concern… But we can’t just roll over and die, we have to try and provoke and find mechanisms to play…
Kate: I think that idea of what the political levers may be… Those conversations about legal, ethical, technical parameters seem crucial, more than consumer choice. But I don’t think we have sufficient collective models of changing information ecologies… and they are changing so rapidly.
Q&A
Q1) Thank you for this wonderful talk and perspectives here. You talked about the infrastructure layer… What about that question? You say this 1% of 1% own the internet, but do they own the infrastructure? Facebook is trying to balloon in the internet so that they cannot be cut off… It also – second question – used to be that YOU owned the internet, and that changed with the dominance of big companies… This happens in history quite often… So what about that?
A1 – Fieke) I think that Kate talked about the many levels of ownership… Facebook piggybacks on other infrastructures, Google does the balloons. It used to be that governments owned the infrastructure. There are new cables rolling out… EU funding, governments, private companies, rich people… The infrastructure is mainly owned by companies now.
A1 – Kate) I think infrastructure studies has been extraordinarily rich – the work of Nicole Serafichi for instance – but we also have art responses. Infrastructure is very of the moment… But what happens next… It is not just about infrastructures and their ownership, but also surveillance access to these. There are things like mesh networks… And there are people working here in Berlin to flag up fake police networks during protests to help protestors protect themselves.
A1 – Carolyn) I think that platforms would have argued differently ten years ago about who owned the internet – but “you” probably wouldn’t have been the answer…
Q2) I wonder if the real issue is that we are running on very vague ideas of government that have been established for a very different world. People are responding to elections and referenda in very irrational ways that suggest that model is not fit for purpose. Is there a better form of governance or democracy that we should move towards? Can AI help us there?
A2 – Kate) What a beautiful and impossible to answer question! Obviously I cannot answer that properly but part of the reason I do AI research is to try to inform and shape that… Hence my passion for building research in this space. We don’t have much data to go on but the imaginative space here has been dominated by those with narrow ideas. I want to think about how communities can develop and contribute to AI, and what potential there is.
Q3) Do we need to rethink what we mean by democratic control and regulations… Regulations are closely associated with nation states, but that’s not the context in which most of the internet operates. Do we need to re-engage with the question of globalisation again?
A3) As Carolyn said, who is the “you” in web 2.0, and whose narrative is there? Globalisation is similar. I pay taxes to a nation state that has rules of law and governance… By denying that, we buy into the narrative of mainly internet companies and huge multinational organisations.
Cornelius: I have the Declaration of the Independence of Cyberspace by John Perry Barlow which I was tempted to quote to you… But it is interesting to reflect on how we have moved from utopian positions to where we are today.
Q4 – participant from Google!) There is an interesting question here… If this question were pointing to a deeper truth… A clear ruler, and a single internet, would allow this question of who rules to be answered. I would rather ask how we have agency over the proliferation of internet technologies and how we benefit from them… ?
A4 – Kate) A great title, but long for the programme! But your phrasing is so interesting – if it is so diverse and complex then how we engage is crucial. I think that is important but, the optimistic part, I think we can do this.
A4 – Carolyn) One way to engage is through dissent… and negotiating on a level that ensures platforms work beyond economic values…
Q5) The last time I was forced to give away my data was by the Australian state (where I live) in completing the census… I had to complete it or I would be fined over $1000 AUS – Facebook, Twitter, etc. never did that… I rule this kind of internet, I am still free in my choices. But on the other hand why is it that states that are best at governing platforms are the ones I want to live in the least. Maybe without the platforms no-one would use the internet so we’d have one problem less… If we as academics think about platforms in these mythic ways, maybe we end up governing in a way that is more controlled and has undesirable effects.
A5 – Kate) Many questions there, I’ll address two of those. On the census I’d refer you to articles on that… A University of Cambridge study showed huge accuracy in determining marital status, sexuality and whether someone is a drug or alcohol user based on Facebook likes… You may feel free but those data patterns are being built. But we have to move beyond thinking that only by active participation do you contribute to these platforms…
A5 – Fieke) The Census issue you brought up is interesting… In the UK, US and Australia the Census is conducted by a contractor that is one of the world’s biggest arms manufacturers… You don’t give data to the Big Five… But… So, we do need to question the politics behind our actions… There is also a perception that having technical skills makes you superior to those without, and if we do that we create a whole new class system and that raises whole new questions.
Q6) The question of the internet raises issues of boundaries, and of how we do governance and rule-making. Ideally when we do that governance and rule-making there are values behind it… So what are the values that you think need to underlie those structures and systems…
A6 – Carolyn) I think values that do not discriminate against people through algorithmic processing, AI, etc. Those tools should allow people not to be discriminated against on the basis of things they have done in the past… But that requires understanding of how that discrimination is taking place now…
A6 – Kate) I love that question… All of these layers of control come with values baked in, we just don’t know what they are… I would be interested to see what values drop out of those systems – the ones that don’t fit the easy metricisation of our world. Some great things could fall out of feminist and race theory and the values from that…
A6 – Fieke) I would add that values should not just be about the individual, and should ensure that the collective is also considered…
Cornelius: Thank you for offering a glimmer of hope! Thank you all!
Oct 052016
 

If you’ve been following my blog today you will know that I’m in Berlin for the Association of Internet Researchers AoIR 2016 (#aoir2016) Conference, at Humboldt University. As this first day has mainly been about workshops – and I’ve been in a full day long Digital Methods workshop – we do have our first conference keynote this evening. And as it looks a bit different to my workshop blog, I thought a new post was in order.

As usual, this is a live blog post so corrections, comments, etc. are all welcomed. This session is also being videoed so you will probably want to refer to that once it becomes available as the authoritative record of the session. 

Keynote: The Platform Society – José van Dijck (University of Amsterdam) with Session Chair: Jennifer Stromer-Galley

We are having an introduction from Wolfgang (?) from Humboldt University, welcoming us and noting that AoIR 2016 has made the front page of a Berlin newspaper today! He also notes the hunger for internet governance information, understanding, etc. from German government and from Europe.

Wolfgang: The theme of “Internet Rules!” provides lots of opportunities for keynotes, discussions, etc. and it allows us to connect the ideas of internet and society without deterministic structures. I will now hand over to the session chair Cornelius Puschmann.

Cornelius: It falls to me to do the logistical stuff… But first we have 570 people registered for AoIR 2016  so we have a really big conference. And now the boring details… which I won’t blog in detail here, other than to note the hashtag list:

  • Official: #aoir2016
  • Rebel: #aoir16
  • Retro: #ir17
  • Tim Highfield: #itisthesevebeenthassociationofinternetresearchersconferenceanditishappeningin2016

And with that there is a reminder of some of the more experimental parts of the programme to come.

Jennifer: Huge thanks to all of my colleagues here for turning this crazy idea into this huge event with a record number of attendees! Thank you to Cornelius, our programme chair.

Now to introduce our speaker… José van Dijck, professor at the University of Amsterdam, who has also held visiting positions across the world. She is the first woman to hold the Presidency of the Royal Netherlands Academy of Arts and Sciences. Her most recent book is The Culture of Connectivity: A Critical History of Social Media. It takes a critical look back at social media and social networking, not only as social spaces but as business spaces. And her lecture tonight will give a preview of her forthcoming work on Public Values in a Platform Society.

José: It is lovely to be here, particularly on this rather strange day… I became President of the Royal Academy this year and today my colleague won the Nobel Prize in Chemistry – so instead of preparing for my keynote today I was dealing with press inquiries, so it is nice to focus back on my real job…

So a few years ago Thomas Poell wrote an article on the politics of social platforms. His work on platforms inspired my work on networked platforms being interwoven into an ecology, economically and socially. The last chapter of that book was on platforms, and since I wrote it many of those have become the main players… I talked about Google (now Alphabet), Facebook, Amazon, Microsoft, LinkedIn (now owned by Microsoft), Apple… And since then we’ve seen other players coming in and creating change – like Uber, AirBnB, Coursera. These platforms have become the gateways to our social life… And they have consolidated and expanded…

So a Platform is an online site that deploys automated technologies and business models to organise data streams, economic interactions, and social exchanges between users of the internet. That’s the core of the social theory I am using. Platforms ARE NOT simple facilitators, and they are not stand alone systems – they are interconnected.

And a Platform Ecosystem is an assemblage of networked platforms, governed by its own dynamics and operating on a set of mechanisms…

Now a couple of years ago Thomas and I wrote about platform mechanisms and the very important idea of “Datafication”. Commodification – a platform’s business model and governance – defines the way in which datafied information is transformed into (economic, societal) value. There are many business models and many governance models – they vary, but governance models are maybe more important than business models, and they can be hard to pin down. Selection is about data flows filtered by algorithms and bots, allowing for automated selection such as personalisation, rankings, reputation. Those mechanisms are not visible right now, and we need to make them explicit so that we can talk about them and their implications. Can we hold Facebook accountable for the Newsfeed in the way that traditional media are accountable? That’s an important question for us to consider…

The platform ecosystem is not a level playing field. Platforms gain traction not through money but through the number of users, and network effects mean that user numbers are the way we understand the size of the network. There is platformisation (thanks Anne Helmond?) across sectors… And that power is gained through cross-ownership and cross-platform operation, but also through their architecture and shared platforms. In our book we’ll cover both private and public sectors and how they are penetrated by platform ecosystems. We used to have big oil companies, or big manufacturing companies… But now big companies operate across sectors.

So transport for instance… Uber is huge, partly financed by Google and also in competition with Google. If we look at News as a sector we have Huffington Post, Buzzfeed, etc. they are also used as content distribution and aggregators for Google, Facebook, etc.

In health – a sector where platformisation is proliferating fast – we see fitness and health apps, with Google and Apple major players here. And in your neighbourhood there are apps available, some of these global apps localised to your neighbourhood, sitting alongside massive players.

In Education we’ve seen the rise of Massive Online Open Courses, with Microsoft and Google investing heavily alongside players like EdX, Coursera, Udacity, FutureLearn, etc.

All of the sectors are undergoing platformisation… And if you look across them all, across all areas of private and public life, the activity is revolving around the big five: Google, Facebook, Apple, Amazon, Microsoft, with LinkedIn and Twitter also important. And take, for example, AirBnB…

A platform society is a society in which social, economic and interpersonal traffic is largely channelled by an (overwhelmingly corporate) global online platform ecosystem that is driven by algorithms and fuelled by data. That’s not a revolution, it’s something we are part of and see every day.

Now we have promises of “participatory culture” and the euphoria of the idea of web 2.0, and of individuals contributing. More recently that idea has shifted to the idea of the “sharing economy”… But sharing has shifted in its meaning too. It is about sharing resources or services for some sort of fee, a transaction-based idea. And from 2015 we see awareness of the negative sides of the sharing economy. So a Feb 2015 Time cover read: “Strangers crashed my car, ate my food and wore my pants. Tales from the sharing economy” – about the personal discomfort of the downsides. And we see Technology Quarterly writing about “When it’s not so good to share” – from the perspective of securing the property we share. But there is more at stake than personal discomfort…

We have started to see disruptive protest against private platforms, like posters against AirBnB. City Councils have to hire more inspectors to regulate AirBnB hosts for safety reasons – a huge debate in Amsterdam now, and the public values changing as a consequence of so many AirBnB hosts in this city. And there are more protests about changing values… Saying people are citizens not entrepreneurs, that the city is not for sale…

In another sector we see Uber protests, by various stakeholders. We see these from licensed taxi drivers, accusing Uber of ignoring safety issues and social values; but also protests by drivers. Uber do not call themselves a “transportation” company, instead calling themselves a connectivity company. Now Uber drivers have complained that Uber don’t pay insurance or pensions…

So, AirBnB and Uber are changing public values, they haven’t anchored existing values in their own design and development. There are platform promises and paradoxes here… They offer personalised services whilst contributing to the public good… The idea is that they are better at providing services than existing players. They promote community and connectedness whilst bypassing cumbersome institutions – based on the idea that we can do without big government or institutions, and without those values. These platforms also emphasize public values, whilst obscuring private gain. These are promises claiming that they are in the public interest… But that’s a paradox with hidden private gains.

And so how do we anchor collective, public values in a platform society, and how do we govern this? ? (I didn’t catch the name) has the idea of governance of platforms as opposed to governance by platforms. Our governments are mainly concerned with governing platforms – regulations, privacy etc. – and that is appropriate, but there are public values like fairness, like accuracy, like safety, like privacy, like transparency, like democracy… Those values are increasingly being governed by platforms, and that governance is hidden from us in the algorithms and design decisions…

Who rules the platform society? Who are the stakeholders here? There are many platform societies of course, but who can be held accountable? Well it is an intense ideological battleground… With private stakeholders like (global) corporations, businesses, (micro-)entrepreneurs; consumer groups; consumers. And public stakeholders like citizens; co-ops and collectives, NGOs, public institutions, governments, supra-national bodies… And matching those needs up is never going to happen really…

Who uses health apps here? (Many do.) In 2015 there were 165,000 health apps in the Google Play store. Most of them promise personalised health and, whilst that is in the future, they track data… They take data straight from the individual to companies, bypassing other actors and health providers… They manage a wide variety of data flows (patients, doctors, companies). There is a variety of business models, often particularly unclear. There is a site called “Patients like me” which says that it is “not just for profit” – so it is for profit, but not just for profit… Data has become currency in our health economy. And that private gain is hiding behind the public good argument. A few months ago in Holland we started to have insurance discounts (5%) if you send FitBit scores… But I think the next step will be paying more if you do not send your scores… That’s how public values change…

Finally we have regulation – government should be regulating security, safety, accuracy, and privacy. It takes the Dutch equivalent of the FDA 6 months to check the safety and accuracy of one app – and if it is updated, you have to start again! In the US the Dept of Health and Human Services, Office of the National Coordinator for Health Information Technology (ONC), Office for Civil Rights (OCR) and Food and Drug Administration (FDA) released a guide called “Developing a mobile health app?” providing guidance on which federal laws need to be followed. And we see not just insurers using apps, but insurance and healthcare providers having to buy data services from providers, and that changing the impact of these apps. You have things like 23andMe, and those are global – which raises global regulation issues – so it is hard to govern around that. But our platform ecosystem is transnational, and governments are national. We also see platforms coming from technology companies – Philips was building physical kit, MRI machines, but it now models itself as a data company. What you see here is that the big five internet and technology players are also big players in this field – Google Health and 23andMe (financed by Sergey Brin, run by his ex-wife), Apple HealthKit, etc. And even then you have small independent apps like mPower, but they are distributed via the app stores, led by big players and, again, hard to govern.

 

We used to build trust in society through institutions and institutional norms and codes, which were subject to democratic controls. But these are increasingly bypassed… And that may be subtle but it is going uncontrolled. So, how can we build trust in a platformed world? Well, we have to understand who rules the platform ecosystem, and by understanding how it is governed. And when you look at this globally you see competing ideological hemispheres… You see the US model of commercial values, and those are literally imposed on others. And you have Yandex and the Chinese model, and that’s an interesting model…

I think coming back to my main question: what do we do here to help? We can make visible how this platformised society works… So I did a presentation a few weeks ago and shared recommendations there for users:

  • Require transparency in platforms
  • Do not trade convenience for public values
  • Be vigilant, be informed

But can you expect individuals to understand how each app works and what its implications are? I think governments have a key role in protecting citizens’ rights here.

In terms of owners and developers my recommendations are:

  • Put long-term trust over short-term gain
  • Be transparent about data flows, business models, and governance structure
  • Help encode public values in platform architecture (e.g. privacy by design)

A few weeks back the New York Times ran an article on holding algorithms accountable, and I think that that is a useful idea.

I think my biggest recommendations are for governments, and they are:

  • Defend public values and common good; negotiate public interests with platforms. What it could also do is to, for instance, legislate to manage demands and needs in how platforms work.
  • Upgrade regulatory institutions to deal with the digital constellations we are facing.
  • Develop (inter)national blueprint for a democratic platform society.

And we, as researchers, can help expose and explain the platform society so that it is understood and engaged with in a more knowledgeable way. Governments have a special responsibility to govern the networked society – right now it is a Wild West. We are struggling to resolve these issues, so how can we help govern the platforms to shape society, when the platforms themselves are so enormous and powerful? In Europe we see platforms that are mainly US-based private sector spaces, and they are threatening public sector organisations… It is important to think about how we build trust in that platform society…

Q&A

Q1) You talked about private interests being concealed by public values, but you didn’t talk about private interests of incumbents…

A1) That is important of course. Those protests that I mentioned do raise some of those issues – undercutting prices by not paying for insurance, pensions etc. of taxi drivers. In Europe those can be up to 50% of costs, so what do we do with those public values, how do we pay for this? We’ll pay for it one way or the other. The incumbents do have their own vested interests… But there are also social values there… If we want to retain those values though, we need to find a model for that… European economic models have had collective values inscribed in them… If that is outmoded, then fine, but how do we build those values in in other ways…

Q2) I think in my context in Australia at least the Government is in cahoots with private companies, with public-private partnerships and security arms of government heavily benefitting from data collection and surveillance… I think that government regulating these platforms is possible, I’m not sure that they will.

A2) A lot of governments are heavily invested in private industries… I am not anti-companies or anti-government… My first goal is to make them aware of how this works… I am always surprised how little governments are aware of what runs underneath the promises and paradoxes… There is reluctance from regulators to work with companies, but there is also exhaustion and a lack of understanding about how to update regulations and processes. How can you update health regulations with 165k health apps out there? I probably am an optimist… But I want to ensure governments are aware and understand how this is transforming society. There is so much ignorance in the field, and there is naivety about how this will play out. Yes, I’m an optimist. But I do think there is something we can do to shape the direction in which the platform society will develop.

Q3) You have great faith in regulation, but there are real challenges and issues… There are many cases where governments have colluded with industry to inflate the costs of delivery. There is the idea of regulatory capture. Why should we expect regulators to act in the public interest when historically they have acted in the interest of private companies?

A3) It’s not that I put all my trust there… But I’m looking for a dialogue with whoever is involved in this space, in the contested play of where we start… It is one of many actors in this whole contested battlefield. I don’t think we have the answers, but it is our job to explain the underlying mechanisms… And I’m pretty shocked by how little they know about the platforms and the underlying mechanisms there. Sometimes it’s hard to know where to start… But you have to make a start somewhere…

Oct 052016
 

After a few weeks of leave I’m now back and spending most of this week at the Association of Internet Researchers (AoIR) Conference 2016. I’m hugely excited to be here as the programme looks excellent with a really wide range of internet research being presented and discussed. I’ll be liveblogging throughout the week starting with today’s workshops.

This is a liveblog so all corrections, updates, links, etc. are very much welcomed – just leave me a comment, drop me an email or similar to flag them up!

I am booked into the Digital Methods in Internet Research: A Sampling Menu workshop, although I may be switching session at lunchtime to attend the Internet rules… for Higher Education workshop this afternoon.

The Digital Methods workshop is being chaired by Patrik Wikstrom (Digital Media Research Centre, Queensland University of Technology, Australia) and the speakers are:

  • Erik Borra (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Axel Bruns (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Jean Burgess (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Carolin Gerlitz (University of Siegen, Germany),
  • Anne Helmond (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Ariadna Matamoros Fernandez (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Peta Mitchell (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Richard Rogers (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Fernando N. van der Vlist (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Esther Weltevrede (Digital Methods Initiative, University of Amsterdam, the Netherlands).

I’ll be taking notes throughout but the session materials are also available here: http://tinyurl.com/aoir2016-digmethods/.

Patrik: We are in for a long and exciting day! I won’t introduce all the speakers as we won’t have time!

Conceptual Introduction: Situating Digital Methods (Richard Rogers)

My name is Richard Rogers, I’m professor of new media and digital culture at the University of Amsterdam and I have the pleasure of introducing today’s session. So I’m going to do two things, I’ll be situating digital methods in internet-related research, and then taking you through some digital methods.

I would like to situate digital methods as a third era of internet research… I think all of these eras thrive and overlap but they are differentiated.

  1. Web of Cyberspace (1994-2000): Cyberstudies was an effort to see difference in the internet, the virtual as distinct from the real. I’d situate this largely in the 90’s and the work of Steve Jones and Steve (?).
  2. Web as Virtual Society? (2000-2007) saw virtual as part of the real. Offline as baseline and “virtual methods” with work around the digital economy, the digital divide…
  3. Web as societal data (2007-) is about “virtual as indication of the real”. Online as baseline.

Right now we use online data about society and culture to make “grounded” claims.

So, if we look at Allrecipes.com Thanksgiving recipe searches on a map we get some idea of regional preference, or we look at Google data in more depth, we get this idea of internet data as grounding for understanding culture, society, tastes.

So, we had this turn in around 2008 to “web as data” as a concept. When this idea was first introduced not all were comfortable with the concept. Mike Thelwall et al (2005) talked about the importance of grounding the data from the internet. So, for instance, Google’s flu trends can be compared to Wikipedia traffic etc. And with these trends we also get the idea of “the internet knows first”, with the web predicting other sources of data.

Now I do want to talk about digital methods in the context of digital humanities data and methods. Lev Manovich talks about Cultural Analytics. It is concerned with digitised cultural materials, with materials clusterable in a sort of art historical way – by hue, style, etc. And so this is a sort of big data approach that substitutes “continuous change” for periodisation and continuation for categorisation. So, this approach can, for instance, be applied to Instagram (Selfiexploration), looking at mood, aesthetics, etc. And then we have Culturomics, mainly through the Google Ngram Viewer. A lot of linguists use this to understand subtle differences as part of distant reading of large corpuses.

And I also want to talk about e-social sciences data and method. Here we have Webometrics (Thelwall et al) with links as reputational markers. The other tradition here is Altmetrics (Priem et al), which uses online data to do citation analysis, with social media data.

So, at least initially, the idea behind digital methods was to be in a different space. The study of online digital objects, and also natively online method – methods developed for the medium. And natively digital is meant in a computing sense here. In computing software has a native mode when it is written for a specific processor, so these are methods specifically created for the digital medium. We also have digitized methods, those which have been imported and migrated methods adapted slightly to the online.

Generally speaking there is a sort of protocol for digital methods: Which objects and data are available? (links, tags, timestamps); how do dominant devices handle them? etc.

I will talk about some methods here:

1. Hyperlink

For hyperlink analysis there are several methods. The Issue Crawler software, still running and working, enables you to see links between pages, direction of linking, aspirational linking… For example a visualisation of an Armenian NGO shows the dynamics of an issue network, showing the politics of association.

The other method that can be used here takes a list of sensitive sites, using Issue Crawler, then passes it through an internet censorship service. And there are variations on this that indicate how successful attempts at internet censorship are. We do work on Iran and China and I should say that we are always quite thoughtful about how we publish these results because of their sensitivity.

2. The website as archived object

We have the Internet Archive and we have individual archived web sites. Both are useful but researcher use is not terribly significant, so we have been doing work on this. See also a YouTube video called “Google and the politics of tabs” – a technique to create a movie of the evolution of a webpage in the style of timelapse photography. I will be publishing soon about this technique.

But we have also been looking at historical hyperlink analysis – giving you that context that you won’t see represented in archives directly. This shows the connections between sites at a previous point in time. We also discovered that the “Ghostery” plugin can also be used with archived websites – for trackers and for code. So you can see the evolution and use of trackers on any website/set of websites.

6. Wikipedia as cultural reference

Note: the numbering is from a headline list of 10, hence the odd numbering… 

We have been looking at the evolution of Wikipedia pages, understanding how they change. It seems that pages shift from neutral to national points of view… So we looked at Srebrenica and how that is represented. The pages here have different names, indicating differences in the politics of memory and reconciliation. We have developed a triangulation tool that grabs links and references and compares them across different pages. We also developed comparative image analysis that lets you see which images are shared across articles.

7. Facebook and other social networking sites

Facebook is, as you probably well know, a social media platform that is relatively difficult to pin down at a moment in time. If you try to pin down the history of Facebook you find that very hard – it hasn’t been in the Internet Archive for four years, and the site changes all the time. We have developed two approaches: one for social media profiles and interest data as a means of studying cultural taste and political preference, or “Postdemographics”; and “Networked content analysis”, which uses social media activity data as a means of studying “most engaged with content” – that helps with the fact that profiles are no longer available via the API. To some extent the API drives the research, but then taking a digital methods approach we need to work with the medium, and find which possibilities are there for research.

So, one of the projects undertaken in this space was elFriendo, a MySpace-based project which looked at the cultural tastes of “friends” of Obama and McCain during their presidential race. For instance Obama’s friends best liked Lost and The Daily Show on TV; McCain’s liked Desperate Housewives, America’s Next Top Model, etc. Very different cultures and interests.

Now the Networked Content Analysis approach, where you quantify and then analyse, works well with Facebook. You can look at pages and use data from the API to understand the pages and groups that liked each other, to compare memberships of groups etc. (at the time you were able to do this). In this process you could see specific administrator names, and we did this with right wing data working with a group called Hope not Hate, who recognised many of the names that emerged here. Looking at most liked content from groups you also see the shared values, cultural issues, etc.

So, you could see two areas of Facebook Studies: Facebook I (2006-2011), about presentation of self – profiles and interests studies (with ethics); Facebook II (2011-), which is more about social movements. I think many social media platforms are following this shift – or would like to. So in Instagram Studies, Instagram I (2010-2014) was about selfie culture, but has shifted to Instagram II (2014-), concerned with antagonistic hashtag use for instance.

Twitter has done this and gone further… Twitter I (2006-2009) was about urban lifestyle tool (origins) and “banal” lunch tweets – their own tagline of “what are you doing?”, a connectivist space; Twitter II (2009-2012) has moved to elections, disasters and revolutions. The tagline is “what’s happening?” and we have metrics “trending topics”; Twitter III (2012-) sees this as a generic resource tool with commodification of data, stock market predictions, elections, etc.

So, I want to finish by talking about work on Twitter as a storytelling machine for remote event analysis. This is an approach we developed some years ago around the Iran event crisis. We made a tweet collection around a single Twitter hashtag – which is no longer done – and then ordered by most retweeted (top 3 for each day) and presented in chronological (not reverse) order. And we then showed those in huge displays around the world…

To take you back to June 2009… Mousavi holds an emergency press conference. Voter turnout is 80%. SMS is down. Mousavi’s website and Facebook are blocked. Police use pepper spray… The first 20 days of most popular tweets is a good succinct summary of the events.

So, I’ve taken you on a whistle stop tour of methods. I don’t know if we are coming to the end of this. I was having a conversation the other day that the Web 2.0 days are over really, the idea that the web is readily accessible, that APIs and data is there to be scraped… That’s really changing. This is one of the reasons the app space is so hard to research. We are moving again to user studies to an extent. What the Chinese researchers are doing involves convoluted processes to getting the data for instance. But there are so many areas of research that can still be done. Issue Crawler is still out there and other tools are available at tools.digitalmethods.net.

Twitter studies with DMI-TCAT (Fernando van der Vlist and Emile den Tex)

Fernando: I’m going to be talking about how we can use the DMI-TCAT tool to do Twitter Studies. I am here with Emile den Tex, one of the original developers of this tool, alongside Eric Borra.

So, what is DMI-TCAT? It is the Digital Methods Initiative Twitter Capture and Analysis Toolset, a server-side tool which aims to provide robust and reproducible data capture and analysis. The design is based on two ideas: that captured datasets can be refined in different ways; and that the datasets can be analysed in different ways. Although we developed this tool, it is also in use elsewhere, particularly in the US and Australia.

So, how do we actually capture Twitter data? Some of you will have some experience of trying to do this. As researchers we don’t just want the data, we also want to look at the platform in itself. If you are in industry you get Twitter data through a “data partner”, the biggest of which by far is GNIP – owned by Twitter as of the last two years – then you just pay for it. But it is pricey. If you are a researcher you can go to an academic data partner – DiscoverText or Hexagon – and they are also resellers but they are less costly. And then the third route is the publicly available data – REST APIs, Search API, Streaming APIs. These are, to an extent, the authentic user perspective as most people use these… We have built around these but the available data and APIs shape and constrain the design and the data.

For instance the “Search API” prioritises “relevance” over “completeness” – but as academics we don’t know how “relevance” is being defined here. If you want to do representative research then completeness may be most important. If you want to look at how Twitter prioritises the data, then that Search API may be most relevant. You also have to understand rate limits… This can constrain research, as different data has different rate limits.
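
Note: as a rough illustration (not TCAT’s code, and not anything shown in the session) of what collecting from the public Search API looked like in Python at this time, a minimal sketch; the keys, the query and the tweepy 3.x calls are my own assumptions:

    # Minimal sketch of Search API collection with tweepy (3.x era); keys and
    # query are placeholders.
    import tweepy

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

    # wait_on_rate_limit makes tweepy sleep whenever a rate limit window is
    # exhausted, which is exactly the constraint described above.
    api = tweepy.API(auth, wait_on_rate_limit=True)

    tweets = []
    for status in tweepy.Cursor(api.search, q="#COP21", count=100).items(1000):
        tweets.append(status._json)  # keep the raw JSON for later analysis

    print(len(tweets), "tweets collected")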

So there are many layers of technical mediation here, across three big actors: Twitter platform – and the APIs and technical data interfaces; DMI-TCAT (extraction); Output types. And those APIs and technical data interfaces are significant mediators here, and important to understand their implications in our work as researchers.

So, onto the DMI-TCAT tool itself – more on this in Borra & Rieder (2014) (doi:10.1108/AJIM-09-2013-0094). They talk about “programmed method” and the idea of the methodological implications of the technical architecture.

What can one learn if one looks at Twitter through this “programmed method”? Well (1) Twitter users can change their Twitter handle, but their ids will remain identical – sounds basic but it’s important to understand when collecting data. (2) The length of a Tweet may vary beyond the maximum of 140 characters (mentions and urls). (3) Native retweets may have their top-level text property shortened. (4) Unexpected limitations: support for new emoji characters can be problematic. (5) It is possible to retrieve a deleted tweet.

So, for example, a tweet can vary beyond 140 characters. The Retweet of an original post may be abbreviated… Now we don’t want that, we want it to look as it would to a user. So, we capture it in our tool in the non-truncated version.
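
Note: a sketch of the kind of reconstruction described here. For native retweets the raw JSON returned by the REST API carries the original tweet under retweeted_status, so the full text can be rebuilt rather than relying on the possibly truncated top-level field (my illustration, not TCAT’s implementation):

    # Rebuild the full "RT @user: ..." text of a native retweet from the
    # embedded original status, avoiding the truncated top-level text.
    def full_tweet_text(status):
        """status is the raw JSON dict returned by the Twitter REST API."""
        if "retweeted_status" in status:
            original = status["retweeted_status"]
            return "RT @{}: {}".format(original["user"]["screen_name"],
                                        original["text"])
        return status["text"]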

And, on the issue of deletion and withholding: there are tweets deleted by users, and there are tweets which are withheld by the platform – and the withholding is a country-by-country issue, so you can see tweets only available in some countries. A project that uses this information is “Politwoops” (http://politwoops.sunlightfoundation.com/) which captures tweets deleted by US politicians, and lets you filter to specific states, party, position. Now there is an ethical discussion to be had here… We don’t know why tweets are deleted… We could at least talk about it.

So, the tool captures Twitter data in two ways. Firstly there are the direct capture capabilities (via the web front-end) which allow tracking of users and capture of public tweets posted by these users; tracking particular terms or keywords, including hashtags; and getting a small random sample (approx 1%) of all public statuses. Secondary capture capabilities (via scripts) allow further exploration, including user ids, deleted tweets etc.

Twitter as a platform has a very formalised idea of sociality, the types of connections, parameters, etc. When we use the term “user” we mean it in the platform defined object meaning of the word.

Secondary analytical capabilities, via script, also allows further work:

  1. support for geographical polygons to delineate geographical regions for tracking particular terms or keywords, including hashtags.
  2. Built-in URL expander, following shortened URLs to their destination, allowing further analysis, including of which statuses are pointing to the same URLs (see the short sketch after this list).
  3. Download media (e.g. videos and images) attached to particular Tweets.
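
Note: as a rough illustration of what a URL expander does (not TCAT’s own code), a short Python sketch that follows redirects to the final destination so that different short URLs pointing at the same page can be grouped:

    # Follow redirects from a shortened URL and record the final destination.
    import requests

    def expand_url(short_url, timeout=10):
        try:
            # HEAD keeps the request light; allow_redirects follows the chain.
            response = requests.head(short_url, allow_redirects=True,
                                     timeout=timeout)
            return response.url
        except requests.RequestException:
            return short_url  # keep the original if expansion fails

    print(expand_url("https://t.co/example"))  # hypothetical shortened URL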

So, we have this tool but what sort of studies might we do with Twitter? Some ideas to get you thinking:

  1. Hashtag analysis – users, devices etc. Why? They are often embedded in social issues.
  2. Mentions analysis – users mentioned in contexts, associations, etc. allowing you to e.g. identify expertise.
  3. Retweet analysis – most retweeted per day.
  4. URL analysis – the content that is most referenced.

So Emile will now go through the tool and how you’d use it in this way…

Emile: I’m going to walk through some main features of the DMI TCAT tool. We are going to use a demo site (http://tcatdemo.emiledentex.nl/analysis/) and look at some Trump tweets…

Note: I won’t blog everything here as it is a walkthrough, but we are playing with timestamps (the tool uses UTC), search terms etc. We are exploring hashtag frequency… In that list you can see Bengazi, tpp, etc. Now, once you see a common hashtag, you can go back and query the dataset again for that hashtag/search terms… And you can filter down… And look at “identical tweets” to found the most retweeted content. 

Emile: Eric called this a list making tool – it sounds dull but it is so useful… And you can then put the data through other tools. You can put tweets into Gephi. Or you can do exploration… We looked at Getty Parks project, scraped images, reverse Google image searched those images to find the originals, checked the metadata for the camera used, and investigated whether the cost of a camera was related to the success in distributing an image…

Richard: It was a critique of user generated content.

Analysing Social Media Data with TCAT and Tableau (Axel Bruns)

My talk should be a good follow on from the previous presentation as I’ll be looking at what you can do with TCAT data outside and beyond the tool. Before I start I should say that both Amsterdam and QUT are holding summer schools – and we have different summers! – so do have a look at those.

You’ve already heard about TCAT so I won’t talk more about that except to talk about the parts of TCAT I have been using.

TCAT Data Export allows you to export all tweets from a selection – containing all of the tweets and information about them. You can also export a table of hashtags – tweet ids from your selection and hashtags; and mentions – tweet ids from your selection with mentions and mention type. You can export other things as well – known users (politicians, celebrities, etc.); URLs; etc. And the structure that emerges is the main TCAT export file (“full export”) and associated Hashtags, Mentions, and any other additional data. If you are familiar with SQL you are essentially joining databases here. If not then that’s fine, Tableau does this for you.

In terms of processing the data there are a number of tools here. Excel just isn’t good enough at scale – it tops out at around a million rows and that Trump dataset was 2.8M already. So a tool that I and many others have been working with is Tableau. It’s a tool that copes with scale; it’s a user-friendly, intuitive, all-purpose data analytics tool, but the downside is that it is not free (unless you are a student or are using it in teaching). Alongside that, for network visualisation, Gephi is the main tool at the moment. That’s open source and free and a new version came out in December.

So, into Tableau and an idea of what we can do with the data… Tableau enables you to work with data sources of any form: databases, spreadsheets, etc. So I have connected the full export I’ve gotten from TCAT… I have linked the main file to the hashtag and mention files. Then I have also generated an additional file that expands the URLs in that data source (you can now do this in TCAT too). This is a left join – one main table that the other tables are connected to. I’ve connected based on (tweet) id. And the dataset I’m showing here is from the Paris 2015 UN Climate Change conference (COP21). And all the steps I’m going through today are in a PDF guidebook that is available at that session resources link (http://tinyurl.com/aoir2016-digmethods/).
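
Note: the same left join can be reproduced outside Tableau, for example in pandas; the file and column names below are hypothetical stand-ins for the TCAT exports, not the exact ones used in the session:

    # Join the full export to the hashtag and mention tables on the tweet id.
    import pandas as pd

    tweets = pd.read_csv("tcat_full_export.csv")   # one row per tweet
    hashtags = pd.read_csv("tcat_hashtags.csv")    # tweet id + hashtag
    mentions = pd.read_csv("tcat_mentions.csv")    # tweet id + mention + type

    # Left joins keep every tweet; tweets with several hashtags or mentions
    # appear on several rows after the join (the duplication issue discussed
    # later in this talk).
    df = (tweets
          .merge(hashtags, on="id", how="left")
          .merge(mentions, on="id", how="left"))

    print(len(tweets), "tweets,", len(df), "rows after joining")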

Tableau then tries to make sense of the data… Dimensions are the datasets which have been brought in, clicking on those reveals columns in the data, and then you see Measures – countable features in the data. Tableau makes sense of the file itself, although it won’t always guess correctly.

Now, we’ve joined the data here so that can mean we get repetition… If a tweet has 6 hashtags, it might seem to be 6 tweets. So I’m going to use the unique tweet ids as a measure. And I’ll also right click to ensure this is a distinct count.

Having done that I can begin to visualise my data and see a count of tweets in my dataset… And I can see when they were created – using Created at but also then finessing that to Hour (rather than default of Year). Now when I look at that dataset I see a peak at 10pm… That seems unlikely… And it’s because TCAT is running on Brisbane time, so I need to shift to CET time as these tweets were concerned with events in Paris. So I create a new Formula called CET, and I’ll set it to be “DateAdd (‘hour’, -9, [Created at])” – which simply allows us to take 9 hours off the time to bring it to the correct timezone. Having done that the spike is 3.40pm, and that makes a lot more sense!
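
Note: continuing that hypothetical pandas version of the workflow, the timezone shift and the distinct count per hour would look something like this (column names are again assumptions):

    # Shift timestamps from the capture timezone to CET and count *distinct*
    # tweet ids per hour, mirroring the DateAdd formula and the distinct-count
    # measure in Tableau.
    import pandas as pd

    df["created_at"] = pd.to_datetime(df["created_at"])
    df["cet"] = df["created_at"] - pd.Timedelta(hours=9)  # same -9 hour shift

    tweets_per_hour = df.groupby(df["cet"].dt.hour)["id"].nunique()
    print(tweets_per_hour.sort_values(ascending=False).head())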

Having generated that graph I can click on, say, the peak activity and see the number of tweets and the tweets that appeared. You can see some spam there – of course – but also widely retweeted tweet from the White House, tweets showing that Twitter has created a new emoji for the summit, a tweet from the Space Station. This gives you a first quick visual inspection of what is taking place… And you can also identify moments to drill down to in further depth.

I might want to compare Twitter activity with number of participating users, comparing the unique number of counts (synchronising axes for scale). Doing that we do see that there are more tweets when more users are active… But there is also a spike that is independent of that. And that spike seems to be generated by Twitter users tweeting more – around something significant perhaps – that triggers attention and activity.

So, this tool enables quantitative data analysis as a starting point or related route into qualitative analysis, the approaches are really inter-related. Quickly assessing this data enables more investigation and exploration.

Now I’m going to look at hashtags, seeing their volume against activity. By default the hashtags are ordered alphabetically, but that isn’t that useful, so I’m going to reorder by use. When I do that you can see that COP21 – the official hashtag – is by far the most popular. These tweets were collected from that hashtag but also from several search terms for the conference – official abbreviations for the event. And indeed some tweets have “Null” hashtags – no hashtags, just the search terms. You also see variance in spelling and capitalisation. Unlike Twitter, Tableau is case sensitive so I need to use some sort of formula to resolve this – combining terms into one hashtag. A quick way to do that is to use “LOWER([Hashtag])” which converts all data in the hashtag field to lower case. That clustering shows COP21 as an even bigger hashtag, but also identifies other popular terms. We do see spikes in a given hashtag – often very brief – and these often relate to one very popular and heavily retweeted tweet. So, e.g. a prominent actor/figure has tweeted – e.g. in this data set Cara Delevingne (a British supermodel) triggers a short sharp spike in tweets/retweets.

And we can see these hashtags here, their relative popularity. But remember that my dataset is just based on what I asked TCAT to collect… TCOT might be a really big hashtag but maybe they don’t usually mention my search terms, hence being smaller in my data set. So, don’t be fooled into assuming some of the hashtags are small/low use just because they may not be prominent in a collected dataset.

Turning now to Mentions… We can see several Mention Types: original/null (no mentions); mentions; retweet. You also see that mentions and retweets spikes at particular moments – tweets going viral, key figures getting involved in the event or the tweeting, it all gives you a sense of the choreography of the event…

So, we can now look at who is being mentioned. I’m going to take all Twitter users in my dataset… I’ll see how many tweets mention them. I have a huge Null group here – no mentions – so I’ll start by removing that. Among the most mentioned accounts we see COP21 being the biggest, and others such as Narendra Modi (chair of event?), POTUS, UNFCCC, Francois Hollande, the UN, Mashi Rafael, COP21en – the English language event account; EPN – Enrique Peña Nieto; Justin Trudeau; StationCDRKelly; C Figueres; India4Climate; Barack Obama’s personal account, etc. And I can also see what kind of mention they get. And you see that POTUS gets mentions but no retweets, whilst Barack Obama has a few retweets but mainly mentions. That doesn’t mean he doesn’t get retweets, just not in this dataset/search terms. By contrast Station Commander Kelly gets almost exclusively retweets… The balance of mentions, how people are mentioned, what gets retweeted etc… That is all a starting point for closer reading and qualitative analysis.

And now I want to look at who tweets the most… And you’ll see that there is very little overlap between the people who tweet the most, and the people who are mentioned and retweeted. The one account there that appears in both is COP21 – the event itself. Now some of the most active users are spammers and bots… But others will be obsessive, super-active users… Further analysis lets you dig further. Having looked at this list, I can look at what sort of tweets these users are sending… And that may look a bit different… This uses the Mention type and it may be that one tweet mentions multiple users, so get counted multiple times… So, for instance, DiploMix puts out 372 tweets… But when re-looked at for mentions and retweets we see a count of 636. That’s an issue you have to get your head around a bit… And the same issue occurs with hashtags. Looking at the types of tweets put out show some who post only or mainly original tweets, some who do mention others, some only or mainly retweet – perhaps bots or automated accounts. For instance DiploMix retweets diplomats and politicians. RelaxinParis is a bot retweeting everything on Paris – not useful for analysis, but part of lived experience of Twitter of course.

So, I have lots of views of data, and sheets saved here. You can export tables and graphs for publications too, which is very helpful.

I’m going to finish by looking at URLs mentioned… I’ve expanded these myself, and I’ve got the domain/path as well as the domain captured. I remove the NULL group here. And the most popular linked to domain is Twitter – I’m going to combine http and https versions in Tableau – but Youtube, UN, Leader of Iran, etc. are most popular. If I dig further into the Twitter domains, looking at Path, I can see whose accounts/profiles etc. are most linked to. If I dig into Station Commander Kelly you see that the most shared of these URLs are images… And we can look at that… And that’s a tweet we had already seen all day – a very widely shared image of a view of earth.
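
Note: the domain-level view of the URL analysis is easy to mimic in the hypothetical pandas workflow sketched above; pull the domain out of each expanded URL and collapse the http/https variants before counting (the expanded_url column is an assumption):

    # Count the most linked-to domains from the expanded URLs.
    from urllib.parse import urlparse

    def domain(url):
        return urlparse(url).netloc.lower()   # e.g. "twitter.com"

    top_domains = df["expanded_url"].dropna().map(domain).value_counts()
    print(top_domains.head(10))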

My time is up but I’m hoping this has been useful… This is the sort of approach I would take – exploring the data, using this as an entry point for more qualitative data analysis.

Analysing Network Dynamics with Agent Based Models (Patrik Wikström)

I will be talking about network dynamics and how we can understand some of the theory of network dynamics. And before I start a reminder that you can access and download all these materials at the URL for the session.

So, what are network dynamics? Well we’ve already seen graphs and visualisations of things that change over time. Network dynamics are very much about things that change and develop over time… So when we look at a corpus of tweets they are not all simultaneous, there is a dimension of time… And we have people responding to each other, to what they see around them, etc. So, how can we understand what goes on? We are interested in human behaviour, social behaviour, the emergence of norms and institutions, information diffusion patterns across multiple networks, etc. And these are complex and related to time, so we have to take time into account. We also have to understand how macro-level patterns emerge from local interactions between heterogeneous agents, and how macro-level patterns influence and impact upon those interactions. But this is hard…

It is difficult to capture complexity of such dynamic phenomena with verbal or conceptual models (or with static statistical models). And we can be seduced by big data. So I will be talking about using particular models, agent-based models. But what is that? Well it’s essentially a computer program, or a computer program for each agent… That allows it to be heterogeneous, autonomous and to interact with the environment and with other agents; that means they can interact in a (physical) space or as nodes in a network; and we can allow them to have (limited) perception, memory and cognition, etc. That’s something it is very hard for us to do and imagine with our own human brains when we look at large data sets.

The fundamental goal of this model is to develop a model that represents theoretical constructs, logics and assumptions and we want to be able to replicate the observed real-world behaviour. This is the same kind of approach that we use in most of our work.

So, a simple example…

Let’s assume that we start with some inductive idea. So we want to explain the emergence of the different social media network structures we observe. We might want some macro-level observations of Structure – clusters, path lengths, degree distributions, size; Time – growth, decline, cyclic; Behaviours – contagion, diffusion. So we want to build some kind of model to transfer or take our assumptions of what is going on, and translate that into a computer model…

So, what are our assumptions?

Well let’s say we think people use different strategies when they decide which accounts to follow, with factors such as familiarity, similarity, activity, popularity, randomness… They may all be different explanations of why I connect with one person rather than another… And let’s also assume that when a user joins Twitter they immediately start following a set of accounts, and once part of the network they add more. And let’s also assume that people are different – that’s really important! People are interested in different things – they have different passions, topics that interest them; some are more active, some are more passive. And that’s something we want to capture.

So, to do this I’m going to use something called NetLogo – which some of you may have already played with – a tool developed maybe 25 years back at Northwestern University. You can download it – or use a limited browser-based version – from: http://ccl.northwestern.edu/netlogo/.

In NetLogo we start with a 3 node network… I initialise the network and get three new nodes. Then I can add a new node… In this model I have a slider for “randomness” – if I set it to less random, it picks existing popular nodes, in the middle it combines popularity with randomness, and at most random it just adds nodes randomly…

So, I can run a simulation with about 200 nodes with randomness set to maximum… You can see how many nodes are present, how many friends the most popular node has, and how many nodes have very few friends (with 3, the minimum number of connections in this model). If I now change the formation strategy here and set randomness to zero… then we see the nodes connecting back to the same most popular nodes… A more broadcast-like network. This is a totally different kind of network.
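
Note: the NetLogo code itself wasn’t shared line by line, but the formation strategy described here is easy to sketch in Python with networkx: each new node makes three connections, chosen either at random or by popularity (preferential attachment via a random edge endpoint), controlled by a “randomness” parameter between 0 and 1. This is my own toy reimplementation of the idea, not the presenter’s model:

    import random
    import networkx as nx

    def grow_network(n_nodes=200, links_per_node=3, randomness=1.0, seed=42):
        rng = random.Random(seed)
        g = nx.Graph()
        g.add_edges_from([(0, 1), (1, 2), (0, 2)])   # initial 3-node network
        for new in range(3, n_nodes):
            targets = set()
            while len(targets) < links_per_node:
                if rng.random() < randomness:
                    targets.add(rng.choice(list(g.nodes())))     # random node
                else:
                    # popularity: endpoint of a random existing edge, so
                    # high-degree nodes are picked more often
                    targets.add(rng.choice(list(g.edges()))[rng.randint(0, 1)])
            g.add_edges_from((new, t) for t in targets)
        return g

    random_net = grow_network(randomness=1.0)      # random attachment
    broadcast_net = grow_network(randomness=0.0)   # popularity-driven
    print(max(dict(random_net.degree()).values()),
          max(dict(broadcast_net.degree()).values()))  # most popular node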

Now, another simulation here toggles the size of nodes to represent number of followers… Larger blobs represent really popular nodes… So if I run this in random mode again, you’ll see it looks very different…

So, why am I showing you this? Well I like to show a really simple model. This is maybe 50 lines of code – you could build it in a few hours. The first message is that it is easy to build this kind of model. And even though we have a simple model we have at least 200 agents… We normally work with thousands or at much greater scale, but you can still learn something here. You can see how to replicate the structure of a network. Maybe it is a starting point that requires more data to be added, but it is a place to start and explore. Even with a simple model you can use this to build theory, to guide data collection and so forth.

So, having developed a model you can set up a simulation to run hundreds of times, and analyse the outputs with your data analytics tools… So I've run my 200-node network for 5,000 simulations, comparing randomness against the maximum links to a node – helping us understand that different formation strategies create different structures. And that's interesting but it doesn't take us all the way. So I'd like to show you a different model that takes this a little bit further…
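
[Aside from me: a sketch of what that batch-running step could look like in Python, reusing the grow_network function from the sketch above – the speaker's own 5,000 runs were done with NetLogo and separate analytics tools, so this is only an illustration of the idea.]

```python
import statistics

def batch(randomness, runs=100):
    """Run the toy model repeatedly and average the size of the biggest hub."""
    max_degrees = []
    for _ in range(runs):
        G = grow_network(n_nodes=200, randomness=randomness)
        max_degrees.append(max(d for _, d in G.degree()))
    return statistics.mean(max_degrees)

for r in (0.0, 0.5, 1.0):
    print(f"randomness={r}: mean max degree ~ {batch(r):.1f}")
```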

This model is an extension of the previous model – with all the previous assumptions – so you have the two formation strategies, but also the other assumptions we were talking about… That I am more likely to connect to accounts with shared interests; and with that we generate a simulation which is perhaps a better representation of the kinds of network we might see. And this accommodates the idea that this network has content, sharing, and other aspects that inform what is going on in the formation of that network. This visualisation looks pretty but the useful part is the output you can get at an aggregate level… We are looking at the population level, seeing how interactions at the local level influence macro-level patterns and behaviours… We can look at in-degree distribution, we can look at out-degree… We can look at local clustering coefficients, longest/shortest path, etc. And my assumptions might be plausible and reasonable…
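
[Aside from me: again as an illustration only, the shared-interest assumption could be added to the toy model by giving each account a topic and weighting candidate connections by both popularity and topic affinity – a sketch with invented topic labels and weightings, not the model shown in the session.]

```python
import random
import networkx as nx

TOPICS = ["politics", "sport", "music", "tech"]

def grow_interest_network(n_nodes=200, links_per_node=3, randomness=0.5):
    G = nx.complete_graph(3)
    for n in G.nodes:
        G.nodes[n]["topic"] = random.choice(TOPICS)
    for new in range(3, n_nodes):
        topic = random.choice(TOPICS)
        candidates = list(G.nodes)
        weights = []
        for c in candidates:
            popularity = (1 - randomness) * G.degree(c) + randomness * 1.0
            affinity = 3.0 if G.nodes[c]["topic"] == topic else 1.0   # homophily bonus
            weights.append(popularity * affinity)
        targets = set()
        while len(targets) < links_per_node:
            targets.add(random.choices(candidates, weights=weights)[0])
        G.add_node(new, topic=topic)
        G.add_edges_from((new, t) for t in targets)
    return G
```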

So you can build models that give a much deeper understanding of real world dynamics… We are building an artificial network BUT you can combine this with real world data – load a real world network structure into the model and look at diffusion within that network, and understand what happens when one node posts something, what impact would that have, what information diffusion would that have…

So I’ve shown you NetLogo to play with these models. If you want to play around, that’s a great first step. It’s easy to get started with and it has been developed for use in educational settings. There is a big community and lots of models to use. And if you download NetLogo you can download that library of models. Pretty soon, however, I think you’ll find it too limited. There are many other tools you can use… But in general you can use any programming language that you want… Repast and Mason are very common tools. And they are based on Java or C++. You can also use an ABM Python module.

In the folder for this session there are some papers that give a good introduction to agent-based modelling… If we think about agent-based modelling and network theory there are some books I would recommend: Namatame & Chen: Agent-Based Modelling and Network Dynamics. For ABM look at Miller & Page; Gilbert & Troitzsch; Epstein. For network theory – look at Jackson, Watts (& Strogatz), Barabási.

So, three things:

Simplify! – You don’t need millions of agents. A simple model can be more powerful than a realistic one

Iterate! – Start simple and, as needed, build up complexity, add more features, but only if necessary.

Validate? – You can build models in a speculative way to guide research, to inform data collection… You don’t always have to validate that model as it may be a tool for your thinking. But validation is important if you want to be able to replicate and ensure relevance in the real world.

We started talking about data collection, analysis, and how we build theory based on the data we collect. After lunch we will continue with Carolin, Anne and Fernando on Tracking the Trackers. At the end of the day we’ll have a full panel Q&A for any questions.

And we are back after lunch and a little exposure to the Berlin rain!

Tracking the Trackers (Anne Helmond, Carolin Gerlitz, Esther Weltevrede and Fernando van der Vlist)

Carolin: Having talked about tracking users and behaviours this morning, we are going to talk about studying the media themselves, and of tracking the trackers across these platforms. So what are we tracking? Berry (2011) says:

“For every explicit action of a user, there are probably 100+ implicit data points from usage; whether that is a page visit, a scroll etc.”

Whenever a user makes an action on the web, a series of tracking features are enabled, things like cookies, widgets, advertising trackers, analytics, beacons etc. Cookies are small pieces of text placed on the user's computer indicating that they have visited a site before. These are 1st party trackers and can be accessed by the platforms and webmasters. There are now many third party trackers such as Facebook, Twitter, Google, and many websites now place third party cookies on the devices of users. And there are widgets that enable this functionality with third party trackers – e.g. Disqus.

So we have first party tracker files – text files that remember, e.g. what you put in a shopping cart; third party tracker files used by marketers and data-gathering companies to track your actions across the web; you have beacons; and you have flash cookies.

The purpose of tracking varies, from functionality that is useful (e.g. the shopping basket example) to the increasingly prevalent use in profiling users and behaviours. The increasing use of trackers has resulted in them becoming more visible. There is lots of research looking at the prevalence of tracking across the web, from the Continuum project and the Guardian's Tracking the Trackers project. One of the most famous plugins that allows you to see the trackers in your own browser is Ghostery – a browser plugin that you can install and that immediately detects different kinds of trackers, widgets, cookies, and analytics tracking on the sites that you browse to… It shows these in a pop up. It allows you to see the trackers and to block trackers, or selectively block trackers. You may want to selectively block trackers because whole parts of websites can disappear when you switch off trackers.

Ghostery detects via tracker library/code snippets (regular expressions). It currently detects around 2295 trackers – across many different varieties. The tool is not uncontroversial. It started as an NGO but was bought by analytics company Evidon in 2010, using the data for marketing and advertising.

So, we thought that if we, as researchers, want to look at trackers and there are existing tools, let's repurpose existing tools. So we did that, creating a Tracker Tracker tool based on Ghostery. It takes up a logic of Digital Methods, working with lists of websites. So the Tracker Tracker tool has been created by the Digital Methods Initiative (2012). It allows us to detect which trackers are present on lists of websites and create a network view. And we are "repurposing analytical capabilities". So, what sort of project can we use this with?
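
[Aside from me: at its core the detection step is regular-expression matching of known tracker signatures against page source. The sketch below shows that logic in Python with three illustrative patterns – it is not Ghostery's actual signature library, nor the Tracker Tracker code.]

```python
import re
import requests

SIGNATURES = {
    "Google Analytics": r"google-analytics\.com/(ga|analytics)\.js",
    "Facebook Connect": r"connect\.facebook\.net",
    "DoubleClick": r"doubleclick\.net",
}

def detect_trackers(url):
    """Fetch a page and report which signature patterns appear in its source."""
    html = requests.get(url, timeout=10).text
    return [name for name, pattern in SIGNATURES.items() if re.search(pattern, html)]

for site in ["https://example.com", "https://example.org"]:   # your list of sites goes here
    print(site, detect_trackers(site))
```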

One of our first projects was on the Like Economy. Our starting point was the fact that social media widgets place cookies (Gerlitz and Helmond 2013), and the question of where they are present. These cookies track both platform users and website users. We wanted to see how pervasive these cookies were on the web, and on the most used sites on the web.

We started by using Alexa to identify a collection of the 1000 most-visited websites. We inputted it into the Tracker Tracker tool (it's only one button so options are limited!). Then we visualised the results with Gephi. And what did we get? Well, in 2012 only 18% of top websites had Facebook trackers – if we did it again today it would probably be different. This data may be connected to personal user profiles – when a user has been previously logged in and has a profile – but it is also being collected for non-users of Facebook: they create anonymous profiles, and if those users subsequently join Facebook that tracking data can be fed into their account/profile.

Since we did this work we have used this method on other projects. Now I’ll hand over to Anne to do a methods walkthrough.

Anne: Now you’ve had a sense of the method I’m going to do a dangerous walkthrough thing… And then we’ll look at some other projects here.

So, a quick methodological summary:

  1. Research question: type of tracker and sites
  2. Website (URL) collection making: existing expert list.
  3. Input list for Tracker Tracker
  4. Run Tracker Tracker
  5. Analyse in Gephi

So we always start with a research question… Perhaps we start with websites we wouldn't want to find trackers on – where privacy issues are heightened, e.g. children's websites, porn websites, etc. So, homework here – work through some research question ideas.

Today we’ll walk through what we will call “adult sites”. So, we will go to Alexa – which is great for locating top sites in categories, in specific countries, etc. We take that list, we put it into Tracker Tracker – choosing whether or not to look at the first level of subpages – and press the button. The tool looks at the Ghostery database, which now scans those websites for the possible 2600 trackers that may exist.

Carolin: Maybe some of you are wondering if it's ok to do this with Ghostery? Well, yes, we developed Tracker Tracker in collaboration with Ghostery when it was an NGO, with one of their developers visiting us in Amsterdam. One other note here: if you use Ghostery on your machine, you may see different trackers to your neighbour. Trackers vary by machine, by location, by context. That's something we have to take into account when requesting data. So for news websites you may, for instance, have more and more trackers generated the longer the site is open – this tool only captures a short window of time so may not gather all of the trackers.

Anne: Also in Europe you may encounter so-called cookie walls. You have to press OK to accept cookies… And the tool can't emulate the user experience of clicking beyond the cookie wall… So zero trackers may indicate that issue, rather than no trackers.

Q: Is it server side or client side?

A: It is server side.

Q: And do you cache the tracker data?

A: Once you run the tool you can save the CSV and Gephi files, but we don’t otherwise cache.

Anne: Ghostery updates very frequently, so it is most useful to always check against the most up-to-date list of trackers.

So, once we’ve run the Tracker Tracker tool you get outputs that can be used in a variety of flexible formats. We will download the “exhaustive” CSV – which has all of the data we’ve found here.

If I open that CSV (in Excel) we can see the site, the scheme, the pattern that was used to find the tracker, the name of the tracker… This is very detailed information. So for these adult sites we see things like Google Analytics, the Porn Ad network, Facebook Connect. So, already, there is analysis you could do with this data. But you could also do further analysis using Gephi.
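
[Aside from me: before (or instead of) Gephi, the exhaustive CSV also lends itself to quick tabular analysis, e.g. with pandas. The column names below ("site", "tracker_name") are placeholders – check the header row of your own export and adjust.]

```python
import pandas as pd

df = pd.read_csv("tracker_tracker_exhaustive.csv")   # the "exhaustive" export

# Which trackers appear on the most sites?
top_trackers = (df.groupby("tracker_name")["site"]
                  .nunique()
                  .sort_values(ascending=False)
                  .head(10))
print(top_trackers)

# How many distinct trackers does each site carry?
trackers_per_site = df.groupby("site")["tracker_name"].nunique().sort_values(ascending=False)
print(trackers_per_site.head(10))
```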

Now, we have steps of this procedure in the tutorial that goes with today’s session. So here we’ve coloured the sites in grey, and we’ve highlighted the trackers in different colours. The purple lines/nodes are advertising trackers for instance.

If you want to recreate this tracker map at home, you have all the steps here. And doing this work we've found trackers we'd never seen before – for instance the porn industry ad network DoublePimp (a play on DoubleClick) – and we've been able to see regional and geographic differences between trackers, which of course has interesting implications.

So, some more examples… We have taken this approach looking at Jihadi websites, working with e.g. governments to identify the trackers. And found that they are financially dependent on advertising, including SkimLinks, DoubleClick, Google AdSense.

Carolin: And in almost all networks we encounter DoubleClick, AdSense, etc. And it's important to know that webmasters enable these trackers, they have picked these services. But there is an issue of who selects you as a client – something journalists collaborating on this work raised with Google.

Anne: The other usage of these trackers has been in historical tracking analysis using the Internet Archive. This enables you to see a website in the context of a techno-commercial configuration, and to analyse it in that context. So for instance looking at New York Times trackers and the website as an ecosystem embedded in the wider context – in this case trackers decreased over time, but that was commercial concentration, companies buying each other and therefore reducing the range of trackers.

Carolin: We did some work called the Trackers Guide. We wanted to look not only at trackers, but also at Content Delivery Networks, to visualise on a website how websites are not single items, but collections of data with inflows and outflows. The result became part artwork, part biological field guide. We imagined content and trackers as little biological cell-like clumps on the site, creating a whole booklet of this guide. So the image here shows the content from other spaces, content flowing in and connected…

Anne: We were also interested in what kind of data is being collected by these trackers. And also who owns these trackers. And also the countries these trackers are located in. So, we used this method with Ghostery. And then we dug further into those trackers. For Ghostery you can click on a tracker and see what kind of data it collects. We then looked at the privacy policies of trackers to see what they claim to collect… And then we manually looked up ownership – and nationality – of the trackers to understand rules, regulations, etc. – and to see where your data actually ends up.

Carolin: Working with Ghostery, and repurposing their technology, was helpful but their database is not complete. And it is biased to the English-speaking world – so it is particularly lacking in Chinese contexts for instance. So there are limits here. It is not always clear what data is actually being collected. BUT this work allows us to study invisible participation in data flows – which cannot be found in other ways; to study media concentration and the emergence of specific tracking ecologies. And in doing so it allows us to imagine alternative spatialities of the web – tracker origins and national ecologies. And it provides insights into the invisible infrastructures of the web.

Slides for this presentation: http://www.slideshare.net/cgrltz/aoir-2016-digital-methods-workshop-tracking-the-trackers-66765013

Multiplatform Issue Mapping (Jean Burgess & Ariadna Matamoros Fernandez)

Jean: I'm Jean Burgess and I'm Professor of Digital Media and Director of the DMRC at QUT. Ariadna is one of our excellent PhD students at QUT but she was previously at DMI so she's a bridge to both organisations. And I wanted to say how lovely it is to have the DMRC and DMI connected like this today.

So we are going to talk about issue mapping, and the idea of using issue mapping to teach digital research methods, particularly with people who may not be interested in social media outside of their specific research area. And about issue mapping as an approach that sits outside the "influencers" narrative that is dominant in the marketing side of social media.

We are in the room with people who have been working in this space for a long time but I just want to raise that we are making connections to ANT (actor-network theory) and cultural and social studies. So, a few ontological things… Our approach combines digital methods and controversy analysis. We understand controversies to be discrete, acute, often temporary sites of intersectionality, bringing together different issues in new combinations. And drawing on Latour, Callon etc. we see controversies as generative. They can reveal the dynamics of issues, bring them together in new combinations, transform them and move them forward. And we undertake network and content analysis to understand relations among stakeholders, arguments and objects.

There are both very practical applications and more critical-reflexive possibilities of issue mapping. And we bring our own media studies viewpoint to that, with an interest in the vernacular of the space.

So, issue mapping with social media frequently starts with topical Twitter hashtags/hashtag communities. We then have iterative "issue inventories" – actors, hashtags, media objects from one dataset used as seeds in their own right. We then undertake some hybrid network/thematic analysis – e.g. associations among hashtags; thematic network clusters. And we inevitably meet the issue of multi-platform/cross-platform engagement. And we'll talk more about that.

One project we undertook, on #agchatoz – a community in Australia around weekly Twitter chats, but connected to a global community – explored the hashtag as a hybrid community. So here we looked at, for instance, the network of followers/followees in this network. And within that we were able to identify clusters of actors (across: Left-leaning Twitterati (30%); Australian ag, farmers (29%); Media orgs, politicians (13%); International ag, farmers (12%); Foodies (10%); Right-wing Australian politics and others), and this reveals some unexpected alliances or crossovers – e.g. between animal rights campaigners and dairy farmers. That suggests opportunities to bridge communities, to raise challenges, etc.

We have linked, in the files for this session, to various papers. One of these, Burgess and Matamoros-Fernandez (2016), looks at Gamergate and I'm going to show a visualisation of the YouTube video network (Rieder 2015; Gephi), which shows videos mentioned in tweets around that controversy, showing those that were closely related to each other.

Ariadna: My PhD is looking at another controversy, this one concerned with Adam Goodes, an Australian Rules footballer who was a high profile player until he retired last year. He has been a high profile campaigner against racism, and has called out racism on the field. He has been criticised for that by one part of society. And in 2014 he performed an indigenous war dance on the pitch, which again received booing from the crowd and backlash. So, I start with Twitter, follow the links, and then move to those linked platforms and onwards…

Now I'm focusing on visual material, because the controversy was visual, it was about a gesture. So visual content (images, videos, gifs) acts as a mediator of race and racism on social media. I have identified key media objects through qualitative analysis – important gestures, different image genres. And the next step has been to reflect on the differences between platform traces – YouTube related videos, the Facebook like network, Twitter filters, notice-and-take-down automatic messages. That gives a sense of the community, the discourse, the context, exploring their specificities and how they contribute to the cultural dynamics of race and racism online.

Jean: And if you want to learn more, there’s a paper later this week!

So, we usually do training on this at DMRC #CCISS16 Workshops. We usually ask participants to think about YouTube and related videos – as a way to encourage people to think about networks other than social networks, and also to get to grips with Gephi.

Ariadna: Usually we split people into small groups and actually it is difficult to identify a current controversy that is visible and active in digital media – we look at YouTube and Tumblr (Twitter really requires prior collection of data). So, we go to YouTube to look for a key term, and we can then filter and find results changing… Usually you don’t reflect that much. So, if you look at “Black Lives Matter”, you get a range of content… And we ask participants to pick out relevant results – and what is relevant will depend on the research question you are asking. That first choice of what to select is important. Once this is done we get participants to use the YouTube Data Tools: https://tools.digitalmethods.net/netvizz/youtube/. This tool enables you to explore the network… You can use a video as a “seed”, or you can use a crawler that finds related videos… And that can be interesting… So if you see an Anti-Islamic video, does YouTube recommend more, or other videos related in other ways?

That seed leads you to related videos, and, depending on the depth you are interested in, videos related to the related videos… You can make selections of what to crawl, what the relevance should be. The crawler runs and outputs a Gephi file. So, this is an undirected network. Here nodes are videos, edges are relationships between videos. We generally use the layout: Force Atlas 2. And we run the Modularity Report to colour code the relationships on thematic or similar basis. Gephi can be confusing at first, but you can configure and use options to explore and better understand your network. You can look at the Data Table – and begin to understand the reasons for connection…
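
[Aside from me: if you prefer scripting to Gephi's interface, the same .gexf file the crawler produces can be explored programmatically – a hedged sketch using networkx, with the filename standing in for whatever YouTube Data Tools gave you. The greedy modularity routine is a rough stand-in for Gephi's Modularity report, not the identical algorithm.]

```python
import networkx as nx
from networkx.algorithms import community

G = nx.read_gexf("youtube_video_network.gexf")   # nodes are videos, edges are relations

print(G.number_of_nodes(), "videos,", G.number_of_edges(), "relations")

# Detect clusters of closely related videos
clusters = community.greedy_modularity_communities(nx.Graph(G))
for i, cluster in enumerate(sorted(clusters, key=len, reverse=True)[:5]):
    print(f"cluster {i}: {len(cluster)} videos")
```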

So, I have done this for Adam Goodes videos, to understand the clusters and connections.

So, we have looked at YouTube. Normally we move to Tumblr. But sometimes a controversy does not resonate on a different social media platform… So maybe a controversy on Twitter doesn't translate to Facebook; or one on YouTube doesn't resonate on Tumblr… Or keywords will vary greatly. It can be a good way to start to understand the cultures of the platforms. And the role of main actors etc. in the response on a given platform.

With Tumblr we start with the interface – e.g. looking at BlackLivesMatter. We look at the interface, functionality, etc. And then, again, we have a tool that can be used: https://tools.digitalmethods.net/netvizz/tumblr/. We usually encourage use of the same timeline across Tumblr and YouTube so that they can be compared.

So we can again go to Gephi, visualise the network. And in this case the nodes and edges can look different. So in this example we see 20 posts that connect 141 nodes, reflecting the particular reposting nature of that space.

Jean: The very specific cultural nature of the different online spaces can make for very interesting stuff when looking at controversies. And those are really useful starting points into further exploration.

And finally, a reminder, we run our summer schools in DMRC in February. When it is summer! And sunny! Apply now at: http://dmrcss.org/!

Analysing and visualising geospatial data (Peta Mitchell)

Normally when I would do this as a workshop I'd give some theoretical and historical background on the emergence of geospatial data, and then move onto the practical workshop on Carto (formerly CartoDB). Today though I'm going to talk about a case study, around the G20 meeting in Brisbane, and then talk about using Carto to create a social media map.

My own background is in a field increasingly known as the geohumanities or the spatial humanities. And I did a close reading project of novels and films to create a Cultural Atlas of Australia, looking at how locations relate to narrative. For instance almost all Australian films are made in South Australia, regardless of where they are set, so this was mapping patterns of representation. We also created CultureMap – an app that went with the atlas to alert you to literary or filmic places nearby.

I'll come back to that G20 stuff. I now work on rapid spatial analytics; participatory geovisualisation and crowdsourced data; VGI – Volunteered Geographic Information; placemaking etc. But today I'll be talking about emerging forms of spatial information/geodata, neogeographical tools etc.

So Gordon and de Souza e Silva (2011) talk about us witnessing the increasing proliferation of geospatial data. And this is sitting alongside a geospatial revolution – GPS-enabled devices, geospatial data permeating social media, etc. So GPS-enabled consumer devices emerged in the late '90s/early '00s with early social friend-finder functions. But the geospatial web really begins around 2000, the beginning of the end of the idea of the web as a "placeless space". To an extent this came from a legal case brought by a French individual against Yahoo!, who were allowing Nazi memorabilia to be sold. That was illegal in France, and Yahoo! claimed that the internet is global and that filtering by location wasn't possible. A French judge found in favour of the individual, Yahoo! were told it was both doable and easy, and Yahoo! went on to financially benefit from IP-based location information. As Richard Rogers put it, that case was the "revenge of geography against the idea of cyberspace".

Then in 2005 Google Maps was described by Jon Udell as a platform with the potential to be a "service factory for the geospatial web". So in 2005 the "geospatial web" really is there as a term. By 2006 the concept of "Neogeography" was defined by Andrew Turner to describe the kind of non-professional, user-orientated, web 2.0-enabled mapping. There are critiques in cultural geography, and in the geospatial literature, about this term and the use of the "neo" part of it. But there are multiple applications here, from humanities to humanitarianism; from cultural mapping to crisis mapping. An example here is Ushahidi maps, where individuals can send in data and contribute to mapping of crisis. Now Ushahidi is more of a platform for crisis mapping, and other tools have emerged.

So there are lots of visualisation tools and platforms. There are traditional desktop GIS – ArcGIS, QGIS. There is basic web-mapping (e.g. Google Maps); Online services (E.g. CARTO, Mapbox); Custom map design applications (e.g. MapMill); and there are many more…

Spatial data is not new, but there is a growth in ambient and algorithmic spatial data. So for instance ABC (TV channel in Australia) did some investigation, inviting audiences to find out as much as they could based on their reporter Will Ockenden’s metadata. So, his phone records, for instance, revealed locations, a sensitive data point. And geospatial data is growing too.

We now have a geospatial sub stratum underpinning all social media networks. So this includes check-in/recommendation platforms: Foursquare, Swarm, Gowalla (now defunct), Yelp; Meetup/hookup apps: Tinder, Grindr, Meetup; YikYak; Facebook; Twitter; Instagram; and Geospatial Gaming: Ingress; Pokemon Go (from which Google has been harvesting improvements for its pedestrian routes).

Geospatial media data is generated from sources ranging from VGI (volunteered geographic information) to AGI (ambient geographic information), where users are not always aware that they are sharing data. That type of data doesn't feel like crowdsourced data or VGI, hence the potential challenges and ethical complexity of AGI.

So, the promises of geosocial analysis include a focus on real-time dynamics – people working with geospatial data aren’t used to this… And we also see social media as a “sensor network” for crisis events. There is also potential to provide new insights into spatio-temporal spread of ideas and actions; human mobilities and human behaviours.

People do often start with Twitter – because it is easier to gather data from it – but only between 1% and 3% of tweets are located. But when we work at festivals we see around 10% carrying location data – partly the nature of the event, partly because tweets are often coming through Instagram… On Instagram we see between 20% and 30% of images georeferenced, but based on upload location, not where the image was taken.

There is also the challenge of geospatial granularity. On a tweet with Lat Long, that’s fairly clear. When we have a post tagged with a place we essentially have a polygon. And then when you geoparse, what is the granularity – street, city? Then there are issues of privacy and the extent to which people are happy to share that data.

So, in 2014 Brisbane hosted the G20, at a cost of $140 AUD for one highly disruptive weekend. In preceding G20 meetings there had been large scale protests. At the time the premier was former military and he put the whole central business district in lockdown, designated a "declared area" – under new laws made for this event. And hotels for G20 world leaders were inside the zone. So, Twitter mapping is usually done during crisis events – but you don't know where those will happen, where to track them, etc. In this case we knew in advance where to look. So, a Safety and Security Act (2013) was put in place for this event, requiring prior approval for protests; allowing arrests for the duration of the event; on-the-spot strip searches; the banning of eggs in the Central Business District, no manure, no kayaks or flotation devices, no remote control cars or reptiles!

So we had these fears of violent protests, given all of these draconian measures. We had elevated terror levels. And we had war threatened after Abbott said he would "shirtfront" Vladimir Putin over MH17. But all that concern made city leaders worried that the city might be a ghost town, when they wanted it marketed as a new world city. They were offering free parking etc. to incentivise people to come in. And tweets reinforced the ghost town trope. So, what geosocial mapping enabled was a close-to-realtime sensor network of what might be happening during the G20.

So, the map we did was the first close-to-real-time social media map that was public facing, using CartoDB, and it was never more than an hour behind reality. We had few false matches. But we had clear locations and clear keywords – e.g. G20 – to focus on. A very few "the meeting will now be held in G20" but otherwise no false matches. We tracked the data through the meeting… Which ran over a weekend and bank holiday. This map parses around 17,000(?) tweets, most of which were not geotagged but geoparsed. Only 10% represent where someone was when they tweeted; the remaining 90% are locations mentioned in posts, identified by geoparsing the tweets.

Now, even though that declared area isn't huge, there are over 300 streets there. I had to build a manually constructed gazetteer, using Open Street Map (OSM) data, and then adding new data. Picking a bounding box that included that area generated a whole range of features – but I wasn't that excited about fountains, benches etc. I was looking for features people might actually mention in their tweets. So, I had a bounding box, and the declared area before… It would have been ideal if the G20 had given me their bounding polygon but we didn't especially want to draw attention to what we were doing.

So, at the end we had lat, long, amenity (using OSM terms), name (e.g. Obama was at the Marriott so tweets about that), associated search terms – including local/vernacular versions of names of amenities; status (declared or restricted); and confidence (of location/coordinates – a score of 1 for geospatially tagged tweets, 0.8 for buildings, etc.). We could also create category maps of different data sets. On our map we showed geotagged and parsed tweets inside the area, but we only used geotweets outside the declared area. One of my colleagues created a Python script to "read" and parse tweets, and that generated a CSV. That CSV could then be fed into CartoDB. CartoDB has a time dimension, could update directly every half hour, and could use a Dropbox source to do that.
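
[Aside from me: the actual parsing script isn't reproduced in the talk, but the geoparsing step described here could look something like the Python sketch below – a tiny invented gazetteer in the spirit of the OSM-derived one, with native geotags scored 1.0 and gazetteer matches scored lower, written out as a CSV for CartoDB.]

```python
import csv

GAZETTEER = [  # invented example entries, not the real G20 gazetteer
    {"name": "Marriott", "terms": ["marriott"], "lat": -27.465, "lon": 153.031, "confidence": 0.8},
    {"name": "South Bank", "terms": ["south bank", "southbank"], "lat": -27.475, "lon": 153.020, "confidence": 0.8},
]

def geoparse(tweet):
    """Return (lat, lon, confidence): a native geotag wins, else a gazetteer match."""
    if tweet.get("geo"):
        return tweet["geo"]["lat"], tweet["geo"]["lon"], 1.0
    text = tweet["text"].lower()
    for place in GAZETTEER:
        if any(term in text for term in place["terms"]):
            return place["lat"], place["lon"], place["confidence"]
    return None

with open("parsed_tweets.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "lat", "lon", "confidence"])
    for tweet in [{"text": "Obama spotted at the Marriott #G20", "geo": None}]:  # sample input
        match = geoparse(tweet)
        if match:
            writer.writerow([tweet["text"], *match])
```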

So, did we see much disruption? Well no… More celebrity spotting – the two most tweeted images were Obama with a koala and Putin with a koala. It was very hot and very secured so little disruption happened. We did see selfies with Angela Merkel, images of the phallic motorcade. And after the G20 there was a complaint filed with the corruption watchdog about the chilling effect of security on participation, particularly in environmental protests. There was still engagement on social media, but not in-person. Disruption, protest, criticism were replaced by spectacle and distant viewing of the event.

And, with that, we turn to an 11-person panel session to answer questions, wrap up, etc.

Panel Session

Q1) Each of you presented different tools and approaches… Can you comment on how they are connected and how we can take advantage of that?

A1 – Jean) Implicitly or explicitly we’ve talked about possibilities of combining tools together in bigger projects. And tools that Peta and I have been working on are based on DMI tools for instance… It’s sharing tools, shared fundamental techniques for analytics for e.g. a Twitter dataset…

A1 – Richard) We’ve never done this sort of thing together… The fact that so much has been shared has been remarkable. We share quite similar outlooks on digital methods, and also on “to what end” – largely for the study of social issues and mapping social issues. But also other social research opportunities available when looking at a variety of online data, including geodata. It’s online web data analysis using digital methods for issue mapping and also other forms of social research.

A1 – Carolin) All of these projects are using data that hasn't been generated by research, but which has been created for other purposes… And that's pushing the analysis in its own way… And the tools that we combine bring in levels, encryptions… Digital methods use these, but there is also a need to step back and reflect – something present in all of the presentations.

Q2) A question especially for Carolin and Anne: what do you think about the study of proprietary algorithms? You talked a bit about the limitations of proprietary algorithms – for mobile applications etc.? I'm having trouble doing that…

A2 – Anne) I think in the case of the tracker tool, it doesn't try to engage with the algorithm, it looks at the presence of trackers. But here we have encountered proprietary issues… So for Ghostery, if you download the Firefox plugin you can access the content. We took the library of trackers from that to use as a database, we took that apart. We did talk to Ghostery, to make them aware… The question of algorithms… Of how you get into the blackboxed things… We are developing methods to do this… One way in is to look at the outputs, and compare those. Also Christian Sandvig is doing the auditing algorithms work.

A2 – Carolin) There was just a discussion on Twitter about the currency of algorithms and research on them… We've tried to ride on them, to implement that… Otherwise it is difficult. One element was on studying mobile applications. We are giving a presentation on this on Friday. It is a similar approach, using infrastructures of app distribution and description etc. to look into this… Using existing infrastructures in which apps are built or encountered…

A2 – Anne) We can’t screenscrape and we are moving to this more closed world.

A2 – Richard) One of the best ways to understand algorithms is to save the outputs – e.g. we’ve been saving Google search outputs for years. Trying to save newsfeeds on Facebook, or other sorts of web apps can be quite difficult… You can use the API but you don’t necessarily get what the user has seen. The interface outputs are very different from developer outputs. So people think about recording rather than saving data – an older method in a way… But then you have the problem of only capturing a small sample of data – like analysing TV News. The new digital methods can mean resorting to older media methods… Data outputs aren’t as friendly or obtainable…

A2 – Carolin) So one strand is accessing algorithms via transparency; you can also think of them as situated and in context, seeing them in operation and in action in relation to the data, associated with outputs. I'd recommend Solon Barocas on the impact of big data, which sits in legal studies.

A2 – Jean) One of the ways we approach this is the "App Walkthrough", a method Ben Light and I have worked on and will shortly publish in New Media & Society; it is to think about those older media approaches, with user studies part of that…

Q3) What is your position as researchers on opening up data on the one hand, and handling data ethically on the other? Do you take a stance, even a public stance, on these issues?

A3 – Anne) For many of these tools – like the YouTube tool and the Facebook tools – our developer took the conscious decision to anonymise that data.

A3 – Jean) I do have public positions. I've published on the political economy of Twitter… One interesting thing is that privacy discourses were used by Twitter to shut down TwapperKeeper at a time it was seeking to monetise… But you can't just publish an archive of tweets with usernames, I don't think anyone would find that acceptable…

A3 – Richard) I think it is important to respect or understand contextual privacy. People posting, on Twitter say, don't necessarily expect their posts to be used for commercial or research purposes. Awareness of that is important for a researcher, no matter what terms of service the user has signed/consented to, or even if you have paid for that data. You should be aware and concerned about contextual privacy… Which leads to a number of different steps. And that's why, for instance, in Netvizz – the Facebook tool – usernames are not available for comments made, even though FacePager does show that. Tools vary in that understanding. Those issues need to be thought about, but not necessarily uniformly thought about by our field.

A3 – Carolin) But that becomes more difficult in spaces that require you to take part in order to research them – WhatsApp, for instance – where researchers start pretending to be regular users… to generate insights.

Comment (me): on native vs web apps and approaches and potential for applying Ghostery/Tracker Tracker methods to web apps which are essentially pointing to URLs.

Q4) Given that we are beholden to commercial companies, changes to algorithms, APIs etc, and you’ve all spoken about that to an extent, how do you feel about commercial limitations?

A4 – Richard) Part of my idea of digital methods is to deal with ephemerality… And my ideal to follow the medium… Rather than to follow good data prescripts… If you follow that methodology, then you won’t be able to use web data or social media data… Unless you either work with the corporation or corporate data scientist – many issues there of course. We did work with Yahoo! on political insights… categorising search queries around a US election, which was hard to do from outside. But the point is that even on the inside, you don’t have all the insight or the full access to all the data… The question arises of what can we still do… What web data work can we still do… We constantly ask ourselves, I think digital methods is in part an answer to that, otherwise we wouldn’t be able to do any of that.

A4 – Jean) All research has limitations, and describing those is part of the role here… But also when Axel and I started doing this work we got criticism for not having a "representative sample"… And we have people from across the humanities and social sciences who seem to be using the same approaches and techniques but actually we are doing really different things…

Q5) Digital methods in the social sciences look different from anthropology, where this is a classical "informant" problem… Digital ethnography is established and understood there in a way that it isn't in the social sciences…

Resources from this workshop:

Aug 092016
 
Notes from the Unleashing Data session at Repository Fringe 2016

After 6 years of being Repository Fringe‘s resident live blogger this was the first year that I haven’t been part of the organisation or amplification in any official capacity. From what I’ve seen though my colleagues from EDINA, University of Edinburgh Library, and the DCC did an awesome job of putting together a really interesting programme for the 2016 edition of RepoFringe, attracting a big and diverse audience.

Whilst I was mainly participating through reading the tweets to #rfringe16, I couldn’t quite keep away!

Pauline Ward at Repository Fringe 2016

This year's chair, Pauline Ward, asked me to be part of the Unleashing Data session on Tuesday 2nd August. The session was a "World Cafe" format and I was asked to help facilitate discussion around the question: "How can the repository community use crowd-sourcing (e.g. Citizen Science) to engage the public in reuse of data?" – so I was along wearing my COBWEB: Citizen Observatory Web and social media hats. My session also benefited from what I gather was an excellent talk on "The Social Life of Data" earlier in the event from Erinma Ochu (who, although I missed her this time, is always involved in really interesting projects including several fab citizen science initiatives).

I won’t attempt to reflect on all of the discussions during the Unleashing Data Session here – I know that Pauline will be reporting back from the session to Repository Fringe 2016 participants shortly – but I thought I would share a few pictures of our notes, capturing some of the ideas and discussions that came out of the various groups visiting this question throughout the session. Click the image to view a larger version. Questions or clarifications are welcome – just leave me a comment here on the blog.

Notes from the Unleashing Data session at Repository Fringe 2016

If you are interested in finding out more about crowd sourcing and citizen science in general then there are a couple of resources that may be helpful (plus many more resources and articles if you leave a comment/drop me an email with your particular interests).

This June I chaired the “Crowd-Sourcing Data and Citizen Science” breakout session for the Flooding and Coastal Erosion Risk Management Network (FCERM.NET) Annual Assembly in Newcastle. The short slide set created for that workshop gives a brief overview of some of the challenges and considerations in setting up and running citizen science projects:

Last October the CSCS Network interviewed me on developing and running Citizen Science projects for their website – the interview brings together some general thoughts as well as specific comment on the COBWEB experience:

After the Unleashing Data session I was also able to stick around for Stuart Lewis’ closing keynote. Stuart has been working at Edinburgh University since 2012 but is moving on soon to the National Library of Scotland so this was a lovely chance to get some of his reflections and predictions as he prepares to make that move. And to include quite a lot of fun references to The Secret Diary of Adrian Mole aged 13 ¾. (Before his talk Stuart had also snuck some boxes of sweets under some of the tables around the room – a popularity tactic I’m noting for future talks!)

So, my liveblog notes from Stuart’s talk (slightly tidied up but corrections are, of course, welcomed) follow. Because old Repofringe live blogging habits are hard to kick!

The Secret Diary of a Repository aged 13 ¾ – Stuart Lewis

I'm going to talk about our bread and butter – the institutional repository… Now my inspiration is Adrian Mole… Why? Well we have a bunch of teenage repositories… EPrints is 15½; Fedora is 13½; DSpace is 13¾.

Now Adrian Mole is a teenager – you can read about him on Wikipedia [note to fellow Wikipedia contributors: this, and most of the other Adrian Mole-related pages could use some major work!]. You see him quoted in two conferences to my amazement! And there are also some Scotland and Edinburgh entries in there too… Brought a haggis… Goes to Glasgow at 11am… and says he encounters 27 drunks in one hour…

Stuart Lewis at Repository Fringe 2016

Stuart Lewis illustrates the teenage birth dates of three of the major repository softwares as captured in (perhaps less well-aged) pop hits of the day.

So, I have four points to make about how repositories are like/unlike teenagers…

The thing about teenagers… People complain about them… They can be expensive, they can be awkward, they aren't always self-aware… Eventually though they usually become useful members of society. So, is that true of repositories? Well ERA, one of our repositories, has gotten bigger and bigger – over 18k items… and over 10k paper theses currently being digitised…

Now teenagers also start to look around… Pandora!

I’m going to call Pandora the CRIS… And we’ve all kind of overlooked their commercial background because we are in love with them…!

Stuart Lewis at Repository Fringe 2016

Stuart Lewis captures the eternal optimism – both around Mole’s love of Pandora, and our love of the (commercial) CRIS.

Now, we have PURE at Edinburgh which also powers Edinburgh Research Explorer. When you looked at repositories a few years ago, it was a bit like Freshers Week… The three questions were: where are you from; what repository platform do you use; how many items do you have? But that's moved on. We now have around 80% of our outputs in the repository within the REF compliance window (3 months of acceptance)… And that's a huge change – volumes of materials are open access very promptly.

So,

1. We need to celebrate our success

But are our successes as positive as they could be?

Repositories continue to develop. We've heard good things about new developments. But how do repositories demonstrate value – and how do we compare to other areas of librarianship?

Other library domains use different numbers. We can use these to give comparative figures. How do we compare to publishers for cost? What's our CPU (Cost Per Use)? And what is a good CPU? £10, £5, £0.46… But how easy is it to calculate – are repositories expensive? That's a "to do" – to take the cost to run and divide it by IRUS usage. I would expect it to be lower than publishers, but I'd like to do that calculation.
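
[Aside from me: as a purely hypothetical illustration of that calculation – the figures are invented, not Edinburgh's – a repository costing £60,000 a year to run and recording 600,000 IRUS-UK downloads over the same period would have a CPU of £0.10, while the same cost against 60,000 downloads would give £1.00. That is the kind of number you could then set against publisher cost-per-download figures.]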

The other side of this is to become more self-aware… Can we gather new numbers? We only tend to look at deposit and use from our own repositories… What about our own local consumption of OA (the reverse)?

Working within new e-resource infrastructure – http://doai.io/ – lets us see where open versions are available. And we can integrate with OpenURL resolvers to see how much of our usage can be fulfilled.

2. Our repositories must continue to grow up

Do we have double standards?

Hopefully you are all aware of the UK Text and Data Mining copyright exception that came into force on 1st June 2014. We have massive, massive access to electronic resources as universities, and can text and data mine those.

Some do a good job here – Gale Cengage Historic British Newspapers: additional payment to buy all the data (images + XML text) on hard drives for local use. Working with local informatics LTG staff to (geo)parse the data.

Some are not so good – basic APIs allow only simple searches… but not complex queries (e.g. you could use a search term, but not, say, sentiment).

And many publishers do nothing at all….

So we are working with publishers to encourage and highlight the potential.

But what about our content? Our repositories are open, with extracted full-text, and data can be harvested… Sufficient, but is it ideal? Why not enable bulk download in one click… You can – for example – download all of Wikipedia (if you want to). We should be able to do that with our repositories.
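
[Aside from me: the "sufficient but not ideal" harvesting route already exists via OAI-PMH – a minimal metadata-harvesting sketch, assuming the Sickle Python client and with a placeholder endpoint URL; bulk full-text download would still be a further step.]

```python
from sickle import Sickle

# Substitute your own repository's OAI-PMH base URL
sickle = Sickle("https://repository.example.ac.uk/oai/request")
records = sickle.ListRecords(metadataPrefix="oai_dc", ignore_deleted=True)

for i, record in enumerate(records):
    print(record.header.identifier)
    if i >= 9:          # just sample the first ten records
        break
```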

3. We need to get our house in order for Text and Data Mining

When will we be finished though? That depends on what we do with open access. What should we be doing with OA? Where do we want to get to? Right now we have mandates so it's easy – green and gold. With gold there is pure or hybrid… Mixed views on hybrid. You can also publish locally for free. Then for green there are local or disciplinary repositories… For gold – pure, hybrid, local – we pay APCs (some local options are free)… In hybrid we can do offsetting, discounted subscriptions, voucher schemes too. And for green we have the UK Scholarly Communications Licence (modelled on Harvard's)…

But which of these forms of OA are best?! Is choice always a great thing?

We still have outstanding OA issues. Is a mixed-modal approach OK, or should we choose a single route? Which one? What role will repositories play? What is the ultimate aim of Open Access? Is it “just” access?

How and where do we have these conversations? We need academics, repository managers, librarians, publishers to all come together to do this.

4. Do we know what a grown-up repository looks like? What part does it play?

Please remember to celebrate your repositories – we are in a fantastic place, making a real difference. But they need to continue to grow up. There is work to do with text and data mining… And we have more to do… To be a grown up, to be in the right sort of environment, etc.

Q&A

Q1) I can remember giving my first talk on repositories in 2010… When it comes to OA I think we need to think about what is cost effective, what is sustainable, why are we doing it and what’s the cost?

A1) I think in some ways that’s about what repositories are versus publishers… Right now we are essentially replicating them… And maybe that isn’t the way to approach this.

And with that Repository Fringe 2016 drew to a close. I am sure others will have already blogged their experiences and comments on the event. Do have a look at the Repository Fringe website and at #rfringe16 for more comments, shared blog posts, and resources from the sessions. 

Jul 122016
 

This week I am at the European Conference on Social Media 2016. I’m presenting later today, and have a poster tomorrow, but will also be liveblogging here. As usual the blog is live so there may be small errors or typos – all corrections and additions are very much welcomed!

We are starting with an introduction to EM Normandie, which has 4 campuses and 3000 students.

Introduction from Sue Nugus, ACPI, welcoming us to the event and the various important indexing etc.

Christine Bernadas, ECSM co-chair and from EM Normandie, is introducing our opening keynote, Ali Ouni, co-founder and CEO of Spectrum Groupe. [http://www.spectrumgroupe.fr/]

Keynote Address: Ali Ouni, Spectrum Groupe, France – Researchers in Social Media, Businesses Need You!!!

My talk today is about why businesses need social media. And that, although we have been using social media for the last 10-15 years, we still need some approaches and frameworks to make better use of it.

My own personal background is in Knowledge Management, with a PhD from the École Centrale Paris and Renault. I then moved to KAP IT as Head of Enterprise 2.0, helping companies to integrate new technologies, social media, into their businesses. I believe this is a hard question – the issue of how we integrate social media into our businesses. And then in 2011 I co-founded Spectrum Groupe, a consulting firm of 25 people who work closely with researchers to define new approaches to content management and knowledge management. Our approach is to design end-to-end approaches, from diagnostics, to strategy development, through to technologies, knowledge management, etc.

When Christine asked me to speak today I said “OK, but I am no longer a researcher”, I did that 12-15 years ago, I am now a practitioner. So I have insights but we need you to define the good research questions based on them.

I looked back at what has been said about social media in the last 10-15 years: "Organisations cannot afford not to be listening to what is being said about them or interacting with their customers in the space where they are spending their time and, increasingly, their money too" (Malcolm Alder, KPMG, 2011).

And I agree with that. This space has high potential for enterprises… So, lets start with two slides with some statistics. So, these statistics are from We Are Social’s work on digital trends. They find internet activity increasing by 10% every year; 10% growth in social media users; and growth of 4% in social media users accessing via mobile; which takes us to 17% of the total population actively engaging in social media on mobile.

So, in terms of organisations going to social media, it is clearly important. But it is also a confusing question. We can see that in 2010 70%+ of big international organisations were actively using social media, but of these 80% had not achieved the intended business benefits. So, businesses are expending time and energy on social media but they are not accruing all of the benefits that they have targeted.

So, for me social media are new ways of working, new business models, new opportunities, but also bringing new risks and challenges. And there are questions to be answered that we face every day in an organisational context.

The social media landscape today is very, very diverse, there is a high density… There are many platforms, sites, medias… Organisations are confused by this landscape and they require help to navigate this space. The choice they have is usually to go to the biggest social media in terms of total users – but is that a good strategy? They need to choose sites with good business value. There are some challenges when considering external sites versus internal sites – should they replicate functionality themselves? And where are the values and risks of integrating social media platforms with enterprise IT systems? For instance listening to social media and making connections back to CRMs (Customer Relationship Management systems).

What about using social media for communications? You can experiment, and learn from those experiments… But that makes more sense when these tools are new, and they are not any more. Is experimenting always the best approach? How can we move faster? Clients often ask if they can copy/adopt the digital strategies of their competitors, but I think generally not; these approaches have to be specific to the context and audience.

Social media has a fast evolution speed, so agility is required… Organisations can struggle with that in terms of their own speed of organisational change. A lot of agility is required to address new technologies, new use cases, new skills. And decisions are needed over skills, and whether to own the digital transformation process or to delegate it to others.

The issue of Return on Investment (ROI) is long standing but still important. Existing models do not work well with social media – we are in a new space, new technology, a new domain. There is a need to justify the value of these kinds of projects, but I think a good approach is to work on new social constructs, such as engagement, sentiment, retention, “ROR” – Return on Relationship, collective intelligence… But how does one measure these?

And organisations face challenges of governance… Understanding rules and policies of engagement on social media, on understanding issues of privacy and data protection. And thought around who can engage on social media.

So, I have presented some key challenges… Just a few. There are many more on culture, change, etc. that need to be addressed. I think that it is important that businesses and researchers work together on social media.

Q&A

Q1) Could you tell me something on Return on Relationships… ?

A1) This is a new approach. Sometimes the measure of Return on Investment is to measure every conversation and all time spent… ROR is about long term relationships with customers, partners, suppliers… and it is about having benefits after a longer period of time, rather than an immediate Return on Investment. So some examples include turning some customers into advocates – so they become your best salespeople. That isn't easy, but organisations are really very aware of these social constructs.

Q1) And how would you calculate that?

Comment) That is surely ROI still?

Comment) So, if I have a LinkedIn contact, and they buy my software, then that is a return on investment, and value from social capital… There is a time and quality gain too – you identify key contact and context here. Qualitative but eventually quantitative.

A1) There absolutely is a relationship between ROR and ROI.

Q2) It was interesting to hear your take on research. What you said reminded me of 20 years ago when we talked about "Quality Management" and there was a tension between whether that should be its own role, or part of everyone's role.

A2) Yes, so we have clients that do want “community management” and ask us to do that for them – but they are the experts in their own work and relationships. The quality of content is key, and they have that expertise. Our expertise is around how to use social media as part of that. The good approach is to think about new ways to work with customers, and to define with our consulting customers what they need to do that. We have a coaching role, helping them to design a good approach.

Q3) Thank you for your presentation. I would like to ask you if you could think of a competency framework for good community management, and how you would implement that.

A3) I couldn’t define that framework, but from what I see the key skills in community management are about expertise – people from the business who understand their own structure, needs and knowledge. I think that communication skills need to be good – writing skills, identifying good questions, an ability to spot and transform key questions. From our experience, knowing the enterprise, communication skills and coordinating skills are all key.

Q3) What about emotional engagement?

A3) I think emotional engagement is both good and dangerous. It is good to be invested in the role, but if people are too invested there is a clear line to draw between professional engagement and personal engagement. And that can make it dangerous.

Stream B – Mini Track on Empowering Women Through Social Media (Chair – Danilo Piaggesi)

Danilo: I proposed this mini track as I saw that the issues facing women in social media were different, but that women were self-organising and addressing these issues, so that is the genesis of this strand. My own background is in ICT in development and developing countries – which is why I am interested in this area of social media… The UN Sustainable Development Goals (SDG), which include ICT, have been defined as needing to apply to developing and developed countries. And there is a specific goal dedicated to Women and ICT, which has a deadline of 2030 to achieve this SDG.

Sexting & Intimate Relations Online: Identifying How People Construct Emotional Relationships Online & Intimacies Offline
Esme Spurling, Coventry University, West Midlands, UK

Sexting and intimate relations online have accelerated with the use of phones and smartphones, particularly platforms such as SnapChat and WhatsApp… Sexting, for the purpose of this paper, is the sharing of intimate texts through digital media. But this raises complexity for real life relationships, how the online experience relates to that, and how heterosexual relationships are mediated. My work is based on interviews.

I will be talking about “sex selfies”, which are distributed to a global audience online. These selfies (the speaker is showing examples from the “sexselfie” hashtags) purport to be intimate, despite their global sharing and nature. The hashtags here (established around 2014) show heterosexual couples… There is (by comparison to non-heterosexual selfies) a real focus on women’s bodies, which is somewhat at odds with the expectations of girls and women showing an interest in sex. Are we losing our memory of what is intimate? Are sex selfies a way to share and retain that memory?

I spoke to women in the UK and US for my research. All men approached refused to be interviewed. We have adapted the way we communicate face to face through the way we connect online. My participants reflect social media trends already reported in the media, of the blurring of the different spheres of public and private. And that is feeding into our intimate lives too. Prensky (2001) refers to this generation as “Digital Natives” (I insert my usual disclaimer that this is the speaker not me!), and it seems that this group are unable to engage in that intimacy without sharing that experience. My work focuses on sharing online, and how intimacy is formed offline. I took an ethnographic approach, and my participants are very much of a similar age to me, which helped me to connect as I spoke to them about their intimate relationships.

There becomes a dependency on mobile technologies, of demand and expectation… And that is leading to a “leisure for pleasure” mentality (Cruise?)… You need that reward and return for sharing, and that applies to sexting. Amy Hassenhof notes that sexting can be considered a broadcast media. And mainstream media has also been scrutinising sexting and technology, giving coverage to issues such as “Revenge Porn” – which was made a criminal offence in 2014. This made sexting more taboo and changed public perceptions – with judgement online of images of bodies shared on Twitter. When men participate they sidestep a label, being treated with the highly gendered “boys will be boys” casualness. By contrast women showing their own agency may be subject to “slut shaming” (2014 onwards), but sexting continues. I was curious to find out why it continues, and how the women in my studies relate to comments that may be made about them – although there is a feeling of safety (and facelessness) about posting online, versus real world practices.

An expert interview with Amy Hassenhof raised the issue of expectations of privacy – that most of those sexting expect their image to be private to the recipient. Intimate information shared through technology becomes tangled with surveillance culture that is bound up with mobile technologies. Smartphones have cameras, microphone… This contributes to a way of imagining the self that is formed only by how we present ourselves online.

The ability to sext online continues; Butler notes the freedom of expression online, but also the way in which others’ comments can make a real impact on the lives of those sharing.

In conclusion it is not clear the extent to which digital natives are sharing deliberately – perceptions seemed to change as a result of the experience encountered. One of my participants felt less in control after reflective interviews about her practice, than she had before. We demand communication instantly… But this form of sharing enables emotional reliving of the experience.

Q&A

Q1) Really interesting research. Do you have any insights in why no men wanted to take part?

A1) The first thing is that I didn’t want to interview anyone that I knew. When I did the research I was a student, and I managed to find fellow student participants, but the male participants cancelled… But I have learned a lot about research since I undertook my evidence gathering. Women were happy to talk about it – perhaps because they felt judged online. There is a lot I’d do differently in terms of the methodology now.

Q2) What is the psychological rationale for sharing details like the sex selfies… Or even what they are eating. Why is that relevant for these people?

A2) I think that the reason for posting such explicit sexual images was to reinforce their heterosexual relationships and that they are part of the norm, as part of their identity online. They want others to know what they are doing… As their identity online. But we don’t know if they have that identity offline. When I interviewed Amy Hassenhof she suggested it’s a “faceless identity” – that we adopt a mask online, and feel able to say something really explicit…

A Social Network Game for Encouraging Girls to Engage in ICT and Entrepreneurship: Findings of the Project MIT-MUT
– Natalie Denk, Alexander Pfeiffer and Thomas Wernbacher, Donau Universität Krems, Ulli Rohsner, MAKAM Research GmbH, Wien, Austria and Bernhard Ertl, Universität der Bundeswehr, Munich, Germany

This work is based on a mixture of literature review, qualitative analysis of interviews with students and teachers, and the development of the MIT-MUT game, with input and reflection from students and teachers. We are testing the game, and will be sharing it with schools in Austria later this year.

Our intent was to broaden the career perspectives of girls at the age of 12-14 – this is younger than is usually targeted but it is the age at which they have to start making decisions and taking steps in their academic life that will impact on their career. Their decisions are impacted by family, school, peer groups. But the issue is that a lot of girls don’t even see a career in ICT as an option. We want to show them that it is a possibility, to show them the skills they already have, and that this offers a wide range of opportunities and possible career pathways. We also want to provide a route to mentors who are role models, as this is still a male dominated field, especially when it comes to entrepreneurship.

Children and young people today grow up as “digital natives” (Prensky 2001) (again, my usual critical caveat), they have a strong affinity towards digital media, they frequently use internet, they use social media networks – primarily WhatsApp, but also Facebook and Instagram. Girls also play games – it’s not just boys that enjoy online gaming – and they do that on their phones. So we wanted to bring this all together.

The MIT-MUT game takes the form of a 7 week long live challenge. We piloted this in Oct/Nov 2015 with 6 schools and 65 active players in 17 teams. The main tasks in the game are essentially role playing ICT entrepreneurship… founding small start up companies, creating a company logo, and finding an idea for an app for the target group of youth. They then needed to turn their idea into a paper prototype – drawing screens and ideas on paper to demonstrate basic functionality and ideas. The girls had to make a video of this paper prototype, and also present their company on video. We deliberately put few technological barriers in place, but the focus was on technology, and the creative aspects of ICT. We wanted the girls to use their skills, to try different roles, to have the opportunity to experiment and be creative.

To bring the schools and the project team together we needed a central connecting point… We set up a SeN (Social Enterprise ?? Network), and we did that with Gemma – a Microsoft social networking tool for use within companies, which is closed to outside organisations. This was very important for us, given the young age and need for safety in our target user group. They had many of the risks and opportunities of any social network but in this safe, bounded space. And, to make this more interesting for the girls, we created a fictional mentor character, “Rachel Lovelace” (named for Ada Lovelace), a Silicon Valley entrepreneur coming to Austria to invest. The students see a video introduction – we had an actress record about 15 video messages. So everything from the team was communicated through the character of Rachel, whether on video or in her network.

A social network like Gemma is perfect for gamification aspects – we did have winners and prizes – but we also had achievements throughout the challenge for finishing a phase, making a key contribution, etc. And of course there is a “like” button, the ability to share or praise someone in the space, etc. We also created some mini games, based on favourite genres of the girls – the main goal of these was to act as starting points for discussing competencies in ICT and entrepreneurship contexts. The idea being that if you play this game you have these competencies, so why not consider doing more with that.

So, within Gemma, the interface looks a lot like Facebook… And I’ll show you one of these paper prototypes in action (it’s very nicely done!), see all of the winning videos: http://www.mitmut.at/?page_id=940.

To evaluate this work we had a quantitative approach – part of the game, presented by Rachel – as well as a qualitative approach based on feedback from teachers and some parents. We had 65 girls in 17 teams; 78% completed the challenge at least to phase 4 (the video presentation – all the main tasks completed). 26% participated in the voting phase (phase 5). Of our participants 30 girls would recommend the game to others, 10 were uncertain, and 4 would not recommend the game. They did enjoy the creativity, the design, the paper prototyping. They didn’t like the information/the way the game was structured. The communication within the game was rated in quite a mixed way – some didn’t like it, some liked it. The girls interested in ICT rated the structure and communication more highly than others. The girls stayed motivated but didn’t like the long timeline of the game. And we saw a significant increase in knowledge of ICT professions, they reported an increase in feeling talented, and they had a higher estimation of their own presentation skills.

In the qualitative approach students commented on the teamwork, the independence, the organisational skills, the presentation capabilities. They liked having a steady contact person (the Rachel Lovelace character), the chance of winning, and the feeling of being part of a specialist project.

So now we have a beta version, and we have added a scoring system for contributions with points and stars. We had a voting process but didn’t punish girls for not delivering on time – we wanted to be very open… But the girls thought that we should have done this and given more objective, stricter feedback. They also wanted more honest and less enthusiastic feedback from “Rachel” – they felt she was too enthusiastic. We also restructured the information a bit…

For future development we’d like to make a parallel programme for boys. The girls appreciated the single sex nature of the network. And I would personally really like to develop a custom made social media network for better gamification integration, etc. And I’d like…

Q&A

Q1) I was interested that you didn’t bring in direct technical skills – coding, e.g. on Raspberry PIs etc. Why was that?

A1) We intentionally skipped the programming part… They have lessons and work on programming already… What is lacking is that idea of creative ways to use ICT, the logical and strategic skills you would need… They already do informatics as part of their teaching.

Q2) You set this up because girls and women are less attracted to ICT careers… But what is the reason?

A2) I think they can’t imagine having a career in ICT… I think that is mainly about gender stereotypes. They don’t really know women in ICT… And they can’t imagine what that is as a career, what it means, what that career looks like… and how to act out their interests…

And with that I’ve switched to the Education track for the final part of this session… 

Social Media and Theatre Pedagogy for the 21C: Arts-Based Inquiry in Drama Education – Amy Roberts and Wendy Barber, University of Ontario, Canada

Amy is starting her presentation with a video on social media and performance pedagogy, the blurring of boundaries and direct connection that it affords. The video notes that “We have become a Dramaturgical Community” and that we decide how we present ourselves.

Theatre does not exist without the audience, and theatre pedagogy exists at the intersection between performance and audience. Cue another video – this time more of a co-presentation video – on the experience of the audience being watched… Blau in The Audience (1990) talks about the audience “not so much as a mere congregation of people as a body of thought and desire”.  Being an audience member is now a standard part of everyday life – through YouTube, Twitter, Facebook, Vine… We see ourselves every day. The song “Digital Witness” by Saint Vincent sums this up pretty well.


Richard Allen in 2013 asked whether audiences actually want conclusive endings in their theatre, instead showing a preference for more open ended, videogame-type experiences. When considering what modern audiences want… Liveness is prioritised in all areas of life, and that does speak to the immediacy of theatre. Originally “live” was about co-presence but digital spaces are changing that. The feeling of liveness comes from our engagement with technology – if we engage with machines, like we do with humans, and there is a response, then that feels live and immediate. Real time experiences give a feeling of liveness… One way to integrate that with theatre is through direct digital engagement across the audience, and with the performance. Both Baker and Auslander agree that liveness is about immediate human contact.

The audience demands live work that engages them in its creation and consumption through the social media spaces they use all the time. And that means educators have to be part of connecting the need for art and tech… So I want to share some of my experiences in attempting “drama tech” research. I’m calling this: “Publicly funded school board presents… Much ado about nothing”. I had been teaching dramatic arts for many years, looking at new technologies and the potential for new tools to enable students to produce “web theatre” around the “theatre of the oppressed” for their peers, with collaboration with the audience as creator and viewer. I was curious to see how students would use the 6 second restriction of Vine, and whether, using familiar tools, students could create work for their peers.

The project had ethics approval… All was set, but a board member blocked the project as Twitter and Vine “are not approved learning tools”… I was told I’d have to use Moodle… Now I’ve used Moodle before… And it’s great but NOT for theatre (see Nicholls and Phillip 2012). Eisner (2009) says “Education can learn from the arts that form and content cannot be separated. How something is said or done shapes the content of experience.” The reason for this blocking was the potential that students might encounter risks and issues that they shouldn’t access… But surely that is true of television, of life, everything. We have to teach students to manage risks… Instead we have a culture of blocking content, e.g. anything with “games” in the name – even if they are educational tools. How can you teach media literacy if you don’t have the support to do that, to open up? And this seems to be the case across publicly funded Ontario schools. I am still hoping to do this research in the future though…

Q&A

Q1) How do you plan to overcome those concerns?

A1) I’m trying to work with those in power… We had loads of safeguards in place… I was going to upload the content myself… It was really silly. The social media policy is just so strict.

Q1) They’ll have reasons, you have to engage with those to make that case…

Q2) Can I just ask what age this work was to take place with?

A2) I work with Grade 9-12… But this work specifically was going to focus on 17 and 18 year olds.

Q3) I think that many arts teachers are quite scared by technology – and you made that case well. You focus on technology as a key tool at the end there… And that has to be part of that argument.

A3) It’s both… You don’t teach hammer, you teach how you use the hammer… My presentation is part of a much bigger paper which does address both the traditional and that affordances of technology.

Having had a lovely chat with Amy over lunch, I have now joined Stream B – Monitoring and Privacy on Social media – Chair – Andree Roy

Monitoring Public Opinion by Measuring the Sentiment of Re-tweets on Twitter – Intzar Ali Lashari and Uffe Kock Wiil, University of Southern Denmark, Denmark

I have just completed my PhD at the University of Southern Denmark, and I’ll be talking about some work I’ve been doing on measuring public opinion using social media. I have used Twitter to collect data – this is partly because Twitter is most readily accessible and it is structured in a way that suits this type of analysis – it operates in real time, people use hashtags, and there are frequent actors and influencers in this space. And there are lots of tools available for analysis such as TweetReach, Google Analytics, Cytoscape. My project, CBTA, combines monitoring and analysis of tweets…

I have been looking at geographical location based tweets, using a trend based data analyser, with data collection on specific dates and using network detection on negative comments. I also limited my analysis to tweets which have been retweeted – to show they have had some impact. In terms of related studies supporting this approach: Stieglitz (2012) found that retweeting is a simple, powerful mechanism for information diffusion; Shen (2015) found that re-tweeting behaviour is an influencing behaviour driven by the posts of influential users. The sentiment analysis – a really useful quick assessment of content – looks at “positive”, “negative” and “neutral” content. I then used topic based monitoring to get an overview of the wider public. The intent was to move towards real-time monitoring and analysis capabilities.

So, the CBTA tool display shows you trending topics, which you can pick from, and then you can view tweets and filter by positive, negative, or neutral posts. The tool is working and the code will be shared shortly. In this system a keyword search collects tweets, which are then filtered. Once filtered (for spam etc.), tweets are classified using NLTK, which categorises them into “Endorse RT”, “Oppose RT” and “Report RT”; the weighted retweets are then put through a process to compute net influence.
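The exact filtering rules and classifier set-up weren’t shown in the talk, so the snippet below is only a plausible reading of that pipeline: it uses NLTK’s off-the-shelf VADER sentiment analyser (an assumption – the CBTA tool may train its own classifier) and maps compound sentiment onto the three retweet categories; the spam heuristics and thresholds are invented for illustration.

```python
# A minimal sketch, not the authors' implementation: spam markers, VADER, and the
# 0.05 cut-offs are all assumptions standing in for the unpublished CBTA details.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
sia = SentimentIntensityAnalyzer()

SPAM_MARKERS = ("follow back", "free followers")  # hypothetical spam-filter heuristics


def is_spam(text: str) -> bool:
    return any(marker in text.lower() for marker in SPAM_MARKERS)


def classify_retweet(text: str) -> str:
    """Map compound sentiment onto the three retweet categories named in the talk."""
    score = sia.polarity_scores(text)["compound"]
    if score >= 0.05:
        return "Endorse RT"
    if score <= -0.05:
        return "Oppose RT"
    return "Report RT"  # neutral retweets treated as simply relaying information


retweets = [
    "RT: The operation is making real progress, well done",
    "RT: This operation is a disaster for civilians",
    "RT: Operation continues in the northern region today",
]
labelled = [(t, classify_retweet(t)) for t in retweets if not is_spam(t)]
print(labelled)
```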

So my work has looked at data from Pakistan around the terms: Zarb-e-Azb; #OpZarbeAzb; #Zerb-e-asb etc. I gathered tweets and retweets, and deduplicated those tweets with more than one hashtag. Once collected, the algorithm for measuring re-tweet influence used follower counts, onward retweets, etc. And looking at the influence here, most of the influential tweets were those with a positive/endorsing tone.
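The talk mentioned follower counts and onward retweets feeding a net influence figure, but not the exact weighting, so the function below is only an illustrative stand-in: the log damping and the per-category signs are my assumptions, not the paper’s formula.

```python
# Illustrative only: a signed, log-damped reach score per retweet, summed into a
# net influence figure. The real CBTA weighting is not given in these notes.
import math
from dataclasses import dataclass


@dataclass
class Retweet:
    followers: int        # followers of the retweeting account
    onward_retweets: int  # how often this retweet was itself retweeted
    category: str         # "Endorse RT", "Oppose RT" or "Report RT"


SIGN = {"Endorse RT": 1.0, "Oppose RT": -1.0, "Report RT": 0.0}


def net_influence(retweets: list[Retweet]) -> float:
    """Endorsements add to influence, opposition subtracts, neutral reports count as zero."""
    return sum(
        SIGN[r.category] * math.log1p(r.followers + r.onward_retweets) for r in retweets
    )


print(net_influence([Retweet(1200, 4, "Endorse RT"), Retweet(300, 0, "Oppose RT")]))
```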

We now have case studies for Twitter, and also for other social media sites. We will be making the case studies available online. And we are looking at other factors; for instance we are interested in the location of tweets as a marker for accuracy/authenticity, and to understand how other areas are influencing/influenced by global events.

Q&A

Q1) I have a question about the small amount of negative sentiment… What about sarcasm?

A1) When you look at data you will see I found many things… There was some sarcasm there… I have used NLTK but I added my own analysis to help deal with that.

Q2) So it registers all tweets right across Twitter? So can you store that data and re-parse it again if you change the sentiment analysis?

A2) Yes, I can reprocess it. In Twitter there is limited availability of Tweets for 7 days only so my work captures a bigger pool of tweets that can then be analysed.

Q3) Do you look at confidence scores here? Sentiment is one thing…

A3) Yes, this processing needs some human input to train it… But in this approach it is trained by data that is collected each week.

Social Media and the European Fundamental Rights to Privacy and Data Protection – Eva Beyvers, University of Passau and Tilman Herbrich, University of Leipzig, Germany

Tilman: Today we will be talking about data protection and particularly potential use in commercial contexts, particularly marketing. This is a real area of conflict in social media. We are going to talk about the fundamental rights to privacy and data protection in the EU, their interaction with other fundamental rights, and things like profiling. The Treaties and the Charter of Fundamental Rights (CFR) are primary EU law. There is also secondary law, including Directives (requiring transposition into national law, and not binding until then) and Regulations (binding in their entirety on all member states – they are automatically law in all member states).

In 2018 the new data protection framework (the GDPR) will become legally binding across the piece. With this change private entities and public bodies will all be impacted. But how does one enforce those rights? An individual can institute proceedings before a national court; the national court must then refer questions to the European Court of Justice, which will answer and provide clarifications, enabling the national court to take a judgement on the specific case at hand.

When we look across the stakeholders, we see that they all have different rights under the law. And that means there is a requirement to balance those rights. The European Court of Justice (ECJ) has always upheld that the concerned rights and interests must be considered, evaluated and weighed in order to find an adequate balance between colliding fundamental rights – as an example, in the Google Spain data protection case Google’s commercial rights were deemed secondary to the individual’s right to privacy.

Eva: Most social media sites are free to use, but this is made possible by highly profiled advertising. Profiling is defined in Article 4 of the GDPR as including aspects of behaviour, personality, etc. Profiling is already seen as an issue that is a threat to data protection. We would argue that it poses an even greater threat in social media: users are frequently comfortable giving their real name in order to find others, which means they are easily identifiable; users’ private lives are explicitly part of the individual’s profile and may include sensitive data; and this broad and comprehensive data set has very wide scope.

So, on the one hand the user’s individual privacy is threatened, but so is the freedom to conduct a business (Art 16 CFR). The right to data protection (Article 8 CFR) rests on the idea of consent – and the way that consent is articulated in the law – that consent must be freely given, informed and specific – is incompatible with social networking services and the heavy level of data processing associated with them. These spaces involve excessive processing, there is dynamic evolution of these platforms, and their concept is networking. Providers’ changes to the platform, its affordances, advertising, etc. create continual changes in the use and collection of data – at odds with the requirement for specific consent. The concept of networking means that individuals manage information that is not just about themselves but also about others – their image, their location, etc. European data protection law does nothing to accommodate the privacy of others in this way. There has been no specific ruling on the interaction of business and personal rights here, but given previous trends it seems likely that business will win.

These data collections by social networking sites also have commercialisation potential to exploit users’ data. It is not clear how this will evolve – perhaps through greater national law or the changing of terms and conditions?

This is a real tension, with rights of businesses on one side, the individual on the other. The European legislator has upheld fundamental data protection law, but there is still much to examine here. We wanted to give you an overview of relevant concepts and rights in social media contexts and we hope that we’ve done this.

Q&A

Q1) How do these things change when most social media companies are outwith the legislative jurisdiction of Europe – they are global?

A1) The General Data Protection Regulation in 2018 will target companies operating in the EU, if they profile people there. It was unclear until now… Previously you had to have a company here in Europe (usually Ireland), but from 2018 it will be very clear and very strict.

Q2) How has the European Court of Human rights fared so far in judgements?

A2) In the Google Spain case, and in another digital rights case, the court has upheld personal rights. And we see this also on the storage and retention of data… But the regulation is quite open; right now there are ways to circumvent it.

Q3) What are the consequences of non-compliance? Maybe the profit I make is greater than that risk?

A3) That has been an issue until now. Fines have been small. From 2018 it will be up to 5% of worldwide revenue – that’s a serious fine!

Q4) Is the law stronger than private agreement? Many agree without reading, or without understanding – are they protected if they agree to something illegal?

A4) Of course you are able to contract and agree to data use. But you have to be informed… So if you don’t understand, and don’t care… the legislator cannot change this. This is a problem we don’t have an approach for. You have to be informed, have to understand the purpose, and understand the means and methods – without that information the consent is invalid.

Q5) There has been this Safe Harbour agreement breakdown. What impact is that having on regulations and practices?

A5) The regulations, probably not? But the effect is that data processing activities cannot be based on Safe Harbour agreement… So companies have to work around or work illegally etc. So now you can choose a Data Protection agreement – standardised contracts to cover this… But that is insecure too.

Digital Friendship on Facebook and Analog Friendship Skills – Panagiotis Kordoutis, Panteion University of Social and Political Sciences, Athens and Evangelia Kourti, University of Athens, Greece

Panagiotis: My colleague and I were keen to look at friendship on Facebook. There is a lot of work on this topic of course, but very little work connecting Facebook and real life friendship from a psychological perspective. But let’s start by seeing how Facebook describes itself and friendship… Facebook talks about “building, strengthening and enriching friendships”. Operationally they define friendship through digital “Facebook acts” such as “like”, “comment”, “chat” etc. But this creates a paradox… You can have friends you have never met and will never meet – we call them “unknown friends” – and they can have real consequences for life.

People perceive friendship on Facebook in different ways. In Greece (Savrami 2009; Kourti, Kordoutis, Madoglou 2016) young people see Facebook friendship as a “phony” space, due to “unknown friends” and the possibility of manipulating self presentation; as a tool for popularity, public relations, useful acquaintances; as a doubtful and risky mode of dating; as the resort of people with a limited number of friends and a lack of “real” social life; and as the resort of people who lack friendship skills (Buote, Wood and Pratt 2009). BUT it is widely used and most are happy with their usage…

So, what about psychological definitions of analog friendship? Baron-Cohen and Wheelwright (2003) talk about friendship as survival-supporting social interdependence based on attachment and instrumentality skills.

Attachment involves high interdependence, commitment, systematic support, responsiveness, communication, investment in joint outcomes, high potential for developing the friendship – it is not static but dynamic. It is being satisfied by the interaction with each other, with the company of each other. They are happy to just be with someone else.

Instrumentality is also part of friendship though, and it involves low interdependence, low commitment, non-systematic support, low responsiveness, superficial communication, expectations of specific benefits and personal outcomes, and little potential for developing the relationship – a more static arrangement. And people are satisfied by interacting with others for a specific goal or activity.

Now, the way that I have presented this can perhaps look like the good and the bad side… But we need both sides of that equation, we need both sets of skills. What we perceive as friendship in analog life usually has a prevalence of attachment over instrumentality…

So, why are we looking at this? We wanted to look into whether those common negative attitudes about Facebook and friendship were accurate. Will FB users with low friendship skills have more FB friends? Will they engage in more FB “friendship acts”? Will they use FB more intensely? Will they have more “unknown” friends than users with stronger friendship skills? And when I say stronger friendship skills, I mean those with more attachment skills versus those with more instrumental skills.

In our method here we had 201 participants, most of whom were women (139), from universities and technological institutes in metropolitan areas of Greece. All had profiles on FB. Median age was 20, all had used Facebook for 2 hours the day before, and many reported being online at least 8 hours a day, some on a permanent ongoing basis. We asked them how many friends they have… Then we asked them for an estimate of how many they know in person. Then we asked them how many of these friends they have never met or will never meet – they provided an estimation. There were other questions about interactions on Facebook. We used a scale called the Facebook Intensity Scale (Ellison, Steinfield and Lampe 2007) which looks at the importance of Facebook in the person’s life (a 12 pt Likert scale). We also used an Active Digital Sociability Scale which we came up with – a 12 pt Likert scale on FB friendship acts etc. And we used the Friendship Questionnaire (Baron-Cohen and Wheelwright 2003). This was a paper exercise, taking less than 30 minutes.
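The notes don’t reproduce the scale items or the exact cut-off used to split participants, so the sketch below only illustrates the kind of scoring step involved, assuming (from the definition above) that a participant counts as having “stronger” friendship skills when their attachment score prevails over their instrumentality score; the item values and helper names are invented.

```python
# Hypothetical scoring sketch - item values and the operationalisation are assumptions,
# not taken from the paper. Each (sub)scale score is simply the mean of its Likert items.
from statistics import mean


def scale_score(items: list[int]) -> float:
    """Mean of Likert responses for one participant on one (sub)scale."""
    return mean(items)


def friendship_group(attachment_items: list[int], instrumentality_items: list[int]) -> str:
    """'stronger' when attachment prevails over instrumentality, 'weaker' otherwise."""
    stronger = scale_score(attachment_items) > scale_score(instrumentality_items)
    return "stronger" if stronger else "weaker"


# two invented participants
print(friendship_group([5, 4, 5, 4], [2, 3, 2, 3]))  # -> stronger
print(friendship_group([2, 3, 2, 2], [4, 4, 5, 4]))  # -> weaker
```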

When we looked at stronger and weaker friendship skills groups – we had 44.3% of participants in the stronger friendship skills group, 52% in the weaker friendship skills group. More women had stronger friendship skills – consistent with the general population across countries.

So, firstly, do people with weaker friendship skills have more friends? No, there was no difference. But we found a gender result – men had more friends on Facebook, and also had weaker friendship skills.

Do people with weaker friendship skills engage more frequently in FB friendship operations or friendship acts? No. No difference. Chatting was most popular; browsing and liking were the most frequent acts regardless of skills. Less frequent were participating in groups, checking in and gaming. BUT there was a very telling difference: men were more likely to comment than women, and that’s significant for me.

Do people with weaker friendship skills use FB more intensively? Yes and no. There was a difference… but those with stronger friendship skills showed higher FB intensity than those with weaker friendship skills. Men with stronger skills were more intensive in their use than women with strong skills.

Do people with weaker friendship skills have more friends on Facebook? No. Do they have more unknown friends? No. But there was a gender effect: 16% of men have unknown friends, only 9% of women do. Do those with weaker friendship skills interact more with unknown friends? No, the opposite – those with stronger skills interact more with unknown friends. And so on.

And do those with weaker friendship skills actually meet unknown friends from Fb in real life? Yes, but opposite to expected. If they have stronger skills I’m more likely to meet you in real life… If I am a man… The percentages are small (3% of men, 1% of women).

So, what do I make of all this? Facebook is not the resort of people with weak friendship skills. Our data suggests it may be an advantageous space for those with higher friendship skills; it is a social space regulated by lots of social norms – it is an extension of what happens in real life. And what is the norm at play? It is the familiar idea that men are encouraged to be bold, women to be cautious and apprehensive. Women have stronger social skills, but Facebook and its dynamics suppress them, and enhance men with weaker skills… So, that’s my conclusion here!

Q&A

Q1) Very interesting. When men start to see someone they haven’t met before… Wouldn’t it be women? To hit on them?

A1) Actually yes, often it is dating. But men are eager to go on with it… to interact and go on to meet. Women are very cautious. We have complemented this work with qualitative work that shows women need much longer interaction – they may need to interact for maybe 3 years before meeting. Men are not so concerned.

Q2) You haven’t talked about quality etc. of your quantitative data?

A2) I haven’t mentioned it here, but it’s in the paper (in the Proceedings). The Friendship questionnaire is based on established work, saw similar distribution ratios as seen elsewhere. We haven’t tried it (but are about to) with those with clinical status, Aspergers, etc. The Facebook Intensity questionnaire had a high reliability alpha.

Q3) Did you do any comparison of this data with any questions on trolling, cyber bullying, etc. as the consequences for sharing opinion or engaging with strangers for women is usually harsher than for men.

A3) Yes, some came up in the qualitative study where individuals were able to explain their reasons.

Q4) Did your work look at perceptions by employers etc. And how that made a difference to selecting friends?

A4) We didn’t look at this, but others have. Some are keen not to make friends in specific groups – they use Facebook to sell a specific identity to a specific audience.

Q5) The statistics you produced are particularly interesting… What is your theoretical conjecture as a result of this work?

A5) My feeling is that we have to see looking at Facebook as an alternative mode of socialising. It has been normalised so the same social rules functioning in the rest of society do function in Facebook. This was an example. It sounds commonplace but it is important.

The Net Generation’s Perceptions of Digital Activism – Louise Stoch and Sumarie Roodt, University of Cape Town, South Africa

Sumarie: I will be talking about how the Net Generation view digital activism. And the reason this is of interest to me is because of the many examples of digital activism we see around us. I’ll talk a bit about activism in South Africa, and particularly a recent campaign called “Fees Must Fall”.

There are various synonyms for digital activism but that’s the term I’ll use. So what is this? Its origins start with the internet, with connection and mobilisation. We saw the rise of social media and the huge increase in people using it. We saw economies and societies coming online and using these spaces over the last 10 years. What does this mean for us? Well, it enables quick and far-reaching information sharing. And there is a video that goes with this too.

Joyce 2013 defines digital activism as being about “the use of digital media in collective efforts to bring about social or political change, using methods outside of routine decision-making processes”. It is “non-violent and civil but can involve hacking” (Edwards et al. 2013). We see digital activism across a range of approaches: from slacktivism (things that are easy to participate in), online activism, internet activism and cyber activism, to hacktivism. That’s a broad range, and there are subtleties that divide these and other terms, and the different characteristics of these types of activism.

Some examples…

In 2011 we saw revolutions in Egypt, Tunisia, Occupy Wall Street;

2012-14 we saw BringBackOurGirls, and numerous others;

2015 onwards we have:

  • RhodesMustFall – on how Cecil John Rhodes took resources from the indigenous communities, and recent removals of statues etc. and naming of buildings, highly sensitive.
  • FeesMustFall – about providing free education to everybody, particularly at university. Less than 10% of South Africans go to university and they tend to be from more privileged backgrounds. As a result of this campaign we weren’t allowed to raise our fees for now, and we are encouraged to find other funders to subsidise education; we cannot exclude anyone because of lack of economic access, and the government will help, but… there is a lot of conflict there, particularly around corruption. The government has also classified universities as advantaged or non-advantaged and distributes funds much more to non-advantaged universities.
  • ZumaMustFall – our president is also famous for causing havoc politically and economically for what many see as very poor decisions, particularly under public scrutiny in the last 12 months.

In the room we are having a discussion about other activist activities, including an Israeli campaign against internet censorship law targeted at pornography etc. but including political and cultural aspects. Others mention 38 degrees etc. and successful campaigns to get issues debated. 

Now, digital activism can be on any platform – not necessarily Facebook or Twitter.

When we look at who our students are today – the “Net Generation”, “Millennials”, “Digital Natives” – the characteristics (Oblinger and Oblinger) associated with this group include: confidence with technology, always connected, immediate, social and team orientated, diverse, visual, education driven, emotionally open. But this isn’t a homogenous group; not all students will have these qualities.

So, what did we do with our students to assess their views? We looked at 230 students, and targeted those described in the literature: those born in any year from 1983 to 2003, who needed to have some form of online identit(ies). We had an online questionnaire that ran over 5 days. We analysed with Qualtrics, and thematic analysis. There are limitations here – all students were registered in the Comms department – business etc.

In terms of the demographics: male participants were 38%, female 62%; the average age was 22, minimum 17, maximum 33. We asked about the various characteristics, using Likert scale questions… showing that all qualify sufficiently to be this “Net Generation”. We asked if they paid attention to digital activism… Most did, but it’s not definitive. Now, this is the beginning of a much bigger project…

We asked if the participants had ever signed an online petition – 145 had; and 144 believed online petitions made a difference. We also asked if the internet and social media have a positive effect on an activism campaign – 92% believed they do, and that is of huge interest to companies and advertisers. And 89% of participants felt the use of social media in these causes has contributed to creating a society that is more aware of important issues.

What did we learn? Well, we did see that this generation are inclined to participate in slacktivism. They believe digital activism makes a difference. They pay attention to online campaigns and are aware of which ones have been successful – at least in terms of having some form of impact or engagement.

Now, if you’d like access to the surveys, etc. do get in touch.

Q&A

Q1) How does UCT engage with the student body around local activism?

A1) Mostly that has been digitally, with the UCT Facebook page. There were also official statements from the University… But individual staff were discouraged from reacting. But freedom of speech for the students. It increased conflict in some way, but it also made students feel heard. Hard to call which side it fell on. Policy change is being made as a result of this work… They had a chance to be heard. We wanted free speech (unless totally inappropriate).

Q2) I see that you use a lot of “yes” and “no” questions… I like that but did you then also get other data?

A2) Yes. I present that work here. This paper doesn’t show the thematic analysis – we are still working on submitting that somewhere. We have that data, so once the full piece is in a journal we can let you know.

Q3) Do you know any successful campaigns in your context?

A3) Yes, FeesMustFall started in individual universities, and turned then to the government. It actually got quite serious, quite violent, but that definitely has changed their approach. And that campaign continues and will continue for now.

At this point of the day my laptop lost juice, the internet connection dropped, and there was a momentary power outage just as my presentation was about to go ahead! All notes from my strand are therefore from those taken on my mobile – apologies for more typos than usual!

Stream C – Teaching and Supporting Students – Chair – Ted Clark

Students’ Digital Footprints: Curation of Online Presences, Privacy and Peer Support – Nicola Osborne and Louise Connelly,University of Edinburgh, UK

That was me!

My slides are available on Prezi here: https://prezi.com/hpphwg6u-f6b/students-digital-footprints-curation-of-online-presences-privacy-and-peer-support/

The paper can be found in the ECSM 2016 Proceedings, and will also be shared on the University of Edinburgh Research Explorer along with others on the Managing Your Digital Footprint (research strand) research: http://www.research.ed.ac.uk/portal/en/publications/students-digital-footprints(5f3dffda-f1b4-470f-abd4-24fd6081ab98).html

Please note that the remaining notes are very partial as taken on my smartphone and, unfortunately, somewhat eaten by the phone in the process… 

How do you Choose a Friend? Greek Students’ Friendships in Facebook – Evangelia Kourti, University of Athens, and Panagiotis Kordoutis and Anna Madoglou, Panteion University of Social and Political Sciences, Greece

This work, relating to Panagiotis’ paper earlier (see above) looked at how individuals make friends on Facebook. You can find out more about the methodology in this paper and Panagiotis’ paper on Analog and Facebook friends.

We asked our cohort of students to tell us specifically about their criteria for making new friends, whether they were making the approach for friendship or responding to others’ requests. We also wanted to find out how they interacted with people who were not (yet) their friends in Facebook, and what factors played a part. The data was collected in a paper questionnaire with the same cohort as reported in Panagiotis’ paper earlier today.

Criteria for interacting with a friend never met before within Facebook: the most frequent answer was “I never do”, but the next most popular responses were common interests and interest in getting to know others better. Physical appearance seems to play a role, more so than previous interactions but less so than positive personality traits.

Criteria for deciding to meet a previously unknown friend. Most popular response here was “I never do so”, followed by sufficient previous FB interaction, common acquaintances, positive personality etc. less so.

Correspondence Analysis – I won’t go into here, very interesting in terms of gender. Have a look at the Proceedings. 

The conclusion is that Facebook operates as a social identity tool, and supports offline relationships. Self-involvement with the medium seems to define selection criteria compatible with different social goals, reinforcing one’s real-life social network.

Q&A

Q1) I’m very interested in how FB suggests new friends. Did students comment on that?

A1) We didn’t ask about that.

Q2) Isn’t your data gender biased in some way – most of your participants are female?

A2) Yes. But we continue this… With qualitative data it’s a problem, but means and standard deviation cover that. 

Q2) Reasons for sending a request to someone you don’t know… Early work by Ellison et al. showed people connecting with people they already knew… I wonder if that is still true?

A2) Interesting questions. We must say that students answer to their professor in a uni context, that means maybe this is an explanation… 

Comment) Facebook gives you status for numbers and types of friends etc. 

A2) it’s about social identity and identity construction. Many have different presences with different goals. 

Comment) there is a bit of showing off in social. For status. 

Professional Development of Academic Staff in the use of Social Media for Teaching and Learning – Julie Willems, Deakin University, Burwood, Australia

This work has its roots in 2012. From then to 2015 I ran classes for staff on using social media. This follows conversations I’ve heard around the place about expecting staff to use social media without training.

Now I use a very broad definition of social media – from mainstream sites to mobile apps to gaming etc. Media that accesses digital means for communication in various forms. 

Why do we need staff development for social media? To deal with the concerns of staff, because students are moving to these spaces, and also to channel super-enthusiasm…

My own experience is of colleagues who have run with it, which has raised all sorts of concerns. Some would say that an academic should be doing teaching, research, service and development can end up being the missing leg on the chair there. And staff development is not just about development on social media but also within social media. 

We ran some webinars within Zoom webinar, showing Twitter use with support online, offline and on Twitter – particularly important  for a distributed campus like we have. 

When we train staff we have to think about the pedagogy, we have to think about learning outcomes. We need to align the course structure with LOs, and also to consider staff workload in how we do that training. What will our modes of delivery be? What types of technology will they meet and use – and what prep/overhead is involved in that? We also need to consider privacy issues. And then how do you fill that time. 

So the handout I’ve shared here was work for a one day course, to be delivered in a flipped classroom – prep first, then in person, then online follow up. It could be completed quickly but many spent more time on it.

This PPT is from a module I developed for staff at Monash University, with social media at the intersection of formal and informal learning, and the interaction of teacher-directed learning and student-centred learning. That quadrant model is useful to be aware of: Willem Blakemore(?): 4QF.

Q&A

Q1) What was the object among staff at your university?

A1) First three years were optional. This last year Monash require staff to do 3 one day courses per year. One can be a conference with a full report. Social Media is one of 8 options. Wanted to give an encouragement for folk to attend. 

Q2) How many classes use your social media as a result?

A2) I’ve just moved institution. One of our architecture lecturers was using FB in preference to the LMS: students love it, faculty are concerned. It’s complex. At my current university social media isn’t encouraged but it is used. Regardless of attitude, social media is in use… and we at least have to be aware of that.

Q3) I was starting to think that you were encouraging faculty staff to use Social media alone, rather than with LMS.

A3) At Monash reality was using social alongside LMS. That connection discouraged in my new faculty. 

Q4) I loved that you brought up that pressure from teaching staff – so many academics are on social media now, they are much more active and there is real pressure to integrate.

A4) I think that gap is growing too… between resisters and those keen to use it. Students are aware of what they share – it’s a semi-formal space… You have to be aware.

Q5) do you have a range of social media tools or just Facebook?

A5) Mainly Facebook, sometimes Twitter and LinkedIn. I’m in engineering and architecture.

Q5) Are they approved for use by faculty?

A5) Yes, the structure you have there had been. 

Q6) also encourage academic staff to use academic networking sites?

A6) It depends on context… ResearchGate is good for publications; Academia.edu is like a business card.

Q7) Reward and recognition

A7) Stuff on sheet was for GCAP… Came out of that… 

Q8) Will we still have these requirements to train in, say, 5 years time? Surely they’ll be like pen and pencil now?

A8) Maybe. Universities are keen for good profiles though, which means this stuff matters in this competitive academic marketplace. 

And with that Day One has drawn to a close. I’m off to charge a lot of devices and replace my memory sticks! More tomorrow in a new liveblog post. 

Jul 072016
 

On 27th June I attended a lunchtime seminar, hosted by the University of Edinburgh Centre for Research in Digital Education with Professor Catherine Hasse of Aarhus University

Catherine is opening with a still from Ex Machina (2015, dir. Alex Garland). The title of my talk is the difference between human and posthuman learning; I’ll talk for a while but I’ve moved a bit from my title… My studies in posthuman learning have moved me to more of a posthumanistic learning… Today human beings are capable of many things – we can transform ourselves, and ourselves in our environment. We have to think about that and discuss that, to take account of it in learning.

I come from the centre for Future Technology, Culture and Learning, Aarhus University, Denmark. We are hugely interdisciplinary as a team. We discuss and research what is learning under these new conditions, and to consider the implications for education. I’ll talk less about education today, more about the type of learning taking place and the ways we can address that.

My own background is in the anthropology of education in Denmark, specifically looking at physicists. In 2015 we got a big grant to work on “The Technucation Project” and we looked at the anthropology of education in Denmark in nurses and teachers – and the types of technological literacy they require for their work. My work (in English) has been about “Mattering” – the learning changes that matter to you. The learning theories I am interested in acknowledge cultural differences in learning, something we have to take account of. What it is to be human is already transformed. Posthumanistic learning involves new conceptualisations and material conditions that change what it was to be human. It was, and is, ultra-human to be learners.

So… I have become interested in robots… They are coming into our lives. They are not just tools. Human beings encounter tools that they haven’t asked for. You will be aware of predictions that over a third of jobs in the US may be taken over by automated processes and robots in the next 20 years. That comes at the same time as there is pressure on the human body to become different, at the point at which our material conditions are changing very rapidly. A lot of theorists are picking up on this moment of change, and engaging with the idea of what it is to be human – including those in Science and Technology Studies, and feminist critique. Some anthropologists suggest that it is not geography but humans that should shape our conceptions of the world (Anthropos – Anthropocene), others differ and conceive of the capitalocene. When we talk about the posthuman, a lot of the theories acknowledge that we can’t think of the human in the same way anymore. Kirksey & Helmreich (2010) talk of “natural-cultural hybrids”, and we see everything from heart valves to sensors, to iris scanning… We are seeing robots, cyborgs, amalgamations, including how our thinking feeds into systems – like the stock markets (especially today!). The human is de-centred in this amalgamation but is still there. And we may yet get to this creature from Ex Machina, the complex sentient robot/cyborg.

We see posthuman learning in the uncanny valley… gradually we will move from robots that feel far away, to those with human tissues, to something more human and blended. The new materialism and robotics together challenge the conception of the human. When we talk of learning we talk about how humans learn, not what follows when bodies are transformed by other (machine) bodies. And here we have to be aware that in feminism people like Rosa Predosi(?) have been happy with the discarding of the human: for them it was always a narrative, it was never really there. The feminist critique is that the “human” was really Vitruvian man… But they also critique the idea that the posthuman is a continuation of the individual goal-directed and rational self-enhancing (white male) human. And that questions the posthuman…

There are actually two ways to think of the posthuman. One way is posthuman learning as something that does away with useless, biological bodies (Kurzweil 2005), and we see transhumanists – Verner Vinge, Hans Moravec, Natasha Vita-More – in this space that sees us heading towards the singularity. The alternative is a posthumanistic approach, which is about cultural transformations of boundaries in human-material assemblages, referencing that we have never been isolated human beings – we’ve always been part of our surroundings. That is another way to see the posthuman. This is a case that I make in an article, following Hayles (1999): that we have always been posthuman. We also have, on the other hand, a Spinozist approach, which is about how we, if we understand ourselves as de-centred, are able to see ourselves as agents. In other words we are not separate from the culture, we are all nature-cultural… Not of nature, not of culture, but naturecultural (Hayles; Haraway).

But at the same time, if it is true that human beings can literally shape the crust of the earth, we are now witnessing anthropomorphism on steroids (Latour, 2011 – Waiting for Gaia [PDF]). The Anthropocene perspective is that, if human impact on Earth can be translated into human responsibility for the earth, the concept may help stimulate appropriate societal responses and/or invoke appropriate planetary stewardship (Head 2014); the capitalocene (see Jason Moore) is about moving away from Cartesian dualism in global environmental change – the alternative implies a shift from humanity and nature to humanity in nature, and we have to counter capitalism in nature.

So from the human to the posthuman: I have argued that this is a way we can go with our theories… There are two ways to understand that, singularist posthumanism or Spinozist posthumanism. And I think we need to take a posthumanistic stance on learning – taking account of learning in technological naturecultures.

My own take here… We talk about intra-species differentiations. This nature is not nature as resource but rather nature as matrices – a nature that operates not only outside and inside our bodies (from global climate to the microbiome) but also through our bodies, including embodied minds. We do create intra-species differentiation, where learning changes what maters to you and others, and what matters changes learning. To create an ecological responsible ultra-sociality we need to see ourselves as a species of normative learners in cultural organisations.

So, my own experience, after studying physicists as an anthropologists I no longer saw the night sky the same way – they were stars and star constellations. After that work I saw them as thousands of potetial suns – and perhaps planets – and that wasn’t a wider discussion at that time.

I see it as a human thing to be learners. And we are ultra social learning. And that is a characteristic of being human. Collective learning is essentially what has made us culturally diverse. We have learning theories that are relavent for cultural diversity. We have to think of learning in a cultural way. Mediational approachs in collective activity. Vygotsky takes the idea of learners as social learners before we become personal learners and that is about the mediation – not natureculture but cultureculture (Moll 2000). That’s my take on it. So, we can re-centre human beings… Humans are not the centre of the universe, or of the environment. But we can be at the centre and think about what we want to be, what we want to become.

I was thinking of coming in with a critique of MOOCs, particularly as those being a capitolocene position. But I think we need to think of social learning before we look at individual learning (Vygotsky 1981). And we are always materially based. So, how do we learn to be engaged collectively? What does it matter – for MOOCs for instance – if we each take part from very different environments and contexts, when that environment has a significant impact. We can talk about those environments and what impact they have.

You can buy robots now that can be programmed – essentially sex robots like “Roxxxy” – and are programmed by reactions to our actions, emotions etc. If we learn from those actions and emotions, we may relearn and be changed in our own actions and emptions. We are seeing a separation of tool-creation from user-demand in Capitalocene. The introduction of robots in work places are often not replacing the work that workers actually want support with. The seal robots to calm dementia patients down cover a role that many carers actually enjoyed in their work, the human contact and suport. But those introducing them spoke of efficiency, the idea being to make employees superfluous but described as “simply an attempt to remove some of the most demeaning hard task from the work with old people so the wor time ca be used for care and attention” (Hasse 2013).

These alternative relations with machines are things we always react too, humans always stretch themselves to meet the challenge or engagement at hand. An inferentialist approach (Derry 2013) acknowledges many roads to knowledge but materiality of thinking reflects that we live in a world of not just case but reason. We don’t live in just a representationalism (Bakker and Derry 2011) paradigm, it is much more complex. Material wealth will teach us new things.. But maybe these machines will encourage us to think we should learn more in a representative than an inferentialist way. We have to challenge robotic space of reasons. I would recommend Jan Derry’s work on Vygotsky in this area.

For me robot representationalism has the capacity to make convincing representations… You can give and take answers but you can’t argue space and reasons… They cannot reason from this representation. Representational content is not articulated by determinate negation and complex concept formation. Algorithmic learning has potential and limitations, and is based on representationalism. Not concept formation. I think we have to take a position on posthumanistic learning, with collectivity as a normative space of reasons; acknowledge mattering matter in concept formation; acknowledge human inferentialism; acknowledge transformation in environment…

Discussion/Q&A

Q1) Can I ask about causes and reasons… My background is psychology and I could argue that we are more automated than we think we are, that reasons come later…

A1) Inferentialism is challenging  the idea of giving and taking reasons as part of normative space. It’s not anything goes… It’s sort of narrowing it down, that humans come into being in terms of learning and thinking in a normative space that is already there. Wilfred Sellers says there is no “bare given” – we are in a normative space, it’s not nature doing this… I have some problems with the term dialectical… But it is a kind of dialective process. If you give an dtake reasons, its not anything goes. I think Jen Derry has a better phrasing for this. But that is the basic sense. And it comes for me from analytical philosophy – which I’m not a huge fan of – but they are asking important questions on what it is to be human, and what it is to learn.

Q2) Interesting to hear you talk about Jan Derry. She talks about technology perhaps obscuring some of the reasoning process and I was wondering how representational things fitted in?

A2) Not in the book I mentioned but she has been working on this type of area at University of London. It is part of the idea of not needing to learn representational knowledge, which is built into technological systems, but for inferentialism we need really good teachers. She has examples about learning about the bible, she followed a school class… Who look at the bible, understand the 10 commandments, and then ask them to write their own bible 10 commandments on whatever topic… That’s a very narrow reasoning… It is engaging but it is limited.

Q3) An ethics issue… If we could devise robots or machines, AI, that could think inferentially, should we?

A3) A challenge for me – we don’t have enough technical people. My understanding is that it’s virtually impossible to do that. You have claims but the capacities of AI systems so far are so limited in terms of function. I think that “theory of mind” is so problematic. They deteriorise what it means to be human, and narrow what it means to be our species. I think algorithmic learning is representational… I may be wrong though… If we can… There are poiltical issues. Why make machines that are one to one to human beings… Maybe to be slaves, to do dirty work. If they can think inferentiality, should they not have ethical rights. In spinostas we have a responsibility to think about those ethical issues.

Q4) You use the word robot, that term is being used to be something very embodies and physical.. But algorithmic agency, much less embodied and much less visible – you mentioned the stock market – and how that fits in.

A4) In a way robots are a novelty, a way to demonstrate that. A chatbot is also a robot. Robot covers a lot of automated processes. One of the things that came out of AI at one point was that AI couldn’t learn without bodies.. That for deep learning there needs to be some sort of bodily engagement to make bodily mistakes. But then encounters like Roxy and others is that they become very much better… As humans we stretch to engage with these robots… We take an answer for an answer, not just an algorithm, and that might change how we learn.

Q4) So the robot is a point of engaging for machine learning… A provocation.

A4) I think roboticists see this as being an easy way to make this happen. But everything happens so quickly… Chips in bodies etc. But can also have robots moving in space, engaging with chips.

Q5) Is there something here about artifical life, rather than artifical intelligence – that the robot provokes that…

A5) That is what a lot of roboticists work at, is trying to create artificial life… There is a lot of work we haven’t seen yet. Working on learning algorithms in computer programming now, that evolves with the process, a form of artifical life. They hope to create robots and if they malfunction, they can self-repair so that the next generation is better. We asked at a conference in Prague recently, with roboticists, was “what do you mean by better?” and they simply couldn’t answer that, which was really interesting… I do think they are working on artifical life as well. And maybe there are two little connections between those of us in education, and those that create these things.

Q6) I was approached by robotics folks about teaching robots to learn drawing with charcoal, largely because the robotic hand had enough sensitivity to do something quite complex – to teach charcoal drawing and representation… The teacher gesticulates, uses metaphor, describes things… I teach drawing and representational drawing… There is no right answer there, which is tough for robototics… What is the equivelent cyborg/dual space in learning? Drawing toolsa re cyborg-esque in terms of digital and drawing tools… BUt also that diea of culture… You can manipulate tools, awareness of function and then the hack, and complexity of that hack… I suppose lots of things were ringing true but I couldn’t quite stick them in to what I’m trying to get at…

A6) Some of this is maybe tied to Schuman Enhancement Theory – the idea of a perfect cyborg drawing?

Q6) No, they were interested in improving computer learning, and language, but for me… The idea of human creativity and hacking… You could pack a robot with the history of art, and representation, so much information… Could do a lot… But is that better art? Or better design? A conversation we have to have!

A6) I tend to look at the dark side of the coin in a way… Not because I am techno-determinist… I do love gadgets, technology enhances our life, we can be playful… BUt in the capitalocene… There is much more focus on this. The creative side of technology is what many people are working on… Fantastic things are coming up, crossovers in art… New things can be created… What I see in nursing and teaching learning contexts is how to avoid engaging… So lifting robots are here, but nursing staff aren’t trained properly and they avoid them… Creativity goes many ways… I’m seeing from quite a particular position, and that is partly a position of warning. These technologies may be creative and they may then make us less and less creative… That’s a question we have to ask. For physicists, who have to be creative, are always so tied to the materiality, the machines and technologies in their working environments. I’ve also seen some of these drawing programmes…. It is amazing what you can draw with these tools… But you need purpose, awareness of what those changes mean… Tools are never innocent. We have to analyse what tools are doing to us

Jul 052016
 

This afternoon I’m at UCL for the “If you give a historian code: Adventures in Digital Humanities” seminar from Jean Bauer of Center for Digital Humanities at Princeton University, who is being hosted by Melissa Terras of the UCL Centre for Digital HumanitiesI’ll be liveblogging so, as usual, any corrections and additions are very much welcomed. 

Melissa is introducing Jean, who is in London en route to DH 2016 in Kraków next week. Over to Jean:

I’m delighted to be here with all of the wonderful work Melissa has been doing here. I’m going to talk a bit about how I got into digital humanities, but also about how scholars in library and information sciences, and scholars in other areas of the humanities might find these approaches useful.

So, this image (American Commissioners of the Preliminary Peace Negotiations with Great Britain. By Benjamin West, London, England; 1783 (begun). Oil on canvas. (Unframed) Height: 28 ½” (72.3 cm); Width: 36 ¼” (92.7 cm). 1957.856) is by Benjamin West, the Treaty of Paris, 1783. This is the era that I research and what I am interested in. In particular I am interested in John Adam, the first minister of the United States – he even gets one line in Hamilton: the musical. He’s really interested as he was very concerned with getting thinking and processes on paper. And on the work he did with Europe, where there hadn’t really been American foreign consuls before. And he was also working on areas of the North America, making changes that locked the British out of particular trading blocks through adjustments brought about by that peace treaty – and I might add that this is a weird time to give this talk in England!

Now, the foreign service at this time kind of lost contact once they reached Europe and left the US. So the correspondence is really important and useful to understand these changes. There are only 12 diplomats in Europe from 1775-1788, but that grows and grows with consuls and diplomats increasing steadily. And most of those consuls are unpaid as the US had no money to support them. When people talk about the diplomats of this time they tend to focus on future presidents etc. and I was interested in this much wider group of consuls and diplomats. So I had a dataset of letters, sent to John Jay, as he was negotiating the treaty. To use that I needed to put this into some sort of data structure – so, this is it. And this is essentially the world of 1820 as expressed in code. So we have locations, residences, assignments, letters, people, etc. Within that data structure we have letters – sent to or from individuals, to or from locations, they have dates assigned to them. And there are linkages here. Databases don’t handle fuzzy dates well, and I don’t want invalid dates, so I have a Boolean logic here. And also a process for handling enclosures – right now that’s letters but people did enclose books, shoes, statuettes – all sorts of things! And when you look at locations these connect to “in states” and states and location information… This data set occurs within the Napoleonic wars so none of the boundaries are stable in these times so the same location shifts in meaning/state depending on the date.

So, John Jay has all this correspondence between May 27 and Nov 19, 1794 and they are going from Europe to North America, and between the West Indies and North America. Many of these are reporting on trouble. The West Indies are ship siezures… And there are debts to Britain… And none of these issues get resolved in that treaty. Instread John Jay and Lord Granville set up a series of committees – and this is the historical precident for mediation. Which is why I was keen to understand what information John Jay had available. None of this correspondance got to him early enough in time. There wasn’t information there to resolve the issue, but enough to understand it. But there were delays for safety, for practical issues – the State Department was 6 people at this time – but the information was being collected in Philadephia. So you have a centre collecting data from across the continent, but not able to push it out quickly enough…

And if you look at the people in these letters you see John Jay, and you see Edmund Jennings Randolph mentions most regularly. So, I have this elaborate database (The Early American Foreign Service Database – EAFSD) and lots of ways to visualise this… Which enables us to see connections, linkages, and places where different comparisons highlight different areas of interest. And this is one of the reasons I got into the Humanities. There are all these papers – usually for famous historical men – and they get digitised, also the enclosures… In a single file(!), parsing that with a partial typescript, you start to see patterns. You see not summaries of information being shared, not aggregation and analysis, but the letters being bundled up and sent off – like a repeater note. So, building up all of this stuff… Letters are objects, they have relationships to each others, they move across space and time. You look at the papers of John Adams, or of any political leader, and they are just in order of date sent… Requiring us to flip back and forth. Databases and networks allow us to follow those conversations, to understand new orders to read those letters in.

Now, I had a background in code before I was a graduate student. What I do now at Princeton (as Associate Director of the Center for Digital Humanities) is to work with librarians and students to build new projects. We use a lot of relational databases, and network analysis… And that means a student like one I have at the moment can have a fully described, fully structured data set on a vagrant machine that she can engage with, query, analysis, and convey to her examiners etc. Now this student was an excel junky but approaching the data as a database allows us to structure the data, to think about information, the nature of sources and citation practices, and also to get major demographic data on her group and the things she’s working on.

Another thing we do at Princeton is to work with libraries and with catalogue data – thinking about data in MARC, MODS, or METS record, and thinking about the extract and reformatting of that data to query and rethink that data. And we work with librarians on information retrieval, and how that could be translated to research – book history perhaps. Princeton University library brought the personal library of philosopher Jaques Derrida – close to 19,000 volumes (thought it was about 15,000 until they were unpacked), so two projects are happening simultaneously. One is at the Centre for Digital Humanities, looking at how Derrida marked up the texts, and then went on to use and cite in Of Grammatology. The other is with BibFrame – a Linked Open Data standard for library catalogues, and they are looking at books sent to Derrida, with dedications to him. Now there won’t be much overlap of those projects just now – Of Grammatology was his first book so those dedicated/gifted books to him. But we are building our databases for both projects as Linked Open Data, all being added a book at a time, so the hope is that we’ll be able to look at any relationships between the books that he owned and the way that he was using and being gifted items. And this is an experiment to explore those connections, and to expose that via library catalogue… But the library wants to catalogue all works, not just those with research interest. And it can be hard to connect research work, with depth and challenge, back to the catalogue but that’s what we are trying to do. And we want to be able to encourage more use and access to the works, without the library having to stand behind the work or analyse the work of a particular scholar.

So, you can take a data structure like this, then set up your system with appropriate constraints and affordances that need to be thought about as they will shape what you can and will do with your data later on. Continents have particular locations, boundaries, shape files. But you can’t mark out the boundaries for empires and states. The Western boundary at this time is a very contested thing indeed. In my system states are merely groups of locations, so that I can follow mercantile power, and think from a political viewpoint. But I wanted a tool with broader use hence that other data. Locations seem very safe and neutral but they really are not, they are complex and disputed. Now for that reason I wanted this tool – Project Quincy – to have others using it, but that hasn’t happened yet… Because this was very much created for my research and research question…It’s my own little Mind Palace for my needs… But I have heard from a researcher looking to catalogue those letters, and that would be very useful. Systems like this can have interesting afterlives, even if they don’t have the uptake we want Open Source Digital Humanities tools to have. The biggest impact of this project has been that I have the schema online. Some people do use the American Foreign Correspondents databases – I am one of the few places you can find this information, especially about consuls. But that schema being shared online have been helping others to make their own system… In that sense the more open documentation we can do, the better all of our projects could be.

I also created those diagrams that you were seeing – with DAVILA, a programme that creates these allows you to create easy to read, easy to follow, annotated, colour coded visuals. They are prettier than most database diagrams. I hope that when documentation is appealing and more transparent,  that that will get used more… That additional step to help people understand what you’ve made available for them… And you can use documentation to help teach someone how to make a project. So when my student was creating her schema, it was an example I could share or reference. Having something more designed was very helpful.

Q&A

Q1) Can you say more about the Derrida project and that holy grail of hanging that other stuff on the catalogue record?

A1) So the BibFrame schema is not as flexible as you’d like, it’s based on MARC, but it’s Linked Open Data, it can be expressed in RDF or JSON… And that lets us link records up. And we are working in the same library so we can link up on people, locations, maybe also major terms, and on th eaccession id number too. We haven’t tried it yet but…

Q1) And how do you make the distinction between authoritative record and other data.

A1) Jill Benson(?) team are creating authoritative linked open data records for all of the catalogue. And we are creating Linked Open Data, we’ll put it in a relational database with an API and an endpoint to query to generate that data. Once we have something we’ll look at offering a Triple Store on an ongoing basis. So, basically it is two independent data structures growing side by side with an awareness of each other. You can connect via API but we are also hoping for a demo of the Derrida library in BibFrame in the next year or two. At least a couple of the books there will be annotated, so you can see data from under the catalogue.

Q1) What about the commentary or research outputs from that…

A1) So, once we have our data, we’ll make a link to the catalogue and pull in from the researcher system. The link back to the catalogue is the harder bit.

Q2) I had a suggestion for a geographic system you might be interested in called Pelagios… And I don’t know if you could feed into that – it maps historical locations, fictional locations etc.

A2) There is a historical location atlas held by Newbury so there are shapefiles. Last I looked at Pelagios it was concerned more with the ancient world.

Comment) Latest iteration of funding takes it to Medieval and Arabic… It’s getting closer to your period.

A2) One thing that I really like about Pelagios is that they have split locations from their name, which accommodates multiple names, multiple imaginings and understandings etc. It’s a really neat data model. My model is more of a hack together – so in mine “London” is at the centre of modern London… Doesn’t make much sense for London but I do similar for Paris, that probably makes more sense. So you could go in deeper… There was a time when I was really interested in where all of Jay’s London Correspondents were… That was what put me into thinking about networking analysis… 60 letters are within London alone. I thought about disambiguating it more… But I was more interested in the people. So I went down a Royal Mail in London 1794 rabbit hole… And that was interesting, thinking about letters as a unit of information… Diplomatic notes fix conversations into a piece of paper you can refer to later – capturing the information and decisions. They go back and forth… So the ways letters came and went across London – sometimes several per day, sometimes over a week within the city…. is really interesting… London was and is extremely complicated.

Q3) I was going to ask about different letters. Those letters in London sound more like memos than a letter. But the others being sent are more precarious, at more time delay… My background is classics so there you tend to see a single letter – and you’d commission someone like Cicero to write a letter to you to stick up somewhere – but these letters are part of a conversation… So what is the difference in these transatlantic letters?

A3) There are lots of letters. I treat letters capaciously… If there is a “to” or “from” it’s in. So there are diplomatic notes between John Jay and George Hammond – a minister not an ambassadors as the US didn’t warrant that. Hammond was bad at his job – he saw a war coming and therefore didn’t see value in negotiating. They exchange notes, forward conversations back and forth. My data set for my research was all the letters sent to Jay, not those sent by Jay. I wanted to see what information Jay had available. With Hammond he kept a copy of all his letters to Jay, as evidence for very petty disputes. The letters from the West Indies were from Nathanial Cabbot Dickinson, who was sent as an information collector for the US government. Jay was sent to Europe on the treaty…. So the kick off for Jay’s treaty is changes that sees food supplies to British West Indies being stopped. Hammond actually couldn’t find a ship to take evidence against admiralty courts… They had to go through Philadelphia, then through London. So that cluster of letters include older letters. Letters from the coast include complaints from Angry American consuls…. There are urgent cries for help from the US. There is every possible genre… One of the things I love about American history is that Jay needs all the information he can get. When you map letters – like the Republic of Letters project at Stanford – you have this issue of someone writing to their tailor, not just important political texts. But for diplomats all information matters… Now you could say that a letter to a tailor is important but you could also say you are looking to map the boundaries of intellectual history here… Now in my system I map duplicates sent transatlantically, as those really matter, not all arrived, etc. I don’t map duplicates within London, as that isn’t as notable and is more about after the fact archiving.

Q4) Did John Jay keep diaries that put this correspondance in context?

A4) He did keep diaries… I do have analysis of how John Quincy Adams wrote letters in his time. He created subject headings, he analysed them, he recreated a filing system and way of managing his letters – he’d docket his letters, noting date received. He was like a human database… Hence naming my database after him.

Q5) There are a couple of different types of a tool like this. There is your use and then there is reuse of the engineering. I have correspondance earlier than Jay’s, mainly centred on London… Could I download the system and input my own letters?

A5) Yes, if you go to eafsd.org you’ll find more information there and you can try out the system. The database is Project Quincy and that’s on GitHub (GPL 3.0) and you can fire it up in Django. It comes with a nice interface. And do get in touch and I’ll update you on the system etc. It runs in the Django framework, can use any database underneath it. And there may be a smaller tractable letter database running underneath it.

Comment) On BibFrame… We have a Library and Information Studies programme which we teach BibFrame as part of that. We set up a project with a teaching tool which is also on GitHub – its linked from my staff page.

A quick note as follow up:

If you have research software that you have created for your work, and which you are making available under open source license, then I would recommend looking at some of the dedicated metajournals that will help you raise awareness of your project and ensure it is well documented for others to reuse. I would particularly recommend the Journal of Open Research Software (which, for full disclosure, I sit on the Editorial Advisory Board for), or the Journal of Open Source Software (as recommended by the lovely Daniel S. Katz in response to my post).

 

Jun 292016
 

Today I am at theFlood and Coastal Erosion Risk Management Network (FCERM.net) 2016 Annual Assembly in Newcastle. The event brings together a really wide range of stakeholders engaged in flood risk management. I’m here to talk about crowd sourcing and citizen science, with both COBWEB and University of Edinburgh CSCS Network member hats on, as the event is focusing on future approaches to managing flood risk and of course citizen science offers some really interesting potential here. 

I’m going to be liveblogging today but as the core flooding focus of the day is not my usual subject area I particularly welcome any corrections, additions, etc. 

The first section of the day is set up as: Future-Thinking in Flood Risk Management:

Welcome by Prof Garry Pender

Welcome to our third and final meeting of this programme of network meetings. Back at our first Assembly meeting we talked about projects we could do together, and we are pleased to say that two proposals are in the process of submissions. For today’s Assembly we will be looking to the future and future thinking about flood risk management.  There is a lot in the day but also we encourage you to discuss ideas, form your own breakout groups if you want.

And now onto our first speaker. Unfortunately Prof Hayley Fowler, Professor of Climate Change Impacts, Newcastle University cannot be with us today. But Chris Kilby has stepped in for Hayley.

Chris Kilby, Newcastle University – What can we expect with climate change? 

Today is 29th June, which means that four years ago today we had the “Toon Monsoon” –  around 50mm in 2 hours and the full city was in lockdown. We’ve seen some incidents like this in the last year, in London, and people are asking about whether that is climate change. And that incident has certainly changed thinking and practice in the flood risk management community. It’s certainly changed my practice – I’m now working with sewer systems which is not something I ever imagined.

Despite spending millions of pounds on computer models, the so-called GCMs, these models seem increasingly hard to trust as the academic community realise how difficult to predict flooding risk actually is. It is near impossible to predict future rainfall – this whole area is riven with great uncertainty. There is a lot of good data and thinking behind them, but I now have far more concern about the usefulness of these models than 20 years ago – and that’s despite the fact that these models are a lot better than they were.

So, the climate is changing. We see some clear trends both locally and globally. A lot of these we can be confident of. Temperature rises and sea level rise we have great confidence in those trends. Rainfall seasonality change (more in winter, less in summer), and “heavy rainfall” in the UK at least, has been fairly well predicted. What has been less clear is the extremes of rainfall (convective), and extremes of rainfall like the Toon Monsoon. Those extremes are the hardest to predict, model, reproduce.

The so called UKCP09 projections, from 2009, are still there and are still the predictions being used but a lot has changed with the models we use, with the predictions we are making. We haven’t put out any new projections – although that was originally the idea when UK CP09 projections came  out. So, officially, we are still using UKCP09. That produced coherant indications of more frequent and heavy rainfall in the UK. And UKCP09 suggests 15-20% increased in Rmed in winter. But these projections are based on daily rainfall, what was not indicated here was the increased hourly rate. So some of the models looking at decreased summer rainfall, which means a lower mean rainfall per hour, but actually that isn’t looking clear anymore. So there are clear gaps here, firstly with hourly level convective storms, and all climate models have the issue of when it comes to “conveyer belt” sequences of storms, it’s not clear models reliably reproduce these.

So, this is all bad news so far… But there is some good news. More recent models (CMIP5) suggest some more summer storms and accommodate some convective summer storms. And those newer models – CMIP5 and those that follow – will feed into the new projections. And some more good news… The models used in CP09, even high resolution regional models, ran on a resolution of 25km and downscaled using weather generator to 5km but no climate change information beyond 25km. Therefore within the 25km grid box the rain fall is averaged and doesn’t adequately resolve movement of air and clouds, adding a layer of uncertainty, as computers aren’t big/fast enough to do a proper job of resolving individual cloud systems. But Hayley, and colleagues at the Met Office, have been running higher resolution climate models which are similar for weather forecasting models at something like a 1.5km grid size. Doing that with climate data and projecting over the long term do seem to resolve the convective storms. That’s good in terms of new information. Changes look quite substantial: summer precipitation intensities are expected to increase by 30-40% for short duration heavy events. That’s significantly higher than UKCP09 but there are limitations/caveats here too… So far the simulations are on the South East of England only, simulations have been over 10 years in duration, but we’d really want more like 100 year model. And there is still poor understanding of the process and of what makes a thunderstorm – thermodynamic versus circulation changes may conflict. Local thermodynamics are important but that issue of circulation, the impacts of large outbreaks of warm air from across the continent, and that conflict between those processes is far from clear in terms of what makes the difference.

So, Hayley has been working on this with the Met Office, and she now has an EU project with colleagues in the Netherlands which is producing interesting initial results. There is a lot still to do but it does look like a larger increase in convection than we’d previously thought. Looking at winter storms we’ve seen an increase over the last few years. Even the UKCP09 models predicted some of this but so far we don’t see a complete change attributable to climate change.

Now, is any of this new? Our working experience and instrumental records tend to only go back 30-40 years, and that’s not long enough to understand climate change. So this is a quick note of historical work which has been taking place looking at Newcastle flooding history. Trawling through the records we see that the Toon Monsoon isn’t unheard of – we’ve had them three or four times in the last century:

  • 16th Set 1913 – 2.85 inches (72mm) in 90 minutes
  • 22nd June 1941 – 1.97 inches (50mm) in 35 minutes and 3.74 inches (95mm) in 85 minutes
  • 28th June 2012 – 50mm in 90 minutes

So, these look like incidents every 40 years or so looking at historic records. That’s very different from the FEH type models and how they account for Fluvial flooding, storms, etc.

In summary then climate models produce inherently uncertain predictions but major issues remain with extremes in general, and hourly rainfall extremes. Overall picture that is emerging is of increasing winter rainfall (intensity and frequency), potential for increased (summer) convective rainfall, and in any case there is evidence that climate variability over the last century has included similar extremes to those observed in the last decade.

And the work that Hannah and colleagues are working on are generating some really interesting results so do watch this space for forthcoming papers etc.

Q&A

Q1) Is that historical data work just on Newcastle?

A1) David has looked at Newcastle and some parts of Scotland. Others are looking at other areas though.

Q2) Last week in London on EU Referendum day saw extreme rainfall – not as major as here in 2012 – but that caused major impacts in terms of travel, moving of polling station etc. So what else is taking place in terms of work to understand these events and impacts?

A2) OK, so impacts wise that’s a bit difference. And a point of clarification – the “Toon Monsoon” wasn’t really a Monsoon (it just rhymes with Toon). Now the rainfall in London and Brighton being reported looked to be 40mm in an hour, which would be similar or greater than in Newcastle so I wouldn’t downplay them. The impact of these events on cities particularly is significant. In the same way that we’ve seen an increase in fluvial flooding in the last ten years, maybe we are also seeing an increase in these more intense shorter duration events. London is certainly very vulnerable – especially with underground systems. Newcastle Central here was closed because of water ingress at the front – probably wouldn’t happen now as modifications have been made – and metro lines shut. Even the flooding event in Paris a few weeks back was most severely impacting the underground rail/metro, road and even the Louvre. I do worry that city planners have build in vulnerability for just this sort of event.

Q3) I work in flood risk management for Dumfries and Galloway – we were one of the areas experiencing very high rainfall. We rely heavily in models, rainfall predictions etc. But we had an event on 26th/27th January that wasn’t predicted at all – traffic washed off the road, broke instrument peaks, evacuations were planned. SEPA and the Met office are looking at this but there is a gap here to handle this type of extreme rainfall on saturated ground.

A3) I’m not aware of that event, more so with flooding on 26th December which caused flooding here in Newcastle and more widespread. But that event does sound like the issue for the whole of that month for the whole country. It wasn’t just extreme amounts of daily rainfall, but it was the fact that the previous month had also been very wet. That combination of several months of heavy rainfall, followed by extreme (if not record breaking on their own) events really is an issue – it’s the soul of hydrology. And that really hasn’t been recognised to date. The storm event statistics tend to be the focus rather than storms and the antecedent conditions. But this comes down to flood managers having their own rules to deal with this. In the North East this issue has arisen with the River Tyne where the potential for repurposing rivers for flood water retention has been looked at – but you need 30 day predictions to be able to do that. And if this extreme event following a long period of rain really changes that and poses challenges.

Comment – Andy, Environment Agency) Just to note that EA DEFRA Wales have a programme to look at how we extend FEH but also looking at Paleo Geomorphology to extend that work. And some interesting results already.

Phil Younge, Environment Agency – The Future of Flood Risk Management

My role is as Yorkshire Major Incident Recovery Manager, and that involves three things: repairing damage; investing in at-risk communities; and engaging with those communities. I was brought in to do this because of another extreme weather event, and I’ll be talking about the sorts of things we need to do to address these types of challenges.

So, quickly, a bit of background on the Environment Agency. We are the National flood risk agency for England. And we have a broad remit including risk of e.g. implications of nuclear power stations, management of catchment areas, work with other flood risk agencies etc. And we directly look after 7100 km of river, coastal and tidel raised defences; 22,600 defences, with assets worth over 20 billion. There are lots of interventions we can make to reduce the risk to communities. But how do we engage with communities to make them more resiliant for whatever the weather may throw at them? Pause on that thought and I’ll return to it shortly.

So I want to briefly talk about the winter storms of 2015-16. The Foss Barrier in York is what is shown in this image – and what happened there made national news in terms of the impact on central York. The water levels were unprecedentedly high. And this event was across the North of England, with record river levels across the region and we are talking probably 1 metre higher than we had experienced before, since records began. So the “what if” scenarios are really being triggered here. Some of the defences built as a result of events in 2009 were significantly overtopped, so we have to rethink what we plan for in the future. So we had record rainfall, 14 catchments experienced their highest ever river flow. But the investment we had put in made a difference, we protected over 20,000 properties during storms Desmond and Eva – even though some of those defences have been overtopped in 2009. We saw 15k households and 2,600 businesses flooded in 2009, with 150 communities visited by flood support officers. We issued 92 flood warnings – and we only do that when there is a genuine risk to loss of life. We had military support, temporary barriers in place, etc. for this event but the levels were truly unprecedented.

Significant damage was done to our flood defences across the North of England. In parts of Cumbria the speed and impact of the water, the force and energy of that water, have made huge damage to roads and buildings. We have made substantial work to repair those properties to the condition they were in before the rain. We are spending around £24 million to do that and do it at speed for October 2016.

But what do we do about this? Within UK PLC how do we forecast and manage the impact and consequence of flooding across the country? Following the flooding in Cumbria Oliver Letwin set up the Flood Risk Resilience Review, to build upon the plans the Environment Agency and the Government already has, to look at what must be done differently to support communities across the whole of England. The Review has been working hard across the last four months, and there are four strands I want to share:

  • Modelling extreme weather and stress testing resilience to flood risk – What do we plan for? What is a realistic and scalable scenario to plan for? Looking back at that Yorkshire flooding, how does that compare to existing understanding of risk. Reflecting on likely extreme consequences as a yardstick for extreme scenarios.
  • Assessing the resilience of critical local infrastructure – How do we ensure that businesses still run, that we can run as a community. For instance in Leeds on Boxing Day our telecommunications were impacted by flooding. So how can we address that? How do we ensure water supply and treatment is fit for these events? How can we ensure our hospitals and health provision is appropriate? How can we ensure our transport infrastructure is up and running. As an aside the Leeds Boxing Day floods happened on a non working day – the Leeds rail station is the second busiest outside London so if that had happened on a working day the impact could have been quite different, much more severe.
  • Temporary defences – how can we move things around the country to reduce risk as needed, things like barriers and pumps. How do we move those? How do we assess when they are needed? How do we ensure we had the experience and skills to use those temporary defences? A review by the military has been wrapped into this Resilience Review.
  • Flood risk in core cities – London is being used as a benchmark, but we are also looking at cities like Leeds and how we invest to keep these core key cities operating at times of heightened flood risk.

So, we are looking at these areas, but also how we can help our communities to be more resilient. The Environment Agency are looking at community engagement and that’s particularly part of what we are here to do, to develop and work with the wider FCERM community.

We do have an investment programme from 2015-2021 which includes substantial capital investment. We are investing significantly in the North of England (e.g. £54 per person for everyone in Yorkshire, Lancashire, and Cumbria, also East Midlands and Northumbria. And that long planning window is letting us be strategic, to invest based on evidence of need. And in the Budget 2016 there was an additional £700 million for flood risk management to better protect 4,745 homes and 1,700 businesses. There will also be specific injections of investment in places like Leeds, York, Carlisle etc. to ensure we can cope with incidents like we had last year.

One thing that really came out of last year was the issue of pace. As a community we are used to thinking slowly before acting, but there is a lot of pressure from communities and from Government to act fast, to get programmes of work underway within 12 months of flooding incidents. Is that fast? Not if you live in an affected area, but it’s faster than we may be used to. That’s where the wealth of knowledge and experience needs to be available to make the right decisions quickly. We have to work together to do this.

And we need to look at innovation… So we have created “Mr Nosy”, a camera to put down culverts(?) and look for inspect them. We used to (and do) have teams of people with breathing apparatus etc. to do this, but we can put Mr Nosy down so that a team of two can inspect quickly. That saves time and money and we need more innovations that allow us to do this.

The Pitt  Review (2008) looked at climate change and future flood and coastal risk management discussed the challenges. There are many techniques to better defend a community, we need the right blend of approach: “flood risk cannot be managed by building ever bigger “hard” defences”; natural measures are sustainable; multiple benefits for people, properties and wildlife; multi-agency approach is the way forward. Community engagement is also crucial to inform the community to understand the scale of the risk, to understand how to live with risk in a positive way. So, this community enables us to work with research, we need that community engagement, and we need efficiency – that big government investment needs to be well spent, we need to work quickly and to shortcut to answers quickly but those have to be the right answers. And this community is well placed to help us ensure that we are doing the right things so that we can assure the communities, and assure the government that we are doing the right things.

Q&A

Q1) When is that Review due to report?

A1) Currently scheduled for middle of July, but thereabouts.

Q2) You mentioned the dredging of watercourses… On the back of major floods we seem to have dredging, then more 6 weeks lately. For the public there is a perception that that will reduce flood risk which is really the wrong message. And there are places that will continue to flood – maybe we have to move coastal towns back? You can’t just keep building walls that are bigger and bigger.

A2) Dredging depends so much on the circumstances. In Calderdale we are making a model so that people can understand what impact different measures have. Dredging helps but it isn’t the only things. We have complex hydro-dynamic models but how do we simply communicate how water levels are influenced, the ways we influence the river channel. And getting that message across will help us make changes with community understanding. In terms of adaptation I think you are spot on. Some communities will probably adapt because of that, but we can’t just build higher and higher walls. I am keen that flood risk is part of the vision for a community, and how that can be managed. Historically in the North East cities turned their backs on the river, as water quality has improved that has changed, which is great but brings its own challenges.

Q3) You mentioned a model, is that a physical model?

A3) Yes, a physical model to communicate that. We do go out and dredge where it is useful, but in many cases it is not which means we have to explain that when communities think it is the answer to flooding. Physical models are useful, apps are good… But how do we get across some of the challenges we face in river engineering.

Q4) You talked about community engagement but can you say more about what type of engagement that is?

A4) We go out into the communities, listen to the experiences and concerns, gathering evidence, understanding what that flooding means for them. Working with the local authorities those areas are now producing plans. So we had an event in Calderdale marking six months since the flood, discussing plans etc. But we won’t please all the people all of the time, so we need to get engagement across the community. And we need that pace – which means bringing the community along, listening to them, bringing into our plans… That is challenging but it is the right thing to do. At the end of the day they are the people living there, who need to reassured about how we manage risk and deliver appropriate solutions.

The next section of the day looks at: Research into Practice – Lessons from Industry:

David Wilkes – Global Flood Resilience, Arup – Engineering Future Cities, Blue-Green Infrastructure

This is a bit of an amalgam of some of the work from the Blue-Green Cities EPSRC programme, which I was on the advisory board of, and some of our own work at Arup.

Right now 50% of the global population live in cities – over 3.2 billion people. As we look forward, by the middle of this century (2050) we are expecting growth so that around 70% of the world population will live in cities, so 6.3 billion.

We were asked a while ago to give some evidence to the Third Inquiry of the All Party Parliamentary Group for Excellence in the Built Environment info flood migration and resilience, and we wanted to give some clear recommendations that: (1) Spatial planning is the key to long term resilience; (2) Implement programme of improved surface water flood hazard mapping; (3) Nurture capacity within professional community to ensure quality work in understanding flood risk takes place, and a need to provide career paths as part of that nurturing.

We were called into New York to give some support after Hurricane Sandy. They didn’t want a major reaction, a big change, instead they wanted a bottom up resilient approach, cross cutting areas including transportation, energy, land use, insurance and infrastructure finance. We proposed an iterative cycle around: redundancy; flexibility; safe failure; rapid rebound; constant learning. This is a quantum shift from our approach in the last 100 years so that learning is a crucial part of the process.

So, what is a Blue-Green city? Well if we look at the January 2014 rainfall anomaly map we see the shift from average annual rainfall. We saw huge flooding scarily close to London at that time, across the South East of England. Looking at the December 2015 we see that rainfall anomaly map again showing huge shift from the average, again heavily in the South East, but also South West and North of England. So, what do we do about that? Dredging may be part of this… But we need to be building with flood risk in mind, thinking laterally about what we do. And this is where the Blue-Green city idea comes in. There are many levels to this: Understand water cycle at catchment scale; Align with other drivers and development needs; identify partners, people who might help you achieve things, and what their priorities are; build a shared case for investment and action; check how it is working and learn from experience.

Looking, for instance, at Hull we see a city long challenged by flooding. It is a low lying city so to understand what could be done to reduce risk we needed to take a multi faceted view across the long term: looking at frequency/likelihood of risk, understand what is possible, looking at how changes and developments can also feed into local development. We have a few approaches available… There is the urban model, of drainage from concrete into underground drainage – the Blue – and the green model of absorbing surface water and managing it through green interventions.

In the Blue-Green Cities research approach you need to work with City Authority and Community Communications; you need to Model existing Flood Risk Management; Understand Citizens Behaviour, and you need to really make a clear Business Case for interventions. And as part of that process you need to overcome barriers to innovation – things like community expectations and changes, hazards, etc. In Newcastle, which volunteered to be a Blue-Green city research area, we formed the Newcastle Learning and Action Alliance to build a common shared understanding of what would be effective, acceptable, and practical. We really needed to understand citizens’ behaviours – local people are the local experts and you need to tap into that and respect that. Please really value Blue-Green assets but only if they understand how they work, the difference that they make. And indeed people offered to maintain Blue-Green assets – to remove litter etc. but again, only if they value and understand their purpose. And the community really need to feel a sense of ownership to make Blue-Green solutions work.

It is also really important to have modelling, to show that, to support your business case. Options include hard and soft defences. The Brunton Park flood alleviation scheme included landscape proposals, which provided a really clear business case. OfWATT wanted investment from the energy sector, they knew the costs of conventional sewerage, and actually this alternative approach is good value, and significantly cheaper – as both sewer and flood solution – than the previous siloed approach. There are also Grey-Green options – landscaping to store water in quite straightforward purposes, more imaginative purposes, and the water can be used for irrigation, wash down, etc. Again, building the business case is absolutely essential.

In the Blue-Green Cities research we were able to quantify direct and indirect costs to different stakeholders – primary industry, manufacturing, petroleum and chemical, utilities sector, construction, wholesale and retail, transport, hotels and restaurants, info and communication, financial and professional, other services. When you can quantify those costs you really have a strong case for the importance of interventions that reduce risk, that manage water appropriately. That matters whether spending tax payers money or convincing commercial partners to contribute to costs.

Commission of Inquiry into flood resilience of the future: “Living with Water” (2015), from the All Party Group for Excellence in the Built Environment, House of Commons, talk about “what is required is a fundamental change in how we view flood management…”

Q&A

Q1) I wanted to ask about how much green we would have to restore to make a difference? And I wanted to ask about the idea of local communities as the experts in their area but that can be problematic…

A1) I wouldn’t want to put a figure on the green space, you need to push the boundaries to make a real difference. But even small interventions can be significant. If the Blue-Green asset interrupts the flood path, that can be hugely significant. In terms of the costs of maintaining Blue-Green assets, well… I have a number of pet projects and ideas and I think that things like parks and city bike ways, and to have a flood overflow that also encourages the community to use it, will clearly be costlier than flat tarmac. But you can get Sustrans, local businesses, etc. to support that infrastructure and, if you get it right, that supports a better community. Softer, greener interventions require more maintenance but that can give back to the community all year round, and there are ways to do that. You made another point about local people being the experts. Local people do know about their own locality. Arguably as seasoned professionals we also know quite a bit. The key thing is to not be patronising, not to pretend you haven’t listened, but to build concensus, to avoid head to head dispute, to work with them.

Stephen Garvin, Director Global Resilience Centre, BRE – Adapting to change – multiple events and FRM

I will be talking about the built environment, and adaptations of buildings for flood resilience. I think this afternoon’s workshops can develop some of these areas a bit. I thought it would be good to reflect on recent flooding, and the difficulty of addressing these situations. The nature of flooding can vary so greatly in terms of the type and nature of floods. For instance the 2007 floods were very different from the 2012 flooding and fro the 2014 floods in terms of areas effected, the nature of the flood, etc. And then we saw the 2015/16 storms – the first time that every area at risk of flooding in Scotland and the North of the UK flooded – usually not all areas are hit at once.

In terms of the impact, water damage is a major factor. So, for instance in Cumbria in 2015, we had record rainfall, over-topped defences on the Rivers Eden and Petteril, and water depths of 1.5m in some properties in Carlisle. That depth of flooding was very striking. A lot of terraced properties, with underfloor voids, were affected in Carlisle. And water was coming in from every direction. We can’t always keep water from coming in, so in some ways the challenge is getting water out of the properties. How do we deal with it? Some of these properties had had flood resilience measures before – such as raising the height of electrical sockets – but they were not necessarily high enough or useful enough in light of the high water. And properties change hands, are rented to new tenants, extensions are added – the awareness isn’t consistently there and some changes increase vulnerability to flooding.

For instance, in one property the less severe floods of 2005 had led to flood prevention measures being put in place – door surrounds, airbrick covers – and yet despite those measures water inundated the property. Why? Well, a conservatory had been added which, despite best efforts to seal it, let in a great deal of water. They had also added an outdoor socket for a tumble dryer only a few feet off the ground. So we have to think about these measures – are they appropriate? Do they manage the risk sufficiently? How do we handle the flood memory? You can have a flood resilient kitchen installed, but what happens when it is replaced?

There are two approaches really: Flood resilience essentially allows the water to come in, but the building and its materials are able to recover from flooding; by comparison Flood resistance is about keeping water out, dry proof materials etc. And there are two dimensions here as we have to have a technical approach in design, construction, flood defences, sustainable approaches to drainage; and non-technical approaches – policy, regulation, decision making and engagement, etc. There are challenges here – construction firms are actually very small companies on the whole – more than 5 people is a big company. And we see insurers who are good at swinging into action after floods, but they do not always consider resilience or resistance that will have a long term impact, so we are working to encourage that approach, that idea of not replacing like for like but replacing with better, more flood resilient or resistant options. For instance there are solutions for apertures that are designed to keep water out to high depths – strong PVC doors, reinforced, and multi-point lockable for instance. In Germany, in Hamburg, they have doors like this (though with perforated brickwork several feet higher!). You can also change materials and designs of e.g. power sockets, service entries, etc.

Professor Eric Nehlsen came up with the idea of cascading flood compartments with adaptive response, starting from adaptation to flooding through dry and wet-proofing (where we tend to work) through to more ambitious ideas like adaptation by floating and amphibious housing… Some US coastal communities take the approach of raising properties off the ground, or creating floating construction, particularly where hurricanes occur, but that doesn’t feel like the right solution in many cases here… But we have to understand and consider alternative approaches.

There are standards for flood repair – supported by BRE and industry – and there are six standards that fit into this area, which outline approaches to flood risk assessment, planning for FRR, property surveys, design and specification of flood resilient repair, construction work, and maintenance and operation (some measures require maintenance over time). I’m going to use those standards for an FRR demonstration. We have offices in Watford in a Victorian terrace, a 30 square metre space where we can test cases – we have done this for energy efficiency before, and have now done it for flooding. This gives us a space to show what can be achieved, what interventions can be made, to help insurers, construction and policy makers see the possibilities. The age of the building means it is a simple construction – concrete floor and brick walls – so nothing fancy here. You can imagine some tests of materials, but there are no standards for construction products for repair and new builds for flood resistance and resilience. It is still challenging to drive adoption though – essentially we have to disrupt normal business and practice to see that change to resistant or resilient building materials.

Q&A

Q1) One of the challenges for construction is that insurance issue of replacing “like for like”…

A1) It is a major challenge. Insurance is renewed every year, and often online rather than through brokers. We are seeing some insurers introducing resilience and resistance but not at wide scale yet. Flood resilience grants through ECLG for Local Authorities and individuals are helpful, but there is no guarantee of that continuing. And otherwise we need to make the case to the property owner, but that raises issues of affordability, cost, accessibility. So, a good question really.

Jaap Flikweert – Flood and Coastal Management Advisor, Royal HaskoningDHV – Resilience and adaptation: coastal management for the future

I’m going to give a practitioner’s perspective on ways of responding to climate change. I will particularly talk about adaptation, which tends to span three different areas/meanings: Protection (reduce likelihood); Resilience (reduce consequence); and Adaptation, which I’m going to bluntly call “Relocation” (move receptors away). And I’ll talk about inland flooding, coastal flooding and coastal erosion. I tend not to dwell on coastal erosion because if we focus only on risk we can miss the opportunities. But I will be talking about risk – and I’ll be highlighting some areas for research as I talk.

So, how do we do our planning – our city planning – to manage the risk? I think the UK – England and Wales especially – is in the lead here in terms of Shoreline Management Plans: they take a long term and broad scale view, there is a policy for coastal defence (HtL (Hold the Line) / MR (Managed Realignment) / NAI (No Active Intervention)), and there is strong interaction with other sectors. Scotland is making progress here too. But there is a challenge in being flexible, in thinking about the process of change.

Setting future plans can be challenging – there is a great deal of uncertainty in terms of climate change, in terms of finances. We used to talk about a precautionary approach but I think we need to talk about “Managed-adaptive” approaches with decision pathways. For instance The Thames Barrier is an example of this sort of approach. This isn’t necessarily new work, there is a lot of good research to date about how to do this but it’s much more about mainstreaming that understanding and approach.

When we think about protection we need to think about how we sustain defences in a future with climate change. We will see loading increase (but the extent is uncertain); value at risk will increase (but the extent is uncertain); and we will see coastal squeeze and longshore impacts. We will see our beaches disappear – with both environmental and flood risk implications. An example from the Netherlands shows HtL is feasible and affordable up to about 6m of sea level rise, with sandy solutions (which also deal with coastal squeeze), though radical innovation is of vital importance.

We can’t translate that to the UK, it is a different context, but we need to see this as inspirational. In the UK we won’t hold the line for ever… So how do we deal with that? We can think about the structures, and I think there is a research opportunity here about how we justify buying time for adaptation, how we design for short life (~20 years), and how we develop adaptable solutions. We can’t Hold the Line forever, but some communities are not ready for that change so we have to work on what we can achieve and how.

In terms of Resilience we need to think about coastal flooding – in principle not different from inland flooding, design to minimise impact, but in practice that is more difficult, with lower chance/higher consequence events raising challenges of less awareness, and more catastrophic results if it does happen. New Orleans would be a pertinent example here. And when we see raised buildings – as David mentioned – those aren’t always suitable for the UK; they change how a community looks, which may not be acceptable… Coastal erosion raises its own challenges too.

When we think of Adaptation/Relocation we have to acknowledge that protection is always technically possible – but what if it is unaffordable or unsustainable? For example, a disaster in Grantham, Queensland saw a major event in January 2011 lead to protective measures and then the whole community moving inland in December 2011. There wasn’t a delay on funding etc. as this was an emergency – it forced the issue. But how can we do that in a planned way? We have the Coastal Change Pathfinders. This approach is very valuable, including actual relocation, awareness, engagement lessons, and policy innovation. But the approach is very difficult to mainstream because of funding, awareness, planning policy, and local authority capacity. And here too I see research opportunities around making the business case for adaptation/relocation.

To take an example here that a colleague is working on: Fairbourne, Gwynedd, on the west coast of Wales, is a small community with a few buildings from the 1890s which has grown to 400 properties and over 800 people. Coastal defences were improved in 1981, and again in 2012. But this is a community which really shouldn’t be in that location in the long term – they are in the middle of a flood plain. The Parish Council has adopted an SMP policy which has goals across different timescales: in the short term, Hold the Line; in the medium term, Managed Realignment; and in the long term, No Active Intervention. There is a need to plan now to look at how we move from one position to another… So it isn’t dissemination that is needed here, it is true communication and engagement with the community, and identifying who that community is to ensure that is effective.

So, in closing, I think there is research needed around: design for short life; consultation and engagement – about useful work done, lessons learned, moving from informing to involving to ownership, defining what a community is; and making the business case for supporting adaptation/relocation – investment in temporary protection to buy time, investment in increasing communities’ adaptive capacity, and the value of being prepared vs unprepared – damage (to the nation) such as lack of mobility, employability, burden on health and social services. And I’d like to close with the question: should we consider relocation for some inland areas at risk of flooding?

Q&A

Q1) That closing question… I was driving to a settlement in our area which has major flood risk and is frequently cut off by snow in the winter. There are few jobs there, it is not strategically key although it has a heritage value perhaps. We could be throwing good money after bad to protect a small settlement like that which makes a minimal contribution. So I would agree that we should look at relocation of some inland properties. Also, kudos to the parish council of Fairbourne for adopting that plan. We face real challenges as politicians are elected on 5 year terms, and getting them persuaded that they need to get communities to understand the long term risks and impacts is really challenging.

A1) I think no-one would claim that Fairbourne was an easy process. The Council adopted the SMP, but who goes to parish meetings? BBC Wales picked it up – and rather misreported the timelines – but that raised interest hugely. It’s worth noting that a big part of Gwynedd and mid Wales faces these challenges. Understanding what we preserve, where investment goes… How do we live with the idea of people living below sea level? The Dutch manage that, but in a very different way, and it’s the full nation who are on board – very different in the UK.

Q2) What about adopting Dutch models for managing risk here?

A2) We’ve been looking at various ways that we can learn from Dutch approaches, and how that compares and translates to a UK context.

And now, in a change to plans, we are rejuggling the event to do some reflection on the network – led by Prof. Garry Pender – before lunch. We’ll return with 2 minute presentations after that. Garry is keen that all attending complete the event feedback forms on the network, the role of the network, resources and channels such as the website, webinars, events, etc. I am sure FCERM.net would also welcome comments and feedback by email from those from this community who are not able to attend today. 

Sharing Best Practice – Just 2-minutes – Mini presentations from delegates sharing output, experience and best practice

 

I wasn’t able to take many notes from this session, as I was presenting a 2 minute session on behalf of my COBWEB colleague Barry Evans (Aberystwyth University), on our co-design work and research associated with our collaboration with the Tal-y-bont Floodees in Mid-Wales. In fact various requirements to re-schedule the day meant that the afternoon was more interactive but also not really appropriate for real-time note-taking so, from here on, I’m summarising the day. 

At this point in the day we moved to the Parallel Breakout sessions on Tools for the Future. I am leading Workshop 1 on crowd sourcing so won’t be blogging them, but include their titles here for reference:

  • Workshop 1 – Crowd-Sourcing Data and Citizen Science – An exploration of tools used to source environmental data from the public, led by Nicola Osborne (CSCS Network) with case studies from SEPA. Slides and resources from this session will be available online shortly.
  • Workshop 2 – Multi-event modelling for resilience in urban planning – An introduction to tools for simulating multiple storm events with consideration of the impacts on planning in urban environments, with case studies from BRE and Scottish Government
  • Workshop 3 – Building Resilient Communities – Best-practice guidance on engaging with communities to build resilience, led by Dr Esther Carmen with case studies from the SESAME project

We finished the day with a session on Filling the Gaps – Future Projects:

Breakout time for discussion around future needs and projects

I joined a really interesting Community Engagement breakout session, considering research gaps and challenges. Unsurprisingly much of the discussion centred on what we mean by community and how we might go about identifying and building relationships with communities. In particular there was a focus on engaging with transient communities – thinking particularly about urban and commuter areas where there are frequent changes in the community. 

Final Thoughts from FCERM.net – Prof. Garry Pender 

As the afternoon was running behind Garry closed with thank yous to the speakers and contributors to the day. 

Jun 272016
 
This afternoon I’m at the eLearning@ed/LTW monthly Showcase and Network event, which this month focuses on Assessment and Feedback.
I am liveblogging these notes so, as usual, corrections and updates are welcomed. 
The wiki page for this event includes the agenda and will include any further notes etc.: https://www.wiki.ed.ac.uk/x/kc5uEg
Introduction and Updates, Robert Chmielewski (IS Learning, Teaching and Web)
Robert consults around the University on online assessment – and there is a lot of online assessment taking place. And this is an area that is supported by everybody. Students are interested in submitting and receiving feedback online, but we also have technologists who recognise the advantages of online assessment and feedback, and we have the University as a whole seeing the benefits around, e.g. clarity over meeting timelines for feedback. The last group here is the markers and they are more and more appreciative of the affordances of online assessment and feedback. So there are a lot of people who support this, but there are challenges too. So, today we have an event to share experiences across areas, across levels.
Before we kick off I wanted to welcome Celeste Houghton. Celeste: I am the new Head of Academic Development for Digital Education at the University, based at IAD, and I’m keen to meet people, to find out more about what is taking place. Do get in touch.
eSubmission and eFeedback in the College of Humanities and Social Science, Karen Howie (School of History, Classics & Archaeology)
This project started about 2-3 years back, in February 2015. The College of Humanities and Social Sciences wants 100% electronic submission/feedback where “pedagogically appropriate” by the 2016/17 academic year. Although I’m saying electronic submission/feedback, the in-between marking part hasn’t been prescribed. The project board for this work includes myself, Robert and many others, any of whom you are welcome to contact with any questions.
So, why do this? Well there is a lot of student demand for various reasons – legibility of comments; printing costs; enabling remote submission. For staff the benefits are more debatable but they can include (as also reported by Jisc) increased efficiency, and convenience. Benefits for the institution (again as reported by Jisc) include measuring feedback response rates, and efficiencies that free up time for student support.
Now some parts of CHSS are already doing this at the moment. Social and Political Studies are using an in-house system. Law are using GradeMark. And other schools have been running pilots, most of them with GradeMark, and these have been mostly successful. But we’ve had lots of interesting conversations around these technologies, around quality of assessment, about health and safety implications of staring at a screen more.
We have been developing a workflow and process for the college but we want this to be flexible to schools’ profiles – so we’ve adopted a modular approach that allows for handling of groups/tutors; declaration of own work; checking for non-submitters; marking sheets and rubrics; moderation, etc. And we are planning for the next year ahead, working closely with the Technology Enhanced Learning group in HSS. We are having some training – for markers it’s a mixture of in-School training with College input/support; and for administrators it’s by learning technologists in the school or through discussions with IS LTW EDE. To support that process we have screencasts and documentation currently in development. PebblePad isn’t part of this process, but will be.
To build confidence in the system we’re doing some myth busting etc. For instance, on anonymity vs pastoral care issues – a receipt dropbox has been created, and we have an agreement with EUSA that we can deanonymise if identification is not provided. And we have also been looking at various other regulations etc. to ensure we are complying and/or interpreting them correctly.
So, those pilots have been running. We’ve found that depending on your processes the administration can be complex. Students have voiced concerns around “generic” feedback. Students were anxious – very anxious in some cases. It is much quicker for markers to get started with marking, as soon as the deadline has passed. But there are challenges too – including when networks go down; for instance there was an (unusual) DDoS attack during our pilots that impacted our timeline.
Feedback from students seems relatively good. 14 out of 36 felt quality of marking was better than on paper – but 10 said it was less good. 29 out of 36 said feedback was more legible. 10 felt they had received more feedback than normal, 11 less. 3 out of 36 would rather submit on paper, 31 would rather submit online. In our first pilot with first year students around 10% didn’t look at feedback for their essay, and 36% didn’t look at tutorial feedback. In our second pilot about 10% didn’t look at feedback for either assignment.
Markers reported finding the electronic marking easier, but some felt that the need to work on screen was challenging or less pleasant than marking on paper.
Q&A
Q1) The students who commented on less or more feedback than normal – what were they comparing to?
A1) To paper-based marking, which they would have had for other courses. So when we surveyed them they would have had some paper-based and some electronic feedback already.
Q2) A comment about handwriting and typing – I read a paper that said that on average people write around 4 times more words when typing than when hand writing. And in our practice we’ve found that too.
A2) It may also be student perceptions – it looks like less but is actually quite a lot of work. I was interested in students’ expectations that 8 days was a long time to turn around feedback.
Q2) I think that students need to understand how much care has been taken, and that that adds to how long these things take.
Q3) You pointed out that people were having some problems and concerns – like health and safety. You are hoping for 100% take up, and there is also that backdrop of the Turnitin updates… Are there future plans that will help us to move to 100%?
A3) The health and safety thing came up again and again… But it’s maybe to do with how we cluster assignments. In terms of Turnitin there are updates, but those emerge rather slowly – there is a bit more competition now, and some frustration across the UK, so it looks likely that there will be more positive developments.
Q4) It was interesting that idea that you can’t release some feedback until it is all ready… For us in the Business School we ended up releasing feedback when there was a delay.
A4) In our situation we had some marks ready in a few days, others not due for two weeks. A few days would be fair, a few weeks would be problematic. It’s an expectation management issue.
Comment) There is also a risk that if marking is incomplete or partially done it can cause students great distress…
Current assessment challenges, Dr. Neil Lent (Institute for Academic Development)
My focus is on assessment and feedback. Initially the expectation was that I’d be focused on how to do assessment and feedback “better”. And you can do that to an extent but… The main challenge we face is a cultural rather than a technical challenge. And I mean technical in the widest sense – technological, yes, but also technical in terms of process and approach. I also think we are talking about “cultures” rather than “culture” when we think about this.
So, why are we focussing on assessment and feedback? Well, we have low NSS scores, low league table position and poor student experience reported around this area. Also issues of (un)timely feedback, low utility, and the idea that we are a research-led university and the balance of that with learning and teaching. Some of these areas are more myth than reality. I think as a university we now have an unambiguous focus on teaching and learning, but whether that has entirely permeated our organisational culture is perhaps arguable. When you have competing time demands it is hard to do things properly, and hard to find the space to actually design better assessment and feedback.
So how do we handle this? Well if we look at the “Implementation Staircase” (Reynolds and Saunders 1987) we can see that it comes from senior management, then to colleges, to schools, to programmes, to courses, to students. Now you could go down that staircase or you can go back up… And that requires us to think about our relationships with students. Is this model dialogic? Maybe we need another model?
Activity theory (Engestrom 1999) is a model for a group like a programme team, or course cohort, etc. So we have a subject here – it’s all about the individual in the context of an object, the community, mediating tool, rules and conventions, division of labour. This is a classic activity theory idea, with modern cultural aspects included. So for us the subject might be the marker, the object the assignment, the mediating tool something like the technological tools or processes, rules and conventions may include the commitment to return marks within 2 weeks, division of labour could include colleagues and sharing of marking, community could be students. It’s just a way to conceptualise this stuff.
A cultural resolution would see culture as practice and discourse. Review and reflection need to be an embedded and internalised way of life. We have multiple stakeholders here – it is not always the teacher or the marker. And we need a bit of risk taking – which is scary, and can feel at odds with the need to perform at a high level, but risk taking is needed. And we need to share best practice and experience at events such as this.
So there are technical things we could do better, do right. But the challenge we face is more of a collective one. We need to create time and space for staff to genuinely reflect on their teaching practice, to interact with that culture. But you don’t change practice overnight. And we have to think about our relationship with our students, and about how we encourage and enable them to be part of the process, building up their own picture of what good/bad work looks like. And then the subject, object, culture will be closer together. Sometimes real change comes from giving examples of what works, inspiring through those examples etc. Technological tools can make life easier, if you have the time to spend time to understand them and how to make them work for you.
Q&A
Q1) Not sure if it’s a question or comment or thought… But I’m wondering what we take from those NSS scores, and if that’s what we should work to or if we should think about assessment and feedback in a different kind of paradigm.
A1) When we think about processes we can kid ourselves that this is all linear, it’s cause and effect. It isn’t that simple… The other thing is about concentrating on giving feedback on time, so students can make use of it. But when it comes to the NSS it commodifies feedback, which challenges the idea of feedback as dialogic. There are cultural challenges for this. And I think that’s where risk, and the potential for interesting surprises, come in…
Q2) As a parent of a teenager I now wonder about personal resilience, to be able to look at things differently, especially when they don’t feel confident to move forwards. I feel that for staff and students a problem can arise and they panic, and want things resolved for them. I think we have to move past that by giving staff and students the resilience so that they can cope with change.
A2) My PhD was pretty much on that. I think some of this comes from the idea of relatively safe risk taking… That’s another kind of risk taking. As a sector we have to think that through. Giving marks for everything risks everything not feeling like a safe space.
Q3) Do we not need to make learning the focus?
A3) Schools and universities push the idea that grades and outcomes really matter, when actually we would say “no, the learning is what matters”, but that’s hard in the wider context in which the certificate in the hand is valued.
Comment) Maybe we need that distinction that Simon Riley talked about at this year’s eLearning@ed conference, of distinguishing between the task and the assignment. So you can fail the task but succeed that assignment (in that case referring to SLICCs and the idea that the task is the experience, the assignment is writing about it whether it went well or poorly).
Not captured in full here: a discussion around the nature of electronic submission, and students’ concern about failing at submitting their assignments or proof of learning… 
Assessment Literacy: technology as facilitator, Prof. Susan Rhind (Assistant Principal Assessment and Feedback)
I’m going to talk about assessment literacy, and about technology as a facilitator. I’m also going to talk about something I’m hoping you may be able to advise about.
So, what is assessment literacy? It is being talked about a lot in Higher Education at the moment. There is a book all about it (Price et al 2012) that talks about competencies and practices. For me what is most important is the idea of ensuring some practical aspects are in place, that students have an understanding of the nature, meaning and level of assessment standards, and that they have skills in self and peer assessment. The idea is to narrow the gap between students and teaching staff. Sadler (1989, 2010) and Boud and Molloy (2013) talk about students needing to understand the purpose of assessment and the process of assessment. It means understanding assessment as a central part of curriculum design (Medland 2016; Gibbs and Dunbar-Goddet 2009). We need assessment and feedback at the core, at the heart of our learning and teaching.
We also have to understand assessment in the context of quality of teaching and quality of assessment and feedback. For me there is a pyramid of quality (with programme at bottom, individual at top, course in the middle). When we talk about good quality feedback we have to conceptualise it, as Neil talked about, as a dialogic process. So there is individual feedback… But there is also course design and programme design in terms of assessment and feedback. No matter how good a marker is in giving feedback, it is much more effective when the programme design supports good quality feedback. In this model technology can be a facilitator. For instance I wanted to plug Fiona Hale’s Edinburgh Learning Design Roadmap (ELDeR) workshops and processes. This sort of approach lets us build for longer term improvement in these areas.
Again, thinking about feedback and assessment quality, and things that courses can do, we have a table here that compares different types of assessment, the minimum pre-assessment activity to ensure students have assessment literacy, enhancement examples, a minimum requirement for feedback, and some exemplars of marking students’ work.
An example here would be work we’ve done at the Vet School around student use of Peerwise MCQs – here students pushed for use in 3rd year, and for revision at the end of the programme. By the way if you are interested in assessment literacy, or have experience to share, we now have a channel for Assessment and Feedback, and for Assessment Literacy on MediaHopper.
Coming back to those exemplars of students’ work… We run Learning to be an Examiner sessions which students can take part in, and which include the opportunity to mark exemplars of students’ work. That leads to conversations, and an exchange of opinions, to understand the reasons behind the marking. And I would add that anywhere we can bring students and teaching staff closer together only benefits us and our NSS scores. The themes coming out of this work were real empathy for staff, and a quelling of fears. Students also noted that as they took part, the better they understood the requirements, the less important feedback felt.
There have been some trials using ACJ (Adaptive Comparative Judgement), which is the idea that with enough samples of work you can use pairwise comparison to put work into an order or ranking. So you present staff with several assignments and they can rank them. We ran this as an experiment as it provides a chance for students to see others’ work and compare it to their own. We ran a survey after this experiment and students valued seeing others’ responses, and also understanding others’ approaches to comparison and marking.
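To make the ACJ idea a little more concrete (this is my own illustrative sketch rather than anything shown at the event): given a set of pairwise judgements of the form “script X was preferred to script Y”, you can fit a simple quality score per script and sort on it. The sketch below assumes a plain Bradley-Terry fit in Python over made-up judgements; real ACJ tools also choose which pairs to present adaptively, which this sketch does not attempt.

from collections import defaultdict

def bradley_terry_ranking(comparisons, n_iter=100):
    """Estimate a quality score for each script from pairwise judgements.

    comparisons: list of (winner, loser) tuples, where each element is any
    hashable script identifier. Returns the scripts sorted best-first.
    This is a plain Bradley-Terry fit by iterative scaling, used here only
    to illustrate the ranking idea behind Adaptive Comparative Judgement.
    """
    wins = defaultdict(int)          # times each script was preferred
    pair_counts = defaultdict(int)   # times each unordered pair was compared
    items = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        items.update((winner, loser))

    scores = {item: 1.0 for item in items}
    for _ in range(n_iter):
        new_scores = {}
        for i in items:
            denom = 0.0
            for j in items:
                if i == j:
                    continue
                n_ij = pair_counts[frozenset((i, j))]
                if n_ij:
                    denom += n_ij / (scores[i] + scores[j])
            new_scores[i] = wins[i] / denom if denom else scores[i]
        total = sum(new_scores.values())
        scores = {k: v * len(items) / total for k, v in new_scores.items()}

    return sorted(items, key=scores.get, reverse=True)

if __name__ == "__main__":
    # Hypothetical judgements: each tuple is (preferred script, other script).
    judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"),
                  ("C", "D"), ("B", "D"), ("A", "C")]
    print(bradley_terry_ranking(judgements))  # e.g. ['A', 'B', 'C', 'D']

The point is simply that a reliable rank order can emerge from many quick relative judgements, without anyone ever assigning an absolute mark.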
So, my final point here is a call for help… As we think about what excites and encourages students, I would like to find a Peerwise-like system for free text questions. Student feedback was good, but they wanted to do that for a lot more questions than just those we were able to set. So I would like to take Peerwise away from the MCQ context so that students could see, comment on and engage with each other’s work. And I think that anything that brings students and staff closer together in their understanding is important.
Q&A
Q1) How do we approach this in a practical way? We’ve asked students to look at exemplar essays but we bump into problems doing that. It’s easy to persuade those who wrote good essays and have moved on to later years, but it’s hard to find those with poorer essays.
A1) We were doing this with short questions, not long essays. Hazel Marzetti was encouraging sharing of essays and students were reluctant. I think there’s something around expectation management – creating the idea up front that work will be available for others… That one has to opt out rather than opt in. Or you can mock up essays, but you lose that edge of it being the real thing.
Q2) On the idea of exemplars… How do we feel about getting students to do a piece of work, and then sharing that with others on, say, the same topic? You could pick a more tangential topic, but that risks being less relevant – a good essay on the same topic is properly authentic – but then there is a risk of potential copying.
A2) I think that it’s about understanding risk and context. We don’t use the idea of “model answers” but instead “outline answers”. Some students do make that connection… But they are probably those with a high degree of assessment literacy who will do well anyway.
Q3) By showing good work you can show a good range with similar scores; but also, when you show students exemplars you don’t just give out the work, you annotate it, point out what makes it good, features that make it notable… A way to inspire students and help them develop assessment literacy when judging others’ work.
And with that our main presentations have drawn to a close with a thank you for all our lovely speakers and contributors.  We are concluding with an Open Discussion on technology in Assessment and Feedback.
Susan: Yeah, I’m quite a fan of mandatory activities which do not carry a mark. But I’d seriously think about not assigning marks for all feedback activities… 
Comment: But the students can respond with “if it’s so important, why doesn’t this carry credit?”
Susan: Well you can make it count. For instance our vet students have to have a portfolio, and are expected to discuss that annually. That has been zero credits before (now 10 credits) but still mandatory. Having said that our students are not as focused on marking in that way.
Comment: I don’t want to be the “ah, but…” person here… But what if a student fails that mandatory non marked work? What’s the make-up task?
Susan: For us we are able to find a suitable bespoke negotiated exercise for the very few students this applies to…
Comment: What about equity?
Susan: I think removing the mark actually removes that baggage from the argument… Because the important thing here is doing the right tasks for the professional world. I think we should be discussing this more in the future.  
And with that Robert is drawing the event to a close. The next eLearning@ed/LTW monthly meet up is in July, on 27th July and will be focused on the programme for attaining the CMALT accreditation.