Feb 22 2017
 

This afternoon I am delighted to be at the Inaugural Lecture of Prof. Jonathan Silvertown from the School of Biological Sciences here at the University of Edinburgh.

Vice Chancellor Tim O’Shea is introducing Jonathan, who is Professor of Evolutionary Ecology and Chair in Technology Enhanced Science Education, and who came to Edinburgh from the Open University.

Now to Jonathan:

Imagine an entire city turned into an interactive learning environment. Where you can learn about the birds in the trees, the rock beneath your feet. And not just learn about them, but contribute back to citizen science, to research taking place in and about the city. I refer to A City of Learning… As it happens Robert Louis Stevenson used to do something similar, carrying two books in his pocket: one for reading, one for writing. That’s the idea here. Why do this in Edinburgh? We have the most fantastic history, culture and place.

Edinburgh has an incredible history of enlightenment, and The Enlightenment. Indeed it was said that you could, at one point, stand on the High Street and shake the hands of 50 men of genius. On the High Street now you can shake Hume (his statue) by the toe, and I shall risk quoting him: “There is nothing to be learned from a professor which is not to be met with in books”. Others you might have met then include Joseph Black, and also James Hutton, known as the “father of modern geology”, who walked along the crags to a section now known as “Hutton’s Section” (an unconformity to geologists) where he noted sandstone, and above it volcanic rock. He interpreted this as showing that rocks accumulate by ongoing processes that can be observed now. That’s science. You can work out what happened in the past by understanding what is happening now. And from that he concluded that the earth was far older than the 6,000 years that Bishop Ussher had calculated. In his book The Theory of the Earth he coined this phrase: “No vestige of a beginning, no prospect of an end”. And that supported the emerging idea of evolutionary biology, which requires a long history to work. That all happened in Edinburgh.

Edinburgh also has a wealth of culture. It is (in the New Town) a UNESCO World Heritage site. Edinburgh has the Fringe Festival, the International Festival, the Book Festival, the Jazz Festival… And then there is the rich literary heritage of Edinburgh – as J.K. Rowling says, “It’s impossible to live in Edinburgh without sensing its literary heritage”. Indeed if you walk in the Meadows you will see a wall painting celebrating The Prime of Miss Jean Brodie. And you can explore this heritage yourself through the LitLong website and app. The LitLong team text-mined thousands of books against a gazetteer of Edinburgh places, extracting 40,000 snippets of text associated with pinpoints on the map. And you can do this on an app on your phone. Edinburgh is an extraordinary place for all sorts of reasons…

And a place has to be mapped. When you think of maps these days, you tend to think of Google. But I have something better… OpenStreetMap is to a map what Wikipedia is to the Encyclopedia Britannica. When my wife and I moved into a house in Edinburgh, it wasn’t on Ordnance Survey and wasn’t on Google Maps, but it was almost immediately on OpenStreetMap. It’s Open because there are no restrictions on use, so we can use it in our work. Not all cities are so blessed… Geographic misconceptions are legion: if you look at one of the maps in the British Library you will see the Cable and Wireless Great Circle Map – a map that is both out of date and prescient. It is old and outdated but does display the cable and wireless links across the world… The UK isn’t the centre of the globe as this map shows; wherever you are standing is the centre of the globe now. And Edinburgh is international. At last year’s Edinburgh festival the Deep Time event projected the words “Welcome, World” just after the EU Referendum. Edinburgh is a global city, and the University of Edinburgh is a global university.

Before we go any further I want to clarify what I mean by learning when I talk about making a city of learning… For Kolb (1984) learning is “how we transform experience into knowledge”; it is learning by discovery. And, wearing my evolutionary hat, it’s a major process of human adaptation. Kolb’s learning cycle takes us from Experience, to Reflect (observe), Conceptualise (ideas), Experiment (test), and back to Experience. It is of course also the process of scientific discovery.

So, let’s apply that cycle of learning to iSpot, to show how experiential learning and discovery work, and what extraordinary things they can do. iSpot is designed to crowdsource the identification of organisms (see Silvertown, Harvey, Greenwood, Dodd, Rosewell, Rebelo, Ansine, McConway 2015). If I see “a white bird” it’s not that exciting, but if I know it’s a Kittiwake then that’s interesting – has it been seen before? Are they nesting elsewhere? You can learn more from that. So you observe an organism, you reflect, you start to get comment from others.

So, we have over 60,000 registered users of iSpot, 685k observations, 1.3 million photos, and we have identified over 30,000 species. There are many many stories contained within that. But I will share one of these. So this observation came in from South Africa. It was a picture of some seeds with a note: “some children in Zululand just ate some of these seeds and are really ill”. 35 seconds later someone thousands of miles away in Cape Town identified the plant, and others agreed on the identification. And the next day the doctor who posted the image replied to say that the children were OK, but that it happens a lot and knowing what plant the seeds were from helps them to do something. It wasn’t what we set this up to do but that’s a great thing to happen…

So, I take forward to this city of learning: the lessons of a borderless community; the virtuous circle of learning, which empowers and engages people to find out more; and the encouragement of repurposing – letting people use the space as they want and need (we have added extra functions to iSpot over time to support that).

Learning and discovery lend themselves to research… So I will show you two projects demonstrating this, which give us lessons to take forward into Edinburgh City of Learning. Evolution MegaLab (evolutionmegalab.org) was created at the Open University to mark Darwin’s double centenary in 2009, but we also wanted to show that evolution is happening right now in your own garden… The snails in your garden have colours and banding patterns, and these have known genetics… And we know about evolution in the field; we know what conditions favour which snails. So, we asked the public to help us test hypotheses about the snails. We had about 10,000 populations of snails recorded, half of which were in existing data, half of which were contributed by citizens over a single year. We had seen, over the last 50 years, an increase in yellow-shelled snails, which do not warm up as quickly. We would expect brown snails further north, yellow snails further south. So was that correct? Yes and no. There was an increase in yellow snails on sand dunes, but not elsewhere. But we also saw a change in banding patterns, and we didn’t know why… So we went back to pre-MegaLab data, and that change was present in the earlier data too, but hadn’t previously been looked for.

Lessons from Megalab included that all can contribute, that it must be about real science and real questions, and that data quality matters. If you are ingenious about how you design your project, then all people can engage and contribute.

Third project, briefly: this is Treezilla, the monster map of trees – which we started in 2014 just before I came here – and the idea is that we have a map of the identity, size and location of trees and, with that, we can start to look at the ecosystem impact of these trees: they capture carbon, they can ameliorate floods… And luckily my colleague Mike Dodd spotted some software that could be used to make this happen. So one of the lessons here is that you should build on existing systems, building projects on top of projects, rather than having to build everything at the same time.

So, this is the Edinburgh Living Lab, a collaboration across schools, and the kinds of projects they do include bike counters and traffic – visualised and analysed – which gives the Council information on traffic in a really immediate way that can allow them to take action. This set of projects around the Living Lab really highlighted the importance of students being let loose on data, on ideas around the city. The lessons here are that we should be addressing real-world problems, that public engagement is an important part of this, and that we are no longer interdisciplinary, we are “post-disciplinary” – as is much of the wider world of work, and these skills will go with the students from the Living Lab.

And so to Edinburgh Cityscope, a project with synergy across learning, research and engagement. Edinburgh Cityscope is NOT an app, it is an infrastructure. It is the stuff out of which other apps and projects will be built.

So, the first thing we had to do was make Cityscope future-proof. When we built iSpot the iPhone hadn’t been heard of; now maybe 40% of you here have one. And we’ve probably already had peak iPhone. We don’t know what will be used in 5 years’ time. But there are aspects people will always need… They will need data. What kinds of data? For synergy and place we need maps. And maps can have layers – you can relate the nitrogen dioxide to traffic, you can compare the trees… So Edinburgh Cityscope is mappable. And you need a way to bring these things together: you need a workbench. Right now that includes Jupyter, but we are not locked in, so we can change in future if we want to. And we have our data and our code open on GitHub. And then finally you need to have a presentation layer – a place to disseminate what we do to our students and colleagues, and what they have done.
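To make the idea of map “layers” concrete: relating one layer (say, nitrogen dioxide readings) to another (traffic counts) is essentially a join on a shared area key, the kind of thing you might do in a Jupyter notebook on such a workbench. This is a minimal, hypothetical sketch with made-up area codes and numbers – not the actual Cityscope code, which is open on GitHub:

```python
# Hypothetical sketch: two map "layers" keyed by area code
# (invented codes and readings, for illustration only).
no2_layer = {"EH1": 42.0, "EH3": 31.5, "EH8": 55.2}
traffic_layer = {"EH1": 12000, "EH3": 8000, "EH8": 17500, "EH11": 4000}

# Relating the layers means joining on the area codes common to both.
combined = {
    area: {"no2": no2_layer[area], "traffic": traffic_layer[area]}
    for area in no2_layer.keys() & traffic_layer.keys()
}

for area in sorted(combined):
    row = combined[area]
    print(f"{area}: NO2={row['no2']} µg/m³, traffic={row['traffic']} vehicles/day")
```

With real data each layer would come from an open dataset (and carry geometry for the map), but the underlying operation – join on place, then compare – is the same.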

So, in the last six months we’ve made progress on data – using the Scottish Government open data portal we have lung cancer registrations that can be mapped and changes seen. We can compare and investigate, and our students can do that. We have the SIMD (Scottish Index of Multiple Deprivation) map… I won’t show you a comparison as it has hardly changed in decades – one area has been in poverty since around 1900. My colleague Lesley McAra is working in public engagement, with colleagues here, to engage in ways that make this better, that make changes.

The workbench has been built. It isn’t pretty yet… You can press a button to create a notebook. You can send your data to a phone app – pulling data from Cityscope and showing it in an app. You can start a new tour blog – which anybody can do. And you can create a survey to collect new information…

So let me introduce one of these apps. Curious Edinburgh is an app that allows you to learn about the history of science in Edinburgh, to explore the city. The genius idea – and I can say genius because I didn’t build it, Niki and the folks at EDINA did – is that you can create a tour from a blog. You fill in forms, essentially. And there is an app which you can download for iOS, and a test version for Android – the full one is coming for the Edinburgh International Science Festival in April. Because this is an Edinburgh Cityscope project I’ve been able to use the same technology to create a tour of the botanical gardens for use in my teaching. We used to give out paper; now we have this app we can use in teaching in new ways… And I think this will be very popular.

And the other app we have is Fieldtrip, a survey tool borrowed from EDINA’s FieldTrip Open. And that allows anyone to set up a data collection form – for research, for social data, for whatever. It is already open, but we are integrating this all into Edinburgh Cityscope.

So, this seems a good moment to talk about the funding for this work. We have had sizable funding from Information Services. The AHRC has funded some of the Curious Edinburgh work, and the ESRC has funded work, a small part of which Edinburgh Cityscope will be using in building the community.

So, what next? We are piloting Cityscope with students – in the Festival of Creative Learning this week, in Informatics. And then we want to reach out to form a community of practice, including schools, community groups and citizens. And we want to connect with cultural institutions and industry – we are already working with the National Museum of Scotland. And we want to interface with the Internet of Things – anything with a chip in it, really. You can interact with your heating system from anywhere in the world – that’s the Internet of Things, things connected to the web. And I’m keen on creating an Internet of Living Things. The Atlas of Living Scotland displays all the biological data of Scotland on the map. But data gets out of date. It would be better to update it in real time. So my friend Kate Jones from UCL is working with Intel on real-time data from bats – allowing real-time data to be captured through connected sensors. And also in that space Graham Stone (Edinburgh) is working on a project called Edinburgh Living Landscape, which is about connecting up green spaces and improving biodiversity…

So, I think what we should be going for is for recognition of Edinburgh as the First UNESCO City of Learning. Edinburgh was the first UNESCO City of Literature and the people who did that are around, we can make our case for our status as City of Learning in much the same way.

So that’s pretty much the end. Nothing like this happens without lots and lots of help. So a big thanks here to Edinburgh Cityscope’s steering group and the many people in Information Services who have been actually building it.

And the final words are written for me: Four Quartets, T.S. Eliot:

“We shall not cease from exploration

And the end of all our exploring 

Will be to arrive where we started

And know the place for the first time”

Nov 24 2016
 

This morning I’m at the Edinburgh Tourism Action Group‘s Digital Solutions for Tourism Conference 2016. Why am I along? Well EDINA has been doing some really interesting cultural heritage projects for years, particularly Curious Edinburgh – the history of science tours app – and our citizen science apps for COBWEB and FieldTrip Open, which are used by visitors to locations, not just residents. And of course services like the Statistical Accounts of Scotland, which have loads of interest from tourists and visitors to Scotland. We are also looking at new mobile, geospatial, and creative projects so this seems like a great chance to hear what else is going on around tourism and tech in Edinburgh.

Introduction James McVeigh, Head of Marketing and Innovation, Festivals Edinburgh

Welcome to our sixth Digital Solutions for Tourism Conference. In those last six years a huge amount has changed, and our programme reflects that, and will highlight much of the work in Edinburgh, but also picking up what is taking place in the wider world, and rolling out to the wider world.

So, we are in Edinburgh. The home of the world’s first commercially available mobile app – in 1999. And did you also know that Edinburgh is home to Europe’s largest tech incubator? Of course you do!

Welcome Robin Worsnop, Rabbie’s Travel, Chair, ETAG

We’ve been running these for six years, and it’s a headline event in the programme we run across the city. In the past six years we’ve seen technology move from business add on to fundamental to what we do – for efficiency, for reach, for increased revenue, and for disruption. Reflecting that change this event has grown in scope and popularity. In the last six years we’ve had about three and a half thousand people at these events. And we are always looking for new ideas for what you want to see here in future.

We are at the heart of the tech industry here too, with Codebase mentioned already, Skyscanner, and the School of Informatics at the University of Edinburgh, all of which attracts people to the city. As a city we have free wifi around key cultural venues, on the buses, etc. It is more and more ubiquitous for our tourists to have access to free wifi. And technology is becoming more and more about how those visitors enhance their visit and experience of the city.

So, we have lots of fantastic speakers today, and I hope that you enjoy them and you take back lots of ideas and inspiration to take back to your businesses.

What is new in digital and what are the opportunities for tourism Brian Corcoran, Director, Turing Festival

There’s some big news for the tech scene in Edinburgh today: Skyscanner has been bought by a Chinese company for £1.5bn. And FanDuel just merged with its biggest rival last week. So huge things are happening.

So, I thought technology trends and bigger trends – macro trends – might be useful today. I’ll be looking at these through the lens of the companies shaping the world.

Before I do that, a bit about me, I have a background in marketing and especially digital marketing. And I am director of the Turing Festival – the biggest technology festival in Scotland which takes place every August.

So… There are really two drivers of technology… (1) tech companies and (2) users. I’m going to focus on the tech companies primarily.

The big tech companies right now include: Uber, disrupting the transport space; Netflix – for streaming and content commissioning; Tesla – disrupting transport and energy usage; Buzzfeed – influential with huge readership; Spotify – changing music and music payments; banking… No-one has yet disrupted banking but they will soon… Maybe just parts of banking… we shall see.

And no-one is influencing us more than the big five. Apple, mainly through the iPhone. I’ve been awaiting a new MacBook for five years… Apple are positioning PCs for top-end/power users, but also saying most users are not content producers, they are passive users – they want/expect us to move to iPads. It’s a mobile device (running iOS) and a real shift. The iPhone 7 got coverage for its headphones etc., but the cameras didn’t get much discussion – it is basically set up for augmented reality with two cameras. AirPods – the cable-less headphones – are essentially a new wearable, like/after the Apple Watch. And we are also seeing Siri opening up.

Over at Google… Since Google’s inception the core has been search, the Google search index and ranking. And they are changing it for the first time ever, really… They are building a mobile-only search index. And they aren’t just building it, they are prioritising it. Mobile is really the big tech trend. And in line with that we have their Pixel phone – a phone they are manufacturing themselves… That gets them back into hardware after their Google Glass misstep. And Google Assistant is another part of the Pixel phone – a Siri competitor… Another part of us interacting with phones, devices, data, etc. in a new way.

Microsoft is one of the big five that some think shouldn’t be there… They have made some missteps… They missed the internet. They missed – and have written off – phones (and Nokia). But they have moved to Surface – another mobile device. They have moved their focus from Windows to Microsoft 365. They bought LinkedIn for $26bn (in cash!). One way this could affect us… LinkedIn has all this amazing data… But it is terrible at monetising it. That will surely change. And then we have HoloLens – which means we may eventually have some mixed reality actually happening.

Next in the Big Five is Amazon. Some very interesting things there… We have Alexa – the digital assistant service. They have, as a device, the Echo – essentially a speaker and listening device for your home/hotel etc. Amazon will be in your home listening to you all the time… I’m not going there! And we have Amazon Prime… And also Prime Instant Video. Amazon is moving into television. Netflix and Amazon compete with each other, but more with traditional TV. And they are moving from ad income to subscriptions. It is interesting to think where TV ad spend will go – it’s about half of all ad spend.

And Facebook. They are at ad saturation risk, and pushing towards video ads. With that in mind they may also become the de facto TV platform. Do they have new editorial responsibility? With fake news etc., are they a tech company? Are they a media company? At the same time they are caving completely to Chinese state surveillance requests. And Facebook are trying to diversify their ecosystem so they continue to outlast their competitors – with Instagram, WhatsApp, Oculus, etc.

So, that’s a quick look at tech companies and what they are pushing towards. For us, as users, the big move has been towards messaging – Line, WeChat, Messenger, WhatsApp, etc. These are huge, and that’s important if we are trying to reach the fabled millennials as our audience.

And then we have Snapchat. It’s really impenetrable for those over 30. They have 150 million daily active users, 1bn snaps daily, 10bn videos daily. They are the biggest competitor to Facebook, to its ad revenue. They have also gone for wearables – in a cheeky, cool, upstart way.

So, we see 10 emergent patterns:

  1. Mobile is now *the* dominant consumer technology, eclipsing PCs. (Apple makes more from the iPhone than all their other products combined, it is the most successful single product in history).
  2. Voice is becoming an increasingly important UI. (And it is interesting how answers there connect to advertising).
  3. Wearables bring tech into ever-closer physical and psychological proximity to us. It’s now on our wrist, or face… Maybe soon it will be inside you…
  4. IoT is getting closer, driven by the intersection of mobile, wearables, APIs and voice UI. Particularly seeing this in smart home tech – switching the heat on away from home is real (and important – it’s -3 today), but we may get to that promised fridge that re-orders…
  5. Bricks and mortar retail is under threat, and although we have some fulfillment challenges, they will be fixed.
  6. Messaging marks a generational shift in communication preferences – asynchronous preferred.
  7. AR and VR will soon be commonplace in entertainment – other use cases will follow… But things can take time. Apple watch went from unclear use case to clear health, sports, etc. use case.
  8. Visual communications are replacing textual ones for millennials: Snapchat defines that.
  9. Media is increasingly in the hands of tech companies – TV ads will be disrupted (Netflix etc.)
  10. TV and ad revenue will move to Facebook, Snapchat etc.

What does this all mean?

Mobile is crucial:

  • Internet marketing in tourism now must be mobile-centric
  • Ignore Google’s mobile index at your peril
  • Local SEO is increasing in importance – that’s a big opportunity for small operators to get ahead.
  • Booking and payments must be designed for mobile – a hotel saying “please call us”, well Millennials will just say no.

It’s unclear where new opportunities will be, but they are coming. In Wearables we see things like twoee – wearable watches as key/bar tab etc. But we are moving to a more seamless place.

Augmented reality is enabling a whole new set of richer, previously unavailable interactive experiences. Pokemon Go has opened the door to location-based AR games. That means previously unexciting places can be made more engaging.

Connectivity though, that is also a threat. The more mobile and wearables become conduits to cloud services and IoT, the more the demand for free, flawless internet connectivity will grow.

Channels? Well, we’ve always needed to go where the market is. It’s easier to identify where they are now… But we need to adapt to customers’ behaviours and habits, and their preferences.

Moore’s law: overall processing power for computers will double every two years (Gordon Moore, Intel, 1965)… And I wonder if that may also be true for us too.
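Taken at face value, that doubling compounds quickly – a back-of-the-envelope sketch (assuming a clean doubling every two years, which is of course an idealisation):

```python
# Moore's law as arithmetic: a doubling every two years means
# a growth factor of 2**(n/2) after n years.
def moore_factor(years: float) -> float:
    return 2 ** (years / 2)

print(moore_factor(10))  # a decade: 32x
print(moore_factor(20))  # two decades: 1024x
```

So a speaker revisiting the same conference in a decade would, on this idealised curve, be addressing devices roughly 32 times more powerful.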

Shine the Light – Digital Sector

Each of these speakers have just five minutes…

Joshua Ryan-Saha, Skills Lead, The Data Lab – data for tourism

I am Skills Lead at The Data Lab, and I was previously looking at Smart Homes at Nesta. The Data Lab works to find ways that data can benefit business, can benefit Scotland, can benefit your work. So, what can data do for your organisation?

Well, personalised experiences… That means you could use shopping habits to predict, say, a hotel visitor’s preferences for snacks or cocktails etc. The best use I’ve seen of that is a museum using heart rate monitors to track experience, and areas of high interest. And as an exhibitor you can use phone data to see how visitors move around, what they see, etc.

You can also use data in successful marketing – Tripadvisor data being a big example here.

You can also use data in efficient operations – using data to ensure things are streamlined. Things like automatic ordering – my dentist did this.

What can data do for tourism in Scotland? Well, we did some work with Glasgow using Skyscanner data, footfall data, etc. to predict hotel occupancy rates, and with machine learning and further data that has become quite accurate over time. And as you start to predict those patterns we can work towards a seamless experience. At the moment our masters students are running a competition around business data and tourism – talk to me to be involved, as I think a hack in that space would be excellent.

What can the Data Lab do for you? Well, we fund work – around £70k per project, and also smaller funds. We do skills programmes, with masters and PhD students. And we have expertise – data scientists who can come in and work with you to sort out your organisation a bit. If you want to find out more, come and talk to me!

Brian Smillie, Beezer – app creation made affordable and easy

1 in 5 people own a smartphone, and desktop is a secondary touchpoint. The time people spend using mobile apps has increased 21% since last year. There are 1bn websites, but only 2 million apps. Why are businesses embracing mobile apps? Well, speed and convenience are key – an app enables one-click access. Users expect that. And apps can also reduce staff time on transactions, etc. They allow building connection, building loyalty… Wouldn’t it be great to be able to access that? But the cost can be £10k or more for a single app. When I was running a digital agency in Australia I heard the same thing over and over again – that they had spent a small fortune and then no-one downloaded it. Beezer enables you to build an app in a few hours, without an app store, and it works on any platform. SMEs need a quick, cheap, accessible way to build apps, and right now Beezer are the only ones who do this…

Ben Hutton, XDesign – is a mobile website enough?

I’m Ben from XDesign – we build those expensive apps Brian was just talking about… A few years ago I was working on analytics of purchasing and ads… I was working on that Crazy Frog ad… We found the way to get people to download that ringtone was to batter them into submission, showing it again and again and again… And that approach has distorted mobile apps and their potential. But actually so has standardised paper… We are so used to A4 that it is the default digital size too… It was a good system for paper merchants in the 17th century. It has corrupted the ideas we have about apps… We think that apps are extensions of those battering/paper skillsets.

A mobile phone is a piece of engineering, software that sits in your pocket. It requires software engineers, designers, that will ensure quality assurance, that is focused on that medium. We have this idea of the Gigabit Society… We have 4.5G, the rocket fuel for mobile… And it’s here in London, in Manchester, in Birmingham… It is coming… And to work with that we need to think about the app design. It isn’t meant to be easy. You have to know about how Google is changing, about in-app as well as app sales, you need to know deep linking. To build a successful app you need to understand that you don’t know what you are doing but you have to give it a try anyway… That’s how we got to the moon!

Chris Torres, Director, Senshi Digital – affordable video

We develop tourism brands online to get the most out of online, out of sales. And I’ve been asked today to talk specifically about video. Video has become one of the best tools you can use in tourism. One of the reasons is that on your website or social media if you use video your audience can learn about your offering 60k times faster than if they read your content.

The average user watches 32 videos per month; 79% of travellers search YouTube for travel ideas – and many of them don’t know where they are going. By 2018 video will be 84% of web traffic. And it can really engage people.

So what sort of video do we do? Well we do background video for homepages… That can get across the idea of a place, of what they will experience when they get to your tourism destination.

What else? Staff/tour guide videos is huge. We are doing this for Gray Line at the moment and that’s showing a big uptick in bookings. When people see a person in a video, then meet at your venue, that’s a huge connection, very exciting.

We also have itinerary videos, what a customer can experience on a tour (again my example is Gray Line).

A cute way to do this is to get customers to supply video – and give them a free tour, or a perk – but get them to share their experiences.

And destination videos – it’s about the destination, not necessarily you, your brand, your work – just something that entices customers to your destination.

Video doesn’t need to be expensive. You can film on your iPhone. But also you can use stock supplies for video – you’ve no excuse not to use video!

Case Study – Global Treasure Apps and Historic Environment Scotland Lorraine Sommerville and Noelia Martinez, Global Treasure Apps

Noelia: I am going to talk about the HES project with Edinburgh Castle, Edinburgh College and Young Scot. The project brought together young people and cultural heritage information. The process is one of co-production: collecting images, information, stories and histories of the space with the Global Treasure Apps, creating content. The students get an idea of how to create a real digital project for a real client. (Cue slick video on this project outlining how it worked.)

Noelia: So, the Global Treasure Apps are clue-driven trails, guiding visitors around attractions. For this Edinburgh Castle project we had 20 young people split into 5 groups. They researched at college and drafted trails around the space. Then they went to the castle and used their own mobile devices to gather the digital assets. And we ended up with 5 trails for the castle that can be used. Then we went back to the college, uploaded content to our database, and set the trails live. Then we got ESOL students to test the trails, give feedback and update them.

Lorraine: Historic Environment Scotland were delighted with the project, as were Edinburgh College. We are really keen to expand this to other destinations, especially as we enter The Year of Young People 2018, for your visitors and destinations.

Apps that improve your productivity and improve your service Gillian Jones, Qikserve

Before I start I’m going to talk a wee bit about Snapchat… Snapchat started as a sexting app… And I heard about it from my mum – she was using it for sharing images of her house renovation! And if she can use that tech in new ways, we all can!

I am from Edinburgh and nothing makes me happier than seeing a really diverse array of visitors coming to this city, and I think that the Skyscanner development will continue to see that boom.

A few months ago I was in Stockholm. I walked out of the airport and saw a fleet of Teslas as taxis. It was a premium, innovative thing to see. I’m not saying we should do that here; I’m saying the tourist experience starts from the moment they see the city, especially the moment that they arrive. And, in this day and age, if I was a guest coming to a restaurant, hotel, etc., what would I want? What would I see? It’s hard as a provider to put yourself in your customers’ shoes. How do we make tourists and guests feel welcome, and able to find what they need? Where do they want to go, and how do they get there? There is a language barrier. There is unfamiliar cuisine – and big pictorial menus aren’t always the most appealing solution.

So, “Francesco” has just flown to Edinburgh from Rome. He speaks little English but has the QikServe app, so he can see all the venues that use it. He’s impatient as he has a show to get to. He is in a rush… So he looks at a menu, in his native language, on his phone – and can actually find out what haggis or Cullen Skink is. And he is prompted there for wine, for other things he may want. He gets his food… And then he has trouble finding a waiter to pay. He wants to pay by Amex – a good example of a way people want to pay, but operators don’t want to take – but in the app he can pay. And then he can share his experience too. So, you have that local level… If they have a good experience you can capitalise on it. If they have a bad experience, you can address it quickly.

What is the benefit of this sort of system? Well, money for a start. Mobile is proven to drive up sales – I’ve ordered a steak, do I want a glass of red with that? Yeah, I probably do. So it can increase average transaction value. It can reduce pressure on staff during busy times, allowing them to concentrate on great service. That Starbucks app – the idea of ordering ahead and picking up – is normal now…  You can also drive footfall by providing information in tourists’ native languages. And you can upsell, cross-sell and use insights for more targeted campaigns – more sophisticated than freebies, and more enticing. It is about convenience tailored to me. And you can keep your branding at the centre of the conversation, across multiple channels.

There are benefits for tourists here too: greater convenience, with reduced wait times and queues; identifying a restaurant of choice and ordering in your native language and currency; finding and navigating to that restaurant with geo-location capabilities; ordering what you want, how you want it, with modifiers and upsell and cross-sell prompts in your native language – we are doing projects in the US with a large burger chain who are doing brilliantly because of extra cheese ordered through the app!; and easily sharing and recommending the experience through social media.

We work across the world but honestly nothing would make me happier than seeing us killing it in Edinburgh!

Virtual reality for tourism Alexander Cole, Peekabu Studios

Thank you for having me along, especially in light of recent US events (Alex is American).

We’ve talked about mobile. But mobile isn’t one thing… There are phones, there have been robot sneakers, electronic photo frames, all sorts of things before that are now mixed up and part of our phones. And that’s what we are dealing with in VR. Screens, accelerometers, buttons have all been there for a while! But if I show you what VR looks like… Well… It’s not like an app or a film or something, it’s hard to show. You look like a dork using it…


Right now VR is a $90m industry (2014) but by 2018 we expect it to be at least $5.2bn, and 171m users – and those are really conservative estimates.

So, VR involves some sort of headset… Like an HTC Vive, or Oculus Rift, etc. They include an accelerometer to see where you are looking, tilting, turning. Some include additional sensors. A lot of these systems have additional controllers that detect orientation, presses, etc. That means the VR knows where I am, where I’m looking, what I’m doing with my hands. It’s great, but this is top end. This is about a £1000 set-up AND you need a PC to render and support all of this.

But this isn’t the only game in town… Google have the “Daydream” – a fabric-covered mobile phone headset with lenses. They also have the Google Cardboard. In both cases you strap in a phone and you have VR. But there are limitations… It doesn’t track your movement… But it gives you visuals, it tracks how you turn, and you can create content from your phone – like making photospheres – image and audio – when on holiday.

Capture is getting better, not just on devices. 360 degree cameras are now just a few hundred pounds, you can take it anywhere, it’s small and portable and that makes for pretty cool experiences. So, if you want to climb a tower (Alex is showing a vertigo-inducing Moscow Tower video), you can capture that, you can look down! You can look around. For tourism uses it’s like usual production – you bring a camera, and you go to a space, and you show what you would like, you just do it with a 360 degree camera. And you can share it on YouTube’s 360 video channel…

And with all of this tech together you can set up spaces where sensors are all around that properly track where you are and give much more immersive emotional experiences… Conveying emotion is what VR does better than anything when it is done well.

So, you can do this two ways… You can create content so that someone not in a particular physical space can feel they are there. OR you can create a new space and experience that. Both require similar investments of time and effort. It’s much like video creation, with a little more stitching together required.

So, for example, this forthcoming space game with VR is beautiful. But that’s expensive. For tourism the production can be more about filming – putting a camera in a particular place. And, increasingly, that’s live. But, note…

You still look like a ninny taking part! That’s a real challenge and consideration in terms of distribution, and how many people engage at the same time… But you can use that too – hence YouTube videos usually including both what’s on screen, and what’s going on (the ninny view). And now you have drones and drone races with VR used by the controller… That’s a vantage point you cannot get any other way. That is magical and the cost is not extortionate… You can take it further than this… You can put someone in a rig with wings, with fans, with scents, and with VR, so you can fly around and have a full sensory experience… This is stupidly expensive… But it is the most awesome fun! It conveys a sense of doing that thing VR was always meant to do. When we talk about where VR is going… We have rollercoasters with VR – so you can see Superman flying around you. There are some on big elastic bands – NASA just launched one for a Mars landing.

So, tourism and VR is a really interesting marriage. You can convey a sense of place without someone being there. Even through 360 degree video, YouTube 360 degree video… And you can distribute it in a more professional way for Vive, for Oculus Rift… And when you have a space set up, when you have all those sensors in a box… That’s a destination, that’s a thing you can get people to. There are theme park destination-like experiences. You can serve thousands of people with one set-up and one application.

So, the three E’s of VR: experience; exploration – you drive this; and emotion – nothing compares to VR for emotion. Watching people use VR for the first time is amazing… They have an amazing time!

But we can’t ignore the three A’s of VR: access – no one platform, and lots of access issues; affordability – the biggest most complex units are expensive, your customers won’t have one, but you can put it in your own space; applicability – when you have new tech you can end up treating everything as a nail for your shiny new hammer. Don’t have your honeymoon in VR. Make sure what you do works for the tech, for the space, for the audience’s interest.

Using Data and Digital for Market Intelligence for Destinations and Businesses Michael Kessler, VP Global Sales, ReviewPro

I’m going to be talking about leveraging guest intelligence to deliver better experiences and drive revenue. And this isn’t about looking for “likes”, it’s about using data to improve revenue, to develop business.

So, for an example of this, we analysed 207k online reviews from 2016 year to date for 339 3*, 4* and 5* hotels in Glasgow and Edinburgh. We used the Global Review Index (GRI) – which we developed and is an industry-standard reputation score based on review data collected from 175+ OTAs and review sites in over 45 languages. To do that we normalise scores – everyone uses their own scale. From that data we see Edinburgh’s 5* hotels have 90.2% satisfaction (86.4% in Glasgow), and we can see the variance by * rating (Glasgow does better for satisfaction at 3*).
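As a rough illustration of the normalisation step described here, a minimal sketch follows. The actual GRI formula is ReviewPro’s own and proprietary; the function names, weighting (a plain average) and sample scores below are all invented for illustration:

```python
def normalise(score, scale_max):
    """Map a raw review score (e.g. 4.5/5 or 8.6/10) onto a 0-100% scale."""
    return score / scale_max * 100

def review_index(reviews):
    """Average normalised scores across reviews gathered from many sites.

    `reviews` is a list of (score, site_scale_max) pairs, since every
    review site uses its own scale (TripAdvisor /5, Booking.com /10, ...).
    """
    if not reviews:
        return None
    total = sum(normalise(score, scale) for score, scale in reviews)
    return round(total / len(reviews), 1)

# Three reviews of one hotel, each on a different site's scale
reviews = [(4.5, 5), (8.6, 10), (90, 100)]
print(review_index(reviews))  # → 88.7
```

A real index would also weight by review recency, site volume and language, but the core idea is simply putting every site’s scale onto one comparable percentage.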

You can explore satisfaction by traveller type – solo, couples, families, business. The needs are very different. For any destination or hotel this lets you optimise your business, to understand and improve what you do.

We run sentiment analysis, using machine learning, across reviews. We do this by review but also aggregate it so that you can highlight strengths and weaknesses in the data. We show you trends… You will understand many of these already, but they allow you to respond and react (e.g. Edinburgh gets great scores on Location, Staff, Reception; poorer scores on Internet, Bathroom, Technology. Glasgow gets great scores on Location, Staff, Reception; poorer scores for Internet, Bathroom, Room). We do this across 16 languages and this is really helpful.
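To show what that aggregation step might look like, here is a toy sketch. The real system uses machine-learning sentiment analysis across 16 languages; this sketch simply assumes a model has already emitted (category, sentiment) pairs per review, and all names and sample data are invented:

```python
from collections import defaultdict

# (category, sentiment) pairs as a sentiment model might emit per review
mentions = [
    ("Location", "pos"), ("Staff", "pos"), ("Internet", "neg"),
    ("Location", "pos"), ("Bathroom", "neg"), ("Staff", "pos"),
]

def aggregate(mentions):
    """Share of positive mentions per category, as a 0-100% score."""
    counts = defaultdict(lambda: [0, 0])  # category -> [positive, total]
    for category, sentiment in mentions:
        counts[category][1] += 1
        if sentiment == "pos":
            counts[category][0] += 1
    return {cat: round(pos / total * 100) for cat, (pos, total) in counts.items()}

print(aggregate(mentions))
# → {'Location': 100, 'Staff': 100, 'Internet': 0, 'Bathroom': 0}
```

Aggregating like this is what turns thousands of individual reviews into the category-level strengths and weaknesses (Location, Staff, Internet, …) quoted in the talk.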

We also highlight management response rates. So if guests post on TripAdvisor, you have to respond to them. You can respond and use as a marketing channel too. Looking across Edinburgh and Glasgow we can see a major variation between (high) response rates to TripAdvisor versus (low) response to Booking.com or Expedia.

The old focus of marketing was Product/Promotion/Price/Place. But that has changed forever. It’s all about experience now. That’s what we want. I think we have 4 Es instead of 4 Ps. So, those 4 Es are: Experience; Evangelism; Exchange; Everyplace. In the past I shared experiences with friends and family, but now I evangelise, I share much more widely. And Everyplace reflects sending reviews too – 60-70% of all reviews and feedback to accommodation are done via mobile. There is no better marketing than authentic feedback from guests, from customers.

And this need to measure traveller experience isn’t just about hotels/hostels/serviced apartments, it is also about restaurants, transportation, outdoor attractions, theme parks, museums, shopping. And those reviews have a solid impact on revenue – 92% of travellers indicate that their decisions are highly influenced by reviews and ratings.

So, how do we use all this data? Well there is a well refined cycle: Online reviews; we can have post-stay/event surveys; and in-stay surveys. Online reviews and post-stay surveys are a really good combination to understand what can be improved, where change can be made. And using that cycle you can get to a place of increased guest satisfaction, growth in review volume, improved online rankings (TripAdvisor privileges more frequently reviewed places for instance), and increased revenue.

And once you have this data, sharing it across the organisation has a huge positive value, to ensure the whole organisation is guest-centric in their thinking and practice.

So, we provide analytics and insights for each of your departments. So, for housekeeping, we surface what reviews say about the room; we can do semantic checking for mentions of cleanliness, “clean”, etc.

In-stay surveys also help reduce negative reviews – highlighting issues immediately, making the experience great whilst your guest is still there. And we have talked about travellers being mobile, but our solution is also mobile so that we can use it in all spaces.

How else can we use this? We can use it to increase economic development by better understanding our visitors. How do we do this? Well, for instance, for Star Ratings Australia we have been benchmarking hotel performance across 5000+ hotels on a range of core KPIs. Greece (SETE) is a client of ours and we help them to understand how they as a country, as cities, as islands, compete with other places and cities across the world.

So our system works for anyone with attractions, guests, reviews, clients, where we can help. Operators may know their guests – but that’s opinion. We try to enable decisions based on real information. That allows you to understand weaknesses and drive change. There is evidence that increasing your Global Review Index will help you raise revenue. It also lets you refine your marketing message based on what you perform best at in your reviews, making a virtue of your strengths on your website, on TripAdvisor, etc.

And with reviews, do also share reviews on your own site – don’t just encourage guests to go to TripAdvisor. Publishing reviews and ratings means your performance is shown without automatically requiring an indirect/fee-incurring link; you keep them on your site. And you do need to increase review volume on key channels to keep your offering visible and well ranked.

So, what do we offer?

We have our guest intelligence system, with reputation management, guest surveys, revenue optimiser and data. All of these create actionable insights for a range of tourism providers – hotels, hostels, restaurants, businesses etc. We have webinars, content, and information that we share back with the community for free.

Tech Trends and the Tourism Sector

Two talks here…

Jo Paulson, Events and Experiences Manager, Edinburgh Zoo and Jon-Paul Orsi, Digital Manager, Edinburgh Zoo – Pokemon Go

Jon-Paul: As I think everyone knows, Pokemon Go appeared and, whether you liked it or not, it was really popular. So we wanted to work out what we could do. We are spread over a large site and that was great – loads of pokestops – but an issue too: one was in our blacksmith shop, another in our lion enclosure! So we quickly mapped the safe stops and made that available – and we only had a few issues there. By happy accident we also had some press coverage as one of the best places to find Pokemon – because a visitor happened to have found a Pokemon on our site.

With that attention we also decided to do some playful things with social media – making our panda a poke-cake; sharing shots of penguins and pokemon. And they were really well received.

Jo: Like many great ideas, we borrowed from other places for some of our events. Bristol Zoo had run some events and we borrowed ideas – with pokestops, pokedex charging points, themed foods, temporary tattoos, etc. We wanted to capitalise on the excitement so we had about a week and a half to do this. As usual we checked with keepers first, closing off areas where the animals could be negatively impacted.

Jon-Paul: In terms of marketing this we asked staff to tell their friends… And we were blown away by how well that went. On August 4th we had 10k hits as they virally shared the details. We kind of marketed it by not marketing it publicly. It being a viral, secret, exciting thing worked well. We sold out in 2 hours and that took us hugely by surprise. Attendees found the event primarily through social – 69% through Facebook, 19% by word of mouth.

We didn’t have a great picture of demographics etc. Normally we struggle to get late teens, twenties, early thirties unless they are there as a couple or date. But actually here we saw loads of people in those age ranges.

Jo: We had two events, for both of which we kept the zoo open later than usual. Enclosures weren’t open – though you could see the animals. But it was a surreal event – very chatty, very engaged, and yet a lot of heads down without animal access. For the first event we gave away free tickets, but asked for donations (£5k) and sold out in 2 hours; for the second event we charged £5 in advance (£6500) and sold out in around a week. We are really pleased with that; it all goes into our conservation work. If the popularity of Pokemon continues then we will likely run more of these as we reach the better weather/longer light again.

Rob Cawston, Interim Head of Digital Media, National Museum of Scotland – New Galleries and Interactive Exhibitions

One of the advantages of having a 7-year-old son is that you can go to Pokemon Go events, and I actually went to the second Zoo event, which was amazing, if a little Black Mirror.

Here at the NMS we’ve just completed a major project opening 4 new fashion and design galleries, 6 new science and technology galleries, and a new piazza (or expanded pavement if you like). Those ten new galleries allow us to show 3000+ items, 75% of them for the first time in generations, but we also wanted to work out how to engage visitors in these attractions. So, in the new galleries we have 150+ interactive exhibits – some are big things like a kid-sized hamster wheel, a hot air balloon, etc. But we also now have digital labels… This isn’t just having touch screens for the sake of it; they needed to add something new that enhances the visitor experience. We wanted to reveal new perspectives, to add fun and activity – including games in the gallery – and to provide new knowledge and learning.

We have done research on our audiences and they don’t just want more information – they have phones, they can Google stuff, so they want more than that. And in fact the National Museum of Flight opened 2 new hangars with 30 new digital labels, which let us trial some of our approaches with visitors first.

So, on those digital labels and interactives we have single stories, multiple chapters, bespoke interactives. These are on different sorts of screens, formats, etc. Now, we are using pretty safe tech. We are based on the Umbraco platform, as is our main website. We set up a CMS with colours, text, video, etc. And that content is stored on particular PCs that send data to specific screens in the museum. There was so much content going into the museum that it helped to be able to prep all this ahead of the gallery opening, without having to be in the gallery space whilst they finished installing items.
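The publish flow described here might be sketched roughly like this. Umbraco is the real platform mentioned, but the data shapes and the function below are hypothetical, purely to illustrate the idea of each gallery screen receiving only its assigned content:

```python
# Hypothetical CMS export: each content item is assigned to one screen.
cms_content = [
    {"screen": "fashion-01", "type": "story", "title": "Single story label"},
    {"screen": "science-03", "type": "game",  "title": "Bespoke interactive"},
    {"screen": "fashion-01", "type": "video", "title": "3D fashion view"},
]

def content_for_screen(screen_id, content):
    """Select the items a given in-gallery screen should display."""
    return [item for item in content if item["screen"] == screen_id]

print([i["title"] for i in content_for_screen("fashion-01", cms_content)])
# → ['Single story label', '3D fashion view']
```

The attraction of a set-up like this is exactly what the talk notes: content can be authored and proofed centrally, well before the physical gallery is finished, and pushed to screens later.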

We didn’t just put these in the gallery – we put them on the website too. Our games are there, and we know they are a major driver of traffic to the website. That multiple platform digital content includes 3D digital views of fashion; we have a game built with Aardman…

We have learned a lot from this. I don’t think we realised how much would be involved in creating this content, and I think we have created a new atmosphere of engagement. After this session do go and explore our new galleries, our new interactives, etc.

Wrap Up James McVeigh, Festivals Edinburgh

I’m just going to do a few round ups. You’ve heard a lot today. We’ve got exhibitors who are right on your doorstep. We are trying to show you that digital is all around you, it’s right on your doorstep. I got a lot from this myself… I like that the zoo borrowed the ideas – we don’t always need to reinvent the wheel! The success of the Japanese economy is about adopting, not inventing.

Everything we have heard today is about UX: how audiences share and engage, how they respond afterwards.

And as we finish I’d like to say thank you to ETAG, to Digital Tourism Scotland, to Scottish Enterprise, and to the wider tourism industry in Edinburgh.

And finally, the next events are:

  • 29th November – Listening to our Visitors
  • 6th December – Running Social Media Campaigns
  • 26th January – ETAG Annual Conference

And with that we just have lunch, networking and demos of Bubbal and Hydra Research. Thanks to all from me for a really interesting event – lots of interesting insights into how tech is being used in Edinburgh tourism and where some of the most interesting potential is at the moment. 

Oct 082016
 

Today is the last day of the Association of Internet Researchers Conference 2016 – with a couple fewer sessions but I’ll be blogging throughout.

As usual this is a liveblog so corrections, additions, etc. are welcomed. 

PS-24: Rulemaking (Chair: Sandra Braman)

The DMCA Rulemaking and Digital Legal Vernaculars – Olivia G Conti, University of Wisconsin-Madison, United States of America

Apologies – I joined this session late, so these notes miss the first few minutes of what seems to have been an excellent presentation from Olivia. The work she was presenting on – the John Deere DMCA case – is part of her PhD work on how lay communities feed into lawmaking. You can see a quick overview of the case on NPR All Tech Considered and a piece on the ruling at IP Watchdog. The DMCA is the Digital Millennium Copyright Act (1998). My notes start about half-way through Olivia’s talk…

Property and ownership claims drew on distinctly American values… Grounded in general ideals, evocations of the Bill of Rights. Or asking what Ben Franklin would say… Bringing in the idea of the DMCA as being contrary to the very foundations of the United States. Another theme was the idea that once you buy something you should be able to edit it as you like. Indeed a theme here is the idea of “tinkering as a liberatory endeavour”. And you see people claiming that it is a basic human right to make changes and tinker, to tweak your tractor (or whatever). Commentators are not trying to appeal to the nation state, they are trying to perform the state, to make rights claims, to enact the rights of the citizen in a digital world.

So, John Deere made a statement that tractor buyers have an “implied license” to their tractor, that they don’t own it outright. And that raised controversies as well.

So, the final register rule was that the farmers won: they could repair their own tractors.

But the vernacular legal formations allow us to see the tensions that arise between citizens and the rights holders. And that also raises interesting issues of citizenship – and of citizenship of the state versus citizenship of the digital world.

The Case of the Missing Fair Use: A Multilingual History & Analysis of Twitter’s Policy Documentation – Amy Johnson, MIT, United States of America

This paper looks at the multilingual history and analysis of Twitter’s policy documentation. Or: policies as uneven scalar tools of power alignment. And this comes from the idea of thinking of Twitter as more than just one whole, complete, overarching platform. There is much research now on moderation, but understanding this type of policy allows you to understand some of the distributed nature of platforms. Platforms draw lines when they decide which laws to transform into policies, and then again when they think about which policies to translate.

If you look across at a list of Twitter policies, there is an English language version. Of this list it is only the Fair Use policy and the Twitter API limits that appear only in English. The API policy makes some sense, but the Fair Use policy does not. And Fair Use only appears really late – in 2014. Twitter was set up in 2005, and many other policies came in in 2013… So what is going on?

So, here is the Twitter Fair Use Policy… Now, before I continue, I want to say that this translation (and lack of it) for this policy is unusual. Generally all companies – not just tech companies – translate into FIGS: the French, Italian, German and Spanish languages. Twitter does not do this. But this is in contrast to the translations of the platform itself. And I wanted to talk in particular about translations into Japanese and Arabic. Now, the Japanese translation came about through collaboration with a company that gave Twitter opportunities to expand out into Japan. Arabic was not put in place until 2011, around the Arab Spring. And the translation wasn’t done by Twitter itself but by another organisation set up to do this. So you can see that there are other actors here playing into translations of the platform and its policies. So this iconic platform is shaped in some unexpected ways.

So… I am not a lawyer but… Fair Use is a phenomenon that creates all sorts of internet lawyering. And typically there are four factors of fair use (Section 107 of the US Copyright Act of 1976): purpose and character of use; nature of the copyrighted work; amount and substantiality of the portion used; effect of use on the potential market for or value of the copyrighted work. And this is very much an American law, from a legal-economic point of view. And the US is the only country that has Fair Use law.

Now there is a concept of “Fair Dealing” – mentioned in passing in Fair Use – which shares some characteristics. There are other countries with Fair Use law: Poland, Israel, South Korea… Well, their policies point to the English language version. What about Japanese, which has a rich reuse community on Twitter? It also points to the English policy.

So, policies are not equal in their policyness. But why does this matter? Because this is where the rule of law starts to break down… And we cannot assume that the same policies apply universally – that can’t be assumed.

But what about parody? Why bring this up? Well, parody is tied up with the idea of Fair Use and creative transformation. Comedy is a protected Fair Use category. And Twitter has a rich seam of parody. And indeed, if you Google for the fair use policy, the “People also ask” section has as its first question: “What is a parody account?”

Whilst Fair Use wasn’t there as a policy until 2014, parody unofficially had a policy in 2009, an official one in 2010, updates, and another version in 2013 for the IPO. Biz Stone writes about lawyers, when he was at Google, saying of fake accounts “just say it is parody!”, and about the importance of parody. And indeed the parody policy has been translated much more widely than the Fair Use policy.

So, policies select bodies of law and align platforms to these bodies of law, in varying degree and depending on specific legitimation practices. Fair Use is strongly associated with US law, and embedding that in the translated policies aligns Twitter more to US law than they want to be. But parody has roots in free speech, and that is something that Twitter wishes to align itself with.

Visual Arts in Digital and Online Environments: Changing Copyright and Fair Use Practice among Institutions and Individuals – Patricia Aufderheide, Aram Sinnreich, American University, United States of America

Patricia: Aram and I have been working with the College Art Association, which brings together a wide range of professionals and practitioners in art across colleges in the US. They had a new code of conduct and we wanted to speak to them, a few months after that code of conduct was released, to see if it had changed practice and understanding. This is a group that uses copyrighted work very widely. And indeed one-third of respondents avoid, abandon, or are delayed in their work because of copyright issues.

Aram: Four-fifths of CAA members use copyrighted materials in their work, but only one-fifth employ fair use to do that – most always or usually seek permission. And of those that use fair use, there are some that always or usually do so. So there are real differences here. So, Fair Use is valued if you know about it and understand it… but a quarter of this group aren’t sure if Fair Use is useful or not. Now there is that code of conduct. There is also some use of Creative Commons and open licenses.

Of those that use copyrighted materials, 47% never use open licenses for their own work – there is a real reciprocity gap. Only 26% never use others’ openly licensed work, and only 10% never use others’ public domain work. Respondents value creative copying… 19 out of 20 CAA members think that creative appropriation can be “original”, and despite this group seeking permissions they also feel that creative appropriation shouldn’t necessarily require permission. This really points to an education gap within the community.

And 43% said that uncertainty about the law limits creativity. They think they would appropriate works more, they would publish more, they would share work online… These mirror fair use usage!

Patricia: We surveyed this group twice, in 2013 and in 2016. Much stays the same but there have been changes… In 2016, two-thirds had heard about the code, and a third had shared that information – with peers, in teaching, with colleagues. Their associations with the concept of Fair Use are very positive.

Aram: The good news is that use of the code does lead to change, even within 10 months of launch. This work was done to try to show how much impact a code of conduct has on understanding… And really there was a dramatic difference here. From the 2016 data, those who are not aware of the code look a lot like those who are aware but have not used it. But for those who use the code, there is a real difference… And more are using fair use.

Patricia: There is one thing we did outside of the survey… There have been dramatic changes in the field. A number of universities have changed journal policies to default to Fair Use – Yale, Duke, etc. There has been a lot of change in the field. Several museums have internally changed how they create and use their materials. So, we have learned that education matters – behaviour changes with knowledge confidence. Peer support matters and validates new knowledge. Institutional action, well publicized, matters. The newest are most likely to change quickly, but the most veteran are in the best position – it is important to have those influencers on board… And teachers need to bring this into their teaching practice.

Panel Q&A

Q1) How many are artists versus other roles?

A1 – Patricia) About 15% are artists, and they tend to be more positive towards fair use.

Q2) I was curious about changes that took place…

A2 – Aram) We couldn’t ask whether the code made you change your practice… But we could ask whether they had used fair use before and after…

Q3) You’ve made this code for the US CAA, have you shared that more widely…

A3 – Patricia) Many of the CAA members work internationally, but the effectiveness of this code in the US context is that it is about interpreting US Fair Use law – it is not a legal document but it has been reviewed by lawyers. But copyright is territorial which makes this less useful internationally as a document. If copyright was more straightforward, that would be great. There are rights of quotation elsewhere, there is fair dealing… And Canadian law looks more like Fair Use. But the US is very litigious so if something passes Fair Use checking, that’s pretty good elsewhere… But otherwise it is all quite territorial.

A3 – Aram) You can see in the data we hold that international practitioners have quite different attitudes to American CAA members.

Q4) You talked about the code, and changes in practice. When I talk to filmmakers and documentary makers in Germany, they were aware of Fair Use rights but didn’t use them, as they are dependent on the TV companies that buy their work and want every part of the rights cleared… They don’t want to hurt relationships.

A4 – Patricia) We always do studies before changes and it is always about reputation and relationship concerns… Fair Use only applies if you can obtain the materials independently… But then the question may be whether rights holders will be pissed off next time you need to licence content. What everyone told me was that we can do this but it won’t make any difference…

Chair) I understand that, but that question is about use later on, and demonstration of rights clearance.

A4 – Patricia) This is where change in US errors and omissions insurance makes a difference – that protects them. The film and television makers code of conduct helped insurers engage and feel confident to provide that new type of insurance clause.

Q5) With US platforms, as someone in Norway, it can be hard to understand what you can and cannot access and use on, for instance, YouTube. Also, will the algorithmic filtering processes of platforms take into account that they deal with content in different territories?

A5 – Aram) I have spoken to Google counsel about that issue of filtering by law – there is no difference there… But monitoring…

A5 – Amy) I have written about legal fictions before… They are useful for thinking about what a “reasonable person” is – and that can vary by jury and location, so writing that into policies helps to shape it.

A5 – Patricia) The jurisdiction is where you create, not where the work is from…

Q6) There is an indecency case in France which they want to try in French court, but Facebook wants it tried in US court. What might the impact on copyright be?

A6 – Arem) A great question but this type of jurisdictional law has been discussed for over 10 years without any clear conclusion.

A6 – Patricia) This is a European issue too – Germany has good exceptions and limitations, France has horrible exceptions and limitations. There is a real challenge for pan European law.

Q7) Did you look at all at the impact of advocacy groups who encouraged writing in/completion of replies on the DMCA? And was there any big difference between the farmers and car owners?

A7) There was a lot of discussion on the digital right to repair site, and that probably did have an impact. I did work on Net Neutrality before. But in any of those cases I take out boilerplate, and see what they add directly – but there is a whole other paper to be done on boilerplate texts and how they shape responses and the terms of additional comments. It wasn’t that easy to distinguish between farmers and car owners, but it was interesting how individuals established credibility. Farmers talked about the value of fixing their own equipment, of being independent, of a history of ownership. Car mechanics, by contrast, established technical expertise.

Q8) As a follow up: farmers will have had a long debate over genetically modified seeds – and the right to tinker in different ways…

A8) I didn’t see that reflected in the comments, but there may well be a bigger issue around micromanagement of practices.

Q9) Olivia, I was wondering if you were considering not only the rhetorical arguments of users, but also the way the techniques and tactics they used are received on the other side… What are the effective tactics there, or where do we locate the limits of the effectiveness of these layperson vernacular strategies?

A9) My goal was to see which frames of argument looked most effective. I think in the John Deere DMCA case that wasn’t that conclusive. It can be really hard to separate the NGO from the individual – especially when NGOs submit huge collections of individual responses. I did a case study on non-consensual pornography that was more conclusive in terms of which strategies were effective. The discourses I look at don’t look like legal discourse, but I look at the tone and content people use. So, on revenge porn, the law doesn’t really reflect user practice for instance.

Q10) For Amy, I was wondering… Is the problem that Fair Use isn’t translated… Or the law behind that?

A10 – Amy) I think Twitter in particular have found themselves in a weird middle space… Then the exceptions wouldn’t come up. But having it in English is the odd piece. That policy seems to speak specifically to Americans… But you could argue they are trying to impose it (maybe that’s a bit too strong) on all English-speaking territories. On YouTube all of the policies are translated into the same languages, including Fair Use.

Q11) I’m fascinated in vernacular understanding and then the experts who are in the round tables, who specialise in these areas. How do you see vernacular discourse use in more closed/smaller settings?

A11 – Olivia) I haven’t been able to take this up as so many of those spaces are opaque. But in the 2012 rulemaking there were some direct quotes from remixers. And there was a suggestion around DVD use that people should videotape the TV screen… and that seemed unreasonably onerous…

Chair) Do you foresee a next stage where you get to be in those rooms and do more on that?

A11 – Olivia) I’d love to do some ethnographic studies, to get more involved.

A11 – Patricia) I was in Washington for the DMCA hearings and those are some of the most fun things I go to. I know that the documentary filmmakers have complained about the cost of participating… But a technician from the industry gave 30 minutes of evidence on the 40 technical steps needed to handle analogue film… and showed that it’s not actually broadcast quality. It made them gasp. It was devastating and very visual information, and they cited it in their ruling… And similarly in the John Deere case the car technicians made an impact. By contrast a teacher came in to explain why copying material was important for teaching, but she didn’t have either people or evidence of what the difference is in the classroom.

Q12) I have an interesting case if anyone wants to look at it, around Wikipedia’s Fair Use issues around multimedia. Volunteers pre-emptively take a stricter approach as they don’t want lawyers to come in on that… And there are the Wikipedia policies there. There is also automation through bots to delete content without a clear Fair Use exception.

A12 – Arem) I’ve seen Fair Use misappropriated on Wikipedia… Copyright images used at low resolution and claimed as Fair Use…

A12 – Patricia) Wikimania has all these people who don’t want to deal with law on copyright at all! Wikimedia lawyers are in a really difficult position.

Intersections of Technology and Place (panel): Erika Polson, University of Denver, United States of America; Rowan Wilken, Swinburne Institute for Social Research, Australia; Germaine Halegoua, University of Kansas, United States of America; Bryce Renninger, Rutgers University, United States of America; Adrienne Russell, University of Denver, United States of America (Chair: Jessica Lingel)

Traces of our passage: Locative media and the capture of place data – Rowan Wilken

This is a small part of a book that I’m working on. And I am looking at how technologies are geolocating us… In space, in time, but more so the ways that they reveal our complex socio-technical context through place. And I’m seeing this from an anthropological point of view of places as having particular…

José van Dijck in her work on social media business models talks about the use of “location intelligence” as part of the social media ecosystem and economic system.

I want to focus in particular on Foursquare… It has changed significantly since its repositioning in 2014, and those changes in its own and the Swarm app seek to generate real time and even predictive recommendations. They do this through combining social data/social graph and location/Places Graph data. They look to understand people as nodes with edges of proximity, co-location, etc. And in places, the places are nodes, the edges are menus, recommendations, etc. So they have these two graphs, and the engineers seek to understand “What are the underlying properties and dynamics of these networks? How can we predict new connections? How do we measure influence?”. Their work now builds up this rich database of places and data around them.
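As a rough illustration of the graph model described above – people as nodes, with edges for co-location, and “predicting new connections” over that graph – here is a minimal sketch using a common-neighbours heuristic. The names and data are entirely hypothetical, and Foursquare’s actual models are of course not public.

```python
# Hypothetical sketch: a social graph of people joined by co-location
# edges, with simple link prediction via shared-neighbour counts.
from itertools import combinations

# Adjacency sets: who has checked in alongside whom (co-location edges).
social = {
    "ana": {"ben", "cal"},
    "ben": {"ana", "cal"},
    "cal": {"ana", "ben", "dee"},
    "dee": {"cal"},
}

def predict_new_connections(graph):
    """Score non-adjacent pairs by how many neighbours they share."""
    scores = {}
    for a, b in combinations(sorted(graph), 2):
        if b in graph[a]:
            continue  # already connected, nothing to predict
        shared = graph[a] & graph[b]
        if shared:
            scores[(a, b)] = len(shared)
    return scores

print(predict_new_connections(social))  # {('ana', 'dee'): 1, ('ben', 'dee'): 1}
```

The same scoring could be run over a bipartite person–place graph (people linked to venues they visit) to produce the venue recommendations the talk describes; real systems would weight edges by recency and frequency rather than count them flatly.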

And these changes have led to new repositioning… This has seen Foursquare selling advertising through predictive analysis… A second service, called Pinpoint, allows marketers to target users of Foursquare… and users beyond Foursquare. This is done through GPS locations, finding patterns and tracking shopping and dining routes…

In the last part of this talk I want to talk about Tim Ingold’s work. For Ingold our perception of place is less about the bird’s eye view of maps, and more about the walked and experienced route, based on the course of moving about in it, of ambulatory knowing. This is perceptual and wayfinding, less about co-ordinates, more about situating position in the context of moving, of what one knows about routing and moving.

So, my contention is that it’s wayfinding or mapping, not map making or map use, that is primarily of interest to these social platforms going forward. Ingold talks about how new maps come from replacement and changes over time… I think that is no longer the case, as what is of interest to companies like Foursquare is the digital trace of our passage, not the map itself.

“We know that right now we are not funky”: Placemaking practices in smart cities – Germaine Halegoua, University of Kansas

I am looking at attempts to use underused urban spaces, based on interviews with planners, architects, developers, about how they were developing these spaces – often on reclaimed land or infill – and about what makes them special and unique.

Placemaking is almost always defined as a bottom up process, often linked to home or making somewhere feel like home… But theories of placemaking are less often thought of as strategic – thinking of Kirkpatrick, or Le Corbusier – and the idea that these are spaces for dominant players: military, powerful people. So in these urban settings the strategic placemaking connects to powerful people, connected and valued around these international players.

I wanted to look at the differences between the planning behind these spaces and smart cities versus the lived experiences and processes. Smart cities are about urban imaginaries: sustainable urbanism – everything is LEED certified!; technoscientific urbanism – data capture is built in, data and technology are thought of as progressive and as solutions to our problems; urban triumphalism (Brenner & Schmid 2015). These smart cities are purported to be visionary designs, coming from the modern needs of people… Taking the best of global cities around the world, naming locations and designs coming in as fragments from other places. Digital media are used to show that this place works, as a place for ideas, a place to get things done… That they are like campus-based communities, like Silicon Valley, a better place than before…

There is this statistic that 70% of all people live in cities, and growing… But cities are seen as dumb, problematic, in need of updating… They need order, and smart cities are seen as a solution. There is an ordered view of the city as a lab – a showroom and demonstration space as well as a petri dish for transforming technology. And these are cities built of systems on top of systems – literally (Le Corbusier-like but with a flowing soft aesthetic) and a bringing of things together. So, in Songdo you see this range of services in the space. And in TechCity we see apps and connectedness within the home… Smart cities are monitoring traffic and centralised systems, monitoring biosigns, climate, etc… But the green spaces or sustainable urbanism are about getting you to live and linger… So you have this odd mixture of not spending time in the streets, and these green spaces to linger in…

But these are quite cold spaces… Vacancies are extremely high. They are seen as artificial. My talk quote is from a developer who feels that the solution is to bring in some funk… To programme serendipity into their lives… The answer is always more technology…

So a few themes here… There is the People Problem… attracting people to the place – not “funky”; placing people within the union of technology and physical design – the claim that tech puts man first and the needs of the end user… but there is also a sense of people as “bugs”. And I am producing all this data that isn’t about my experience of the city, but which shapes that experience.

Geo-social media and the quest for place on-the-go – Erika Polson, University of Denver

This is coming out of my latest book, a multi-site ethnographic project. In the recent work I have developed an idea of digital place making… And this has been about how location technology can be used to shape the space of mobile people.

Expatriation was previously a post-WWII experience, and a family affair… Often those assignments failed, sometimes because one partner (often female) couldn’t work. So, as corporations try to globalise there is a move to send younger, single assignees to replace families – they are cheaper and easier to relocate, they are more used to the idea of a global professional life and are enthusiastic.

And we don’t just see people moving once, we see serial global workers… The international experience can be seen as “a global lifestyle is seen as attractive and exciting” (Anne Marie Fetcher 2009(?)) but that may not reflect reality. There can be deep feelings of loneliness, the experience doesn’t match expectations, they miss out on families, they lack social connections and possibilities to socialise. Margaret Malewski writes in Generation Expatriot (2005) about how there can be an increasing dependency on friends at home, and the need for these expatriates to get out and meet people…

So, my work is based on a range of meetup apps, from Grindr and Tinder, to MeetUp, InterNations and (less of my focus) Couch Surfing… Tools to build connections and find each other. I have studied use of apps in Paris, Bangalore and Singapore. So this image is of a cafe in Paris full of people – the first meetup that I went to and it was intimidating to walk into but immediately someone approached… And I started to think about Digital Place-making about two months into the Paris experience when a friend wanted to meet for dinner and I was at a MeetUp, and he was super floored by his discomfort with talking to a bar full of strangers in Paris – he’s a local guy, he speaks perfect English, he’s very sociable… On any other night he would have owned the space but he was thrown by these expats making the space their own, through Meetup, through their profiles, through discourse of “who we are” and pre-articulation of some of the expectations and norms.

This made me think about the idea of Place and the feelings of belonging and place attachment (Coulthard and Ledema 2009), about shared meanings of place. We’ve seen lots of work on online world and how to create that sense of place, of attachment, or shared meaning.

So, if everyone is able to drop in and feel part of a place… And if professionals can do this, who else can? So, I’m excited to hear the next paper on Grindr. But it’s interesting to think about who is out-of-place, about the quality of place and place relations. And the fact that even as these people maintain this positive narrative of working globally, there is also a feeling of following a common template or script. And there are problems with place-on-the-go for social commitments, community building… Willingness to meet up again, to drop in rather than create anything.

Grindr – Bryce Renninger, Rutgers University, United States of America

I work on open government issues and the site of my work is Grindr – a location-based, mainly male, mainly gay and bi casual dating space. And where I am starting from is the idea that Grindr is killing the gay bar (or gayborhood or the gay resort town), which is part of the gay press narrative, for instance in articles on the Pines neighbourhood of Fire Island, from New York Magazine, quoting Ghaziani, author of There Goes the Gayborhood?, that having the app means they don’t need Boystown any more… And I think this narrative comes from concerns of valuing or not valuing these gay towns, resorts, bars, and of the willingness to defend those spaces. Bumgarner (2013) argues that the app does the same thing as the bar… But that assumes that the bar/place is only there to introduce people to each other for a narrow range of purposes…

And my way of thinking about this is to think of technologies in democratic ways… Sclove talks about design criteria for democratic technologies, mainly to do with local labour and contribution, but this can also be overlaid on social factors as well. And I think there is a space for democratically deliberating as sex publics. Michael Warner responds to Andrew Sullivan by problematizing his idea that “normal” is the place for queer people to exist. There are also authors writing on design in public sex spaces as a way to improve health outcomes.

The founder of Grindr says it isn’t killing the gay bar, and indeed provides a platform for them to advertise on. And a quote here of how it is used shows the wide range of uses of Grindr (beyond the obvious). I don’t think Ghaziani’s writing talks enough about what the gayborhoods and LGBT spaces are, how they can be class and race exclusive, fitting into gentrification of public spaces… And therefore I recommend Christina Lagazzi’s book.

One of the things I want to do with this work is to think about how narratives in which platforms play a part can be written about and spoken about, in ways that allow challenges to popular discourses of technological disruption. The idea that technological disruption is exciting is prevalent, and we aren’t doing enough to challenge that. This AirBnB billboard campaign – a kind of “Fuck You” to the San Francisco authorities and the legal changes to limit their business – is a reminder that we can respond to disruption…

I’m out of time but I think we need to think critically, about social roles of technology and how technological organisations figure into that… And to acknowledge ethnography and press.

Defining space through activism (and journalism): the Paris climate summit – Adrienne Russell, University of Denver

I’ve been working with researchers around the world on the G8 Climate Summits for around ten years, and coverage around them. I’ve been looking at activists and how they kind of liven up the spaces where meetings take place…

But let me start with an image of Black Lives Matter protestors from the Daily Mail commenting on protestors using mobile phones. It exemplifies the idea that being on your phone means that you are not fully present… If they are on their phones, they aren’t that serious. This fits a long-running type of coverage of protests that seems to suggest that in-person protests are more effective and authentic than social media. Although our literature shows that it is both approaches in combination that are most effective. And then there is the issue of official versus unofficial action. Activists at the 2015 Paris protests were especially reliant on online work as protests were banned, public spaces were closed, activists were placed under house arrest… So they had been preparing for years but their action was restricted.

So, one of the ways that protestors took action was through tools like Climate Games, a real-time game which enabled you to see real-time photography, but also to highlight surveillance… It was non-violent but called police and law enforcement “team blue”, and lobbyists and greenwashers were “team grey”!

Probably many of you saw the posters across Paris – mocking corporate ad campaigns – e.g. a VW ad saying “we are sorry we got caught”. So you saw these really interesting alternative narratives and interpretations. There was also a hostel called Place to B which became a de facto media centre for protestors, with interviews being given throughout the event. There was a hub of artists who raised issues faced in their own countries. And outside the city there was a venue where they held a mock trial of Exxon vs the People with prominent campaigners from across the globe; this was on the heels of releases showing Exxon had evidence of climate change twenty years back and ignored it. This mock trial made a real media event.

So all these events helped create an alternative narrative. And that crackdown on protest reflects how we are coming to understand this type of top-down event… And resistance through media and counter narratives to mainstream media running predictable official lines.

Panel Q&A
Q1) I have a question, maybe a pushback to you Germaine… Or maybe not… Who are the “they” you are talking about… You talk about city planners… I admire the critique so I want to know who “they” are, and should we problematise that, especially in contemporary smart cities discourses…
A1 – Germaine) It’s Cisco, Siemens, IBM… Those with smart cities labs… Those are the “they”. And I’ve seen the networking of the expert – it is always the same people… The language is really specific and consistent. Everyone is using this term “solutions”… This is the language to talk about the problems… So “they” are transnational, often US-based tech corporations with in-house smart cities labs.
Q1) But “they” are also in meetings across the world with lots of different stakeholders, including those people, but others are there. It looks like you are pulling from corporate discourses… Have you traced how that is translating to everyday city planners, who host the conferences and events they all meet at… And how that plays out and is adopted…
A1 – Germaine) The furthest I’ve gone with this is to CIOs and city planners… But it’s a really interesting question…
Q1) I think it would be interesting and a direction we need to take… How discourses played out and adopted.
Q2) So I was wanting to follow up that question by asking about the role of governments and funders. In the UK right now there is a big push from Government to engage in smart cities, and that offers local authorities a source of capital income that they are keen to take, but then they need providers to deliver that work and are turning to these private sector players…
A2) The cities I have looked at show no vacancy rates, or very low vacancy rates… Of the need to build more units because all are already sold. Some are dormitories for international schools… That lack of join-up between ownership and the real estate narrative really differs from lived experience. In Kansas City they are retrofitting a smart city, and taking on that discourse of efficiencies and cost effectiveness…
Q3) How do narratives here fit versus what we used to have as the Cultural Cities narrative…. Who is pushing this? It’s not the same people from civil society perhaps?
A3 – Erika) When I was in Singapore I had this sense of an almost sterile environment. And I learned that the red light district was cleaned up, the transvestites and sex workers moved out… People thought it was too boring… And they started hiring women to dress as men dressed as women to liven it up…
Q4 – Germaine) I wanted to ask about the discourse around the gaybourhood and where they come from…
A4 – Bryce) I think there are particular stakeholders… So one of the articles I showed was about the closure of one of the oldest gay bars in New York, and the idea that Grindr caused that, but someone pointed out in the comments that actually real estate prices are the issue. And there is also this change that came from Mayor Giuliani wanting Christopher Street to be more consistent with the rest of New York…
Q5) I was wondering how that location data and tracking data from Rowan’s paper connects with Smart Cities work…
A5 – Germaine) That idea of tracing is common, but the idea of relational space, whilst there, doesn’t really work as it isn’t made yet… There isn’t sufficient density of people to do that… They need the people for that data. In the social media layer it’s relatively invisible, it’s there… But there really is something connected there.
A5 – Rowan) With the move to Pinpoint technology at Foursquare, they may be interested in smart cities… But quite a lot of the critiques I’ve read say that it’s just about consumption… I’m tired of that… I think they are trying to do something more interesting, to get at the complexity of everyday life… In Melbourne there was a planned development called Docklands… There is nothing there on Foursquare…
A5 – Erika) I am surprised that they aren’t hiring people to be people…
A5 – Rowan) I was thinking about that William Gibson comment about street signs. One of the things about Docklands was that it had high technology and good connections but low population so it did become a centre for crime.
Q6) I work with low income/low socio-economic groups – how are people ensuring that those communities are part of smart cities, or that their interests are voiced?
A6 – Germaine) In Kansas City Google wired neighbourhoods, but that also raised issues around neighbourhoods that were not reached… And that came from activists. Cable wasn’t fitted for poor and middle income communities, but data centres were also located in them. You also see small mesh and line-of-sight networks emerging as a counter measure in some neighbourhoods. In that place it was activists and the press… But in Kansas City it is being picked up as a story.
A6 – Rowan) In my field Jordan Frith does great work on this area, particularly on issues of monolingualism and how that excludes communities.
A6 – Erika) Tim Cresswell does really interesting work in this space… As I’ve thought about place and whose place a particular space is, I’ve been thinking about activists and police in the US. It would be interesting to look at.
A6 – Adrienne) People who use Tor, who resist surveillance, are well off and tech savvy, almost exclusively…
PS-32: Power (chair: Lina Dencik)
Lina: We have another #allfemalepanel for you! On power. 
The Terms Of Service Of Online Protest – Stefania Milan, University of Amsterdam, The Netherlands.
This is part of a bigger project which is slowly approaching book stage, so I won’t sum everything up here but I will give an overview of the theoretical position.
So, one of our starting points is the materiality and broker role of semiotechnologies, and particularly the mediation of social media and the ways that materiality contributes here. I am a sociologist and I’m looking at change. I have been accused of being a techno-determinist… Yes, to an extent. I play with this. And I am working from the perspective that the algorithmically mediated environment of social media has the ability to create change.
I look at a micro level and meso level, looking at interactions between individuals and how that makes differences. Collective action is a social construct – the result of interactions between social actors (Melucci 1996) – not a huge surprise. Organisation is a communicative and expressive activity. And there is a centrality of sense-making activities (i.e. how people make sense of what they do). Meaning construction is embedded here. That shouldn’t be a surprise either. Media tech and the internet are not just tools but both metaphors and enablers of a new configuration of collective action: cloud protesting. That’s a term I stick with – despite much criticism – as I like the contradiction that it captures… the direct, on the ground, individual, and the large, opaque, inaccessible.
So, the features of “cloud protesting” start with the cloud as an “imagined online space” where resources are stored. In social movements there is something important there around shared resources. In this case the resources are soft resources – information and meaning-making resources. Resources are the “ingredients” of mobilisation. Cyberspace gives these soft resources an (immaterial) body.

The cloud is a metaphor for organisational forms… And I relate that back to organisational forms of the 1960s, and to later movements, and now the idea of the cloud protest. The cloud is also an analogy for individualisation – many of the nodes are individuals, who reject pre-packaged non-negotiable identities and organisations. The cloud is a platform where the movement’s resources can be… But a cloud movement does not require commitment and can be quite hard to activate and mobilise.

Collective identity, in these spaces, has some particular aspects. The “cloud” is an enabler, and you can identify “we” and “them”. But social media spaces overly emphasise visibility over collective identity.

The consequences of the materiality of social media are seen in four mechanisms: centrality of performance; interpellation to fellows and opponents; expansion of the temporality of the protest; reproducibility of social action. Now much of that enables new forms of collective action… But there are both positive and negative aspects. Something I won’t mention here is surveillance and its consequences for collective action.

So, what’s the role of social media? Social media act as intermediaries, enabling speed in protest organisation and diffusion – shaping and constraining collective action too. The cloud is grounded in everyday technology; everyone has it right there in their pocket. The cloud has the power to deeply influence not only the nature of the protest but also its tactics. Social media enable the creation of a customisable narrative.

Hate Speech and Social Media Platforms – Eugenia Siapera, Paloma Viejo Otero, Dublin City University, Ireland

Eugenia: My narrative is also not hugely positive. We wanted to look at how social media platforms themselves understand, regulate and manage hate speech on their platforms. We did this through an analysis of terms of service. And we did in-depth interviews with key informants – Facebook, Twitter, and YouTube. These platforms are happy to talk to researchers but not to be quoted. We have permission from Facebook and Twitter. YouTube have told us to re-record interviews with lawyers and PR people present.

So, we had three analytical steps – starting by looking at what constitutes hate speech.

We found that there is no use of definitions of hate speech based on law. Instead they put in reporting mechanisms and use that to determine what is/is not hate speech.

Now, we spoke to people from Twitter and Facebook (indeed there are a number of staff members who move from one to another). The tactic at Facebook was to make rules about what will be taken down (and what won’t), hiring teams to work to apply them, and then help ensure the rules are appropriate. Twitter took a similar approach. So, the definition largely comes from what users report as hate speech rather than from external definitions or understandings.

We had assumed that the content would be manually and algorithmically assessed, but actually reports are reviewed by real people. Facebook have four teams across the world. There are native speakers – to ensure that they understand context – and they prioritise self-harm and some other categories.

Platforms are reactively rather than proactively positioned. Take downs are not based on the number of reports. Hate speech is considered in context – a compromising selfie of a young woman in the UK isn’t hate speech… Unless in India where that may impact on marriage (see Hot Girls of Mumbai – in that case they didn’t take it down on that basis but did remove it directly with the …). And if in doubt they keep the content up.

Twitter talk about reluctance to share information with law enforcement, protective of users, protective of freedom of speech. They are not keen to remove someone, would prefer counter arguments. And there are also tensions created by different local regulations and the global operations of the platforms – tension is resolved by compromise (not the case for YouTube).

A Twitter employee talked about the challenges of meeting with representatives from government, where there is tension between legislation and commercial needs, and the need for consistent handling.

There is also a tension between the principled stance assumed by social media corporations that sends the user to block and protect themselves first – a focus on safety and security and personal responsibility. And they want users to feel happy and secure.

Some conclusions… Social media corporations are increasingly acquiring state-like powers. Users are conditioned to behave in ways conforming to social media corporations’ liberal ideology. Posts are “rewarded” by staying online but only if they conform to social media corporations’ version of what constitutes acceptable hate speech.

#YesAllWomen (have a collective story to tell): Feminist hashtags and the intersection of personal narratives, networked publics, and intimate citizenship – Jacqueline Ryan Vickery, University of North Texas, United States of America

The original idea here was to think about second wave feminism and the idea of sharing personal stories and making the personal political. And how that looks online. Building on Plummer’s work (2003) in this area. All was well… And then I got stuck down the rabbit hole of publics and the public discourses that are created when people share personal stories in public spaces… So I have tried to map these aspects. Thinking about the goals of hashtags and who started them as well… not something non-academics tend to look at. I also will be talking about hashtags themselves.

So I tried to think about and map goals, political and affective aspects, and the affordances and relationships around these. The affordances of hashtags include:

  1. Curational – immediacy, reciprocity and conversationality (Papacharissi 2015)
  2. Polysemic – plurality, open signifiers, diverse meanings (Fiske 1987)
  3. Memetic – replicable, ever-evolving, remix, spreadable cultural information (Knobel and Lankshear 2007)
  4. Duality in communities of practice – opposing forces that drive change and creativity, local and broader for instance (Wenger 1998)
  5. Articulated subjectivities – momentarily jumping in and out of hashtags without really engaging beyond brief usage.

And how can I understand political hashtags on Twitter and their impact? Are we just sharing amongst ourselves, or can we measure that? So I want to think about agenda setting and re-framing – the hashtags I am looking at speak to a public event, or speak back to a media event that is taking place another way. We see co-option by organisations etc. And we see (strategic) essentialism. Awareness/mobilisation. Amplification/silencing of (privileged/marginalised) narratives. So #YesAllWomen was adopted by many privileged white feminists but was started by a biracial Muslim woman. Indeed all of the hashtags I study were started by non-white women.

So, looking at #YesAllWomen: it was in response to a terrible shooting, where the shooter had written a diatribe about women denying him. The person who created that hashtag left Twitter for a while but has now returned. So we do see lots of tweets that use that hashtag, responding with personal experiences and comments. But it became problematic, too open… That memetic affordance – a controversial male monologist used it as a title for his show, others used it abusively and for trolling, and beauty brands were there too.

The #WhyIStayed hashtag was started by Beverly Gooden in response to commentary asking why a woman hadn’t left her partner, when the media wasn’t asking why that man had beaten and abused his partner. So people shared real stories… But also a pizza company used it – though they apologised and admitted not researching it first. Some found the hashtag traumatic… But others shared resources for other women here…

So, I wanted to talk about how these spaces are creating these networked publics, and they do have power to effect change. I also want to think about that idea of openness, of lack of control, and the consequences of that openness. #YesAllWomen has lost its meaning to an extent, and is now a very white hashtag. But if we look at these and think of them with social theories we can think about what this means for future movements and publicness.

Internet Rules, Media Logics and Media Grammar: New Perspectives on the Relation between Technology, Organization and Power – Caja Thimm, University of Bonn, Germany

I’m going to report briefly on a long-term project on Twitter funded by a range of agencies. There is also a book coming on Twitter and the European Election. So, where do we start… We have Twitter. And we have tweets in French – e.g. from Marine Le Pen – but we see tweets in other languages too – emoticons, standard structures, but also visual storytelling – images from events.

We have politicians, witnesses, and we see other players, e.g. the police. So first of all we wanted a model for tweets and how we can understand them. So we used the Functional Operator Model (Thimm et al 2014) – but that’s descriptive – great for organising data but not for analysing and understanding platforms.

So, we started with a conference on Media Logic, an old concept from the 1970s. Media Logic offers an approach to develop parameters for a better analysis of such new forms of “media”. It defines players, objectives and power, and how players interact and what they do (e.g. how others conquer a hashtag, for instance). Consequently media logics can be considered as a network of parameters.

So, what are the parameters of Media Logics that we should understand?

  1. Media Logic and communication cultures. For instance how politicians and political parties take into account media logic of television – production routines, presentation formats (Schulz 2004)
  2. Media Logic and media institutions – institutional and technological modus operandi (Hjarvard 2014)
  3. Media Grammar – a concept drawn by analogy with language.

So, let’s think about the constituents of “Media Grammar”. Periscope came out of a need, a gap… So you have Surface Grammar – visible and accessible to the user (language, semiotic signs, sounds etc). Surface Grammar is (sometimes) open to the creativity of users. It guides use through media.

(Constitutive) Property Grammar is different. It is constitutive for the medium itself, and determines the rules at the functional level of the surface grammar. It consists of algorithms (though not exclusively). It is not accessible except to the platform itself. And surface grammar and property grammar form a reflexive feedback loop.

We also see van Dijck and Poell (2013) talking about social media as powerful institutions, so the idea here is to connect social media grammar to understand that… This opens up the focus on the open and hidden properties of social media and their interplay with communicative practices. Social media are differentiated, segmented and diverse to such a degree that it seems necessary to focus in more to gain a better idea of how we understand them as technology and society…

Panel Q&A

Q1) A general question to start off. You presented a real range of methodologies, but I didn’t hear a lot about practices and what people actually do, and how that fits into your models.

A1 – Caja) We have a six year project, millions of tweets, and we are trying to find patterns of what they do, and who does what. There are real differences in usage but we are still working on what those mean.

A1 – Jacqueline) I think that when you look at very large hashtags, even #blacklivesmatter, you do see community participation. But the tags I’m looking at are really personal, not “Political”; these are using hashtags as a momentary act in some way, not really a community of practice in a sustainable movement, but some are triggering bigger movements and engagement though…

A1 – Eugenia) We see hate speech being gamed… People put outrageous posts out there to see what will happen, if they will be taken down…

Q2) I’ve been trying to find an appropriate framework… The field is so multidisciplinary… For a study I did on Native American activists, we saw interest groups – discursive groups – were loosely stitched together with #indigenous. I’m sort of using the phrase “translator” to capture this. I was wondering if you had any thoughts on how we navigate this…

A2 – Caja) It’s a good question… This conference is very varied, there are so many fields… Socio-linguistics has some interesting frameworks for accommodations in Twitter. No-one seems to have published on that.

A2 – Jacqueline) I think understanding the network, the echo chamber effects, mapping of that network and how the hashtag moves, might be the way in there…

Q2) That’s what we did, but that’s also a problem… But hashtag seems to have a transformative impact too…

Q3) I wonder if we say Social Media Logic, do we lose sight of the overarching issue…

A3 – Caja) I think that Media Logic is in really early stages… It was founded in the 1970s when media was so different. But there are real power asymmetries… And I hope we find a real way to bridge the two.

Q4) Many of these arguments come down to how much we trust the idea of the structure in action. Eugenia talks about creating rules iteratively around the issue. Jacqueline talked about the contested rules of play… It’s not clear of who defines those terms in the end…

A4 – Eugenia) There are certain media logics in place now… But they are fast moving as social media move to monetise, to develop, to change. Twitter launches Periscope, Facebook then launches Facebook Live! The grammar keeps on moving, aimed towards the users… Everything keeps moving…

A4 – Caja) But that’s the model. The dynamics are at the core. I do believe in media grammar at the level of small elements that are magic – like the hashtag, which has transcended the platform and even the written form. But it’s about how they work, and whether there are logics inscribed.

A4 – Stefania) There are, of course, attempts made by the platform to hide the logic, and to hide the dynamics of the logic… Even at a radical activist conference, people cannot imagine their activism without the platform – and that statement also comes from a belief that they understand the platform.

Q5) I study hate speech too… I came with my top five criticisms but you covered them all in your presentation! You talked about location (IP address) as a factor in hate speech, rather than jurisdiction.

A5 – Eugenia) I think they (nameless social platform) take this approach in the same way that they do for take down notices… But they only do that for France and Germany where hate speech law is very different.

A5 – Caja) There is a study that has been taking place about take downs and the impact of pressure, politics, and effect across platforms when dealing with issues in different countries.

A5 – Eugenia) Twitter has a relationship with NGOs, and prioritises dealing with their requests, sometimes automatically. They give guidance on how to do that, but they are outsourcing that process to these users…

Q6) I was thinking about platform logics and business logics… And how the business models are part of these logics. And I was wondering if you could talk to some of the methodological issues there… And the issue of the growing powers of governments – for instance Benjamin Netanyahu meeting Mark Zuckerberg and talking to him about taking down Arab journalists.

A6 – Eugenia) This is challenging… We want to research them and we want to critique them… But we don’t want to find ourselves blacklisted for doing this. Some of the people I spoke to are very sensitive about, for instance, Palestinian content and when they can take it down. Sometimes though platforms are keen to show they have the power to take down content…

Q7) For Eugenia, you had very good access to people at these platforms. Not surprised they are reluctant to be quoted… But that access is quite difficult in our experience – how did you do it?

A7) These people live in Dublin so you meet them at conferences; there are crossovers through shared interests. Once you get in it’s easier to meet and speak to them… Speaking is ok; quoting and identifying names in our work is different. But it’s not just in social media…

Comment) These people really are restricted in who they can talk to… There are PR people at one platform… You ask for comparative roles and use that as a way in… You can start to sidle inside. But mainly it’s the PR people you can access… I’ve had some luck referring to role area at a given company, rather than by name.

Q8 – Stefania) I was wondering about our own roles, in this room, and the issue of agency and publics…

A8 – Jacqueline) I don’t think publics take agency away; in the communities I look at these women benefit from the publics, and from sharing… But actually what we understand as publics varies… So in some publics some talk about exclusion of, e.g., women or people of colour, but there are counter publics…

A8 – Caja) Like you were saying there are mini publics and they can be public, and extend out into media and coverage. I think we have to look beyond the idea of the bubble… It’s really fragmented and we shouldn’t overlook that…

And with that, the conference is finished. 

You can read the rest of my posts from this week here:

Thanks to all at AoIR for a really excellent week. I have much to think about, lots of contacts to follow up with, and lots of ideas for taking forward my own work, particularly our new YikYak project.

Oct 07 2016
 

PS-15: Divides (Chair: Christoph Lutz)

The Empowered Refugee: The Smartphone as a Tool of Resistance on the Journey to Europe – Katja Kaufmann

For those of you from other continents: we had a great number of refugees coming to Europe last year, from Turkey, Syria, etc., who were travelling to Germany and Sweden – and Vienna, where I am from, was also a hub. Some of these refugees had smartphones, and that was covered in the (right wing) press, criticising this group’s ownership of devices. But it was not clear how many had smartphones or how they were being used, and that’s what I wanted to look at.

So we undertook interviews with refugees to see if they used them, and how they used them. We were researching empowerment by mobile phones, following Svensson and Wamala Larsson (2015) on the role of the mobile phone in transforming the capabilities of users. Also with reference to N. Kabeer (1999), A. Sen (1999) etc. on meanings of empowerment in these contexts. Smith, Spence and Rashid (2011) describe mobiles and their networks altering users’ capability sets, and phones increasing access to flows of information (Castells 2012).

So, I wanted to identify how smartphones were empowering refugees through: gaining an advantage in knowledge from the experiences of other refugees; sensory information; cross-checking information; and capabilities to oppose the actions of others.

In terms of an advantage in knowledge, refugees described gaining knowledge from previous refugees through reports, routes, maps, administrative processes, warnings, etc. This was through social networks and Facebook groups in particular. So, a male refugee (age 22) described learning which people smugglers could be trusted, and which could not. And another (same age) felt that smartphones were essential to being able to get to Europe – because you find information, plan, check, etc.

So, there was retrospective knowledge here, but also engagement with others during their refugee experience and with those ahead on their journey. This was mainly in WhatsApp. So a male refugee (aged 24) described being in Macedonia and speaking to refugees in Serbia, finding out the situation. This was particularly important last year when approaches were changing, and border access changed on an hour by hour basis.

In terms of Applying Sensory Abilities, this was particularly manifested in identifying one’s own GPS position – whilst crossing the Aegean or woods. Finding the road with their GPS, or identifying routes and maps. They also used GPS to find other refugees – friends, family members… Using location based services was also very important as they could share data elsewhere – sending a GPS location to family members in Sweden, for instance.

In terms of Cross-checking information and actions, refugees were able to track routes whilst in the hands of smugglers. A male Syrian refugee (aged 30) checked information every day whilst with people smugglers, to make sure that they were being taken in the right direction – he wanted to head west. But it wasn’t just routes: it was also weather conditions and rumours – cross-checking weather conditions before entering a boat. A female Syrian refugee downloaded an app to check conditions and ensure her smuggler was honest and her trip would be safer.

In terms of opposing the actions of others, this was about being capable of opposing orders of authorities, potential acts of (police) violence, risks, fraud attempts, etc. Also disobedience by knowledge – the Greek government gave orders about the borders, but smartphones allowed annotated map sharing that allowed orders to be disobeyed. And access to timely information – exchange rates for example – one refugee described negotiating the price of changing money down by Google searching for the rate. And opposition was also about a means to apply pressure – threatening with, or publishing, photos. A male refugee (aged 25) described holding up phones to threaten to document police violence, and that was impactful. Also some refugees took pictures of people smugglers as a form of personal protection and information exchange, particularly with publication of images as a threat held in case of mistreatment.

So, in summary, the smartphone acted as a tool of empowerment – and of resistance – for refugees on their journey.

Q&A

Q1) Did you have any examples of privacy concerns in your interviews, or was this a concern for later perhaps?

A1) Some mentioned this; some felt some apps and spaces are more scrutinised than others. There was concern that others may have been identified through Facebook – a feeling rather than proof. One said that she does not send her parents any pictures in case she was mistaken by the Syrian government for a fighter. But mostly privacy wasn’t an immediate concern; access to information was – and it was very successful.

Q2) I saw two women in the data here, were there gender differences?

A2) We tried to get more women but there were difficulties there. On the journey they were using smartphones in similar ways – but I did talk to them and they described differences in use before their journey and talked about picture taking and sharing, the hijab effect, etc.

Social media, participation, peer pressure, and the European refugee crisis: a force awakens? – Nils Gustafsson, Lund university, Sweden

My paper is about receiving/host nations. Sweden took in 160,000 refugees during the crisis in 2015. I wanted to look at this as it was a strange time to live through. A lot of people started coming in late summer and early autumn… Numbers were rising. At first the response was quite enthusiastic and welcoming in host populations in Germany, Austria, Sweden. But as it became more difficult to cope with larger groups of people, there were changes and organising to address the challenge.

And the organisation will remind you of Alexander (??) on the “logic of collective action” – where groups organise around shared ideas that can be joined, almost a brand, e.g. “refugees welcome”. And there were strange collaborations between government, NGOs, and then these ad hoc networks. But there was also a boom and bust aspect here… In Sweden there were statements about opening hearts, of not shutting borders… But people kept coming through autumn and winter… By December, Denmark, Sweden, etc. did a 180 degree turn, closing borders. There were border controls between Denmark and Sweden for the first time in 60 years. And that shift had popular support. And I was intrigued by this. This work is all part of a longer 3 year project on young people in Sweden and their political engagement – how they choose to engage, how they respond to each other. We draw on Bennett & Segerberg (2013), social participation, social psychology, and the notion of “latent participation” – where people are waiting to engage so just need asking to mobilise.

So, this is work in progress and I don’t know where it will go… But I’ll share what I have so far. And I tried to focus on recruitment – I am interested in when young people are recruited into action by their peers. I am interested in peer pressure here – friends encouraging behaviours, particularly important given that we develop values as young people that have lasting impacts. But also information sharing through young people’s networks…

So, as part of the larger project, we have a survey, so we added some specific questions about the refugee crisis to that. So we asked, “you remember the refugee crisis, did you discuss it with your friends?” – 93.5% had, and this was not surprising as it is a major issue. When we asked if they had discussed it on social media it was around 33.3% – much lower, perhaps due to the controversy of the subject matter; this number was also similar for those in the 16-25 year old age group.

We also asked whether they did “work” around the refugee crisis – volunteering or work for NGOs, traditional organisations. Around 13.8% had. We also asked about work with non-traditional organisations and 26% said that they had (in the 16-25 age group, it was 29.6%), which seems high – but we have nothing to compare this to.

Colleagues and I looked at Facebook refugee groups in Sweden – those that were open – and I scraped these (n=67) and coded them as either set up as groups by NGOs, churches, mosques and other traditional organisations, or as networks… Looking across autumn and winter of 2015 the posts to these groups looked consistent across traditional groups, but there was a major spike from the networks around the crisis.

We have also been conducting interviews in Malmo, with 16-19 and 19-25 year olds. They commented on media coverage, and the degree to which the media influences them, even with social media. Many commented on volunteering at the central station, receiving refugees. Some felt it was inspiring to share stories, but others talked about their peers doing it as part of peer pressure, with critical commenting about “bragging” in Facebook posts. Then as the mood changed, the young people talked about going to the central station being less inviting, and about fewer Facebook posts… about feeling that “maybe it’s ok then”. One of our participants was from a refugee background…

Q&A

Q1) I think you should focus on where interest drops off – there is a real lack of research there. But on the discussion question, I wasn’t surprised that only 30% discussed the crisis there really.

A1) I wasn’t too surprised either here as people tend to be happier to let others engage in the discussion, and to stand back from posting on social media themselves on these sorts of issues.

Q2) I am from Finland, and we also helped in the crisis, but I am intrigued at the degree of public turnaround as it hasn’t shifted like that in Finland.

A2) Yeah, I don’t know… The middleground changed. Maybe something Swedish about it… But also perhaps to do with the numbers…

Q2) I wonder… There was already a strong anti-immigrant movement from 2008, I wonder if it didn’t shift in the same way.

A2) Yes, I think that probably is fair, but I think how the Finnish media treated the crisis would also have played a role here too.

An interrupted history of digital divides – Bianca Christin Reisdorf, Whisnu Triwibowo, Michael Nelson, William Dutton, Michigan State University, United States of America

I am going to switch gears a bit with some more theoretical work. We have been researching internet use and how it changes over time – from a period where there was very little knowledge of or use of the internet to the present day. And I’ll give some background, then talk about survey data – but that is an issue in itself… I’ll be talking about quantitative survey data as it’s hard to find systematic collection of qualitative research instruments that I could use in my work.

So we have been asking about internet use for over 20 years… And right now I have data from Michigan, the UK, and the US… I have also just received further data from South Africa (this week!).

When we think about Digital Inequality, the idea of the digital divide emerged in the late 1990s – there was government interest, data collection, academic work. This was largely about the haves vs. have-nots; on vs. off. And we saw a move to digital inequalities (Hargittai) in the early 2000s… Then it went quiet, aside from work from Neil Selwyn in the UK, and from Helsper and Livingstone… But the discussion has moved on to skills…

Policy wise we have also seen a shift… Lots of policies around digital divide up to around 2002, then a real pause as there was an assumption that problems would be solved. Then, in the US at least, Obama refocused on that divide from 2009.

So, I have been looking at data from questionnaires from Michigan State of the State Survey (1997-2016); questionnaires from digital future survey in the US (2000, 2002, 2003, 2014); questionnaires from the Oxford Internet Surveys in the UK (2003, 2005, 2007, 2009, 2013); Hungarian World Internet Project (2009); South African World Internet Project (2012).

Across these data sets we have looked at questionnaires and the frequency of use of particular questions on use, on lack of use, etc. When internet penetration was lower there was a lot of explanation in questions, but we have shifted away from that, so that we assume that people understand them… And we’ve never returned to that. We’ve shifted to device questions, but we don’t ask beyond that. We asked about number of hours online… But that increasingly made less sense – we do that less as the answer is essentially “all day” – shifting instead to how frequently they go online.

Now the State of the State Survey in Michigan is different from the other data here – all the others are World Internet Project surveys but SOSS is not looking at the same areas, as it is not necessarily run by internet researchers. In Hungary (2009 data) similar patterns of question use emerged, but with a particular focus on mobile use. But the South African questionnaire was very different – they ask how many people in the household are using the internet – we ask about the individual but not others in the house, or others coming to the house. South Africa has around 40% penetration of internet connection (at least in 2012, when we have data), so that is a very different context. There they ask about lack of access and use, and the reasons for that. We ask about use/non-use rather than reasons.

So there is this gap in the literature; there is a need for quantitative and qualitative methods here. We also need to consider other factors, particularly technology itself being a moving target – in South Africa they ask about internet use and also Facebook – people don’t always identify Facebook as internet use. Indeed so many devices are connected – maybe we need…

Q&A

Q1) I have a question about the questionnaires – do any ask about costs? I was in Peru and lack of connections, but phones often offer free WhatsApp and free Pokemon Go.

A1) Only the South African one asks that… It’s a great question though…

Q2) You can get Pew questionnaires and also Ofcom questionnaires from their website. And you can contact the World Internet Project directly… And there is an issue with people not knowing if they are on the internet or not – increasingly you ask a battery of questions… and then filtering on that – e.g. if you use email you get counted as an internet user.

A2) I have done that… Trying to locate those questionnaires isn’t always proving that straightforward.

Q3) In terms of instruments – maybe there is a need to develop more nuanced questionnaires there.

A3) Yes.

Levelling the socio-economic playing field with the Internet? A case study in how (not) to help disadvantaged young people thrive online – Huw Crighton Davies, Rebecca Eynon, Sarah Wilkin, Oxford Internet Institute, United Kingdom

This is about a scheme called the “Home Access Scheme” and I’m going to talk about why we could not make it work. The origins here were a city council’s initiative – they came to us. DCLG (2016) data showed 20-30% of the population were below the poverty line, and we knew around 7-8% locally had no internet access (known through survey responses). And the players here were researchers, local government, schools, and also an (unnamed) ISP.

The aim of the scheme was to raise attainment in GCSEs, to build confidence, and to improve employability skills. The schools had responsibility to identify students in need, to procure laptops, memory sticks and software, and to provide regular, structured in-school pastoral skills and opportunities – not just in computing class. The ISP was to provide set up help, technical support, and free internet connections for 2 years.

This scheme has been running two years, so where are we? Well we’ve had successes: preventing arguments and conflict; helping with schoolwork and job hunting; saving money; and improving access to essential services – this is partly because cost cutting by local authorities has moved transactions online, like bidding for council housing, repeat prescriptions, etc. There was also some intergenerational bonding as families shared interests. Families commented on the success and opportunities.

We did 25 interviews, 84 1-1 sessions in schools, 3 group workshops, 17 ethnographic visits, plus many more informal meet ups. So we have lots of data about these families, their context, their lives. But…

Only three families had consistent internet access throughout. Only 8 families are still in the programme. It fell apart… Why?

Some schools were so nervous about use that they filtered and locked down their laptops. One school used the scheme money to buy teacher laptops, gave students old laptops instead. Technical support was low priority. Lead teachers left/delegated/didn’t answer emails. Very narrow use of digital technology. No in-house skills training. Very little cross-curriculum integration. Lack of ICT classes after year 11. And no matter how often we asked about it we got no data from schools.

The ISP didn’t set up collections, didn’t support the families, didn’t do what they had agreed to. They tried to bill families and one was threatened with debt collectors!

So, how did this happen? Well maybe these are neoliberalist currents? I use that term cautiously but… We can offer an emergent definition of neoliberalism from this experience.

There is a neoliberalist disfigurement of schools: teachers under intense pressure to meet auditable targets; the scheme’s students subject to a range of targets used to problematise a school’s performance – exclusions, attendance, C grades; the scheme shuffled down priorities; ICT not deemed academic enough under Govian school changes; and learning stripped back to a narrow range of subjects and focused towards these targets.

There were effects of neoliberalism on the city council: targets and a “more for less” culture; the scheme disincentivised; erosion of the authority of democratic institutions such as councils – schools beyond the authority’s control, and high turnover of staff.

There were neoliberalist practices at the ISP: commodifying philanthropy; they could not help but treat families as customers. And there were dysfunctional mini-markets: they subcontracted delivery and set up; they subcontracted support; they charged for support and charged for internet even when they couldn’t help…

Q&A

Q1) Is the problem really digital divides, or divides more broadly… Any attempt to overcome class separation and marketisation is working against the attempts to fix this issue here.

A1) We have a paper coming and yes, there were big issues here for policy and a need to be holistic… We found parents unable to attend parents’ evenings due to shift work, and nothing in the school processes to accommodate this. And the measure of poverty for children is “free school meals”, but many do not want to apply as it is stigmatising, and many don’t qualify even on very low incomes… That leads to children and parents being labelled disengaged or problematic.

Q2) Isn’t the whole basis of this work neoliberal though?

A2) I agree. We didn’t set the terms of this work…

Panel Q&A

Q1/comment) RSE and access

A1 – Huw) Other companies the same

Q2) Did the refugees in your work Katja have access to Sim cards and internet?

A2 – Katja) It was a challenge. Most downloaded maps and resources… And actually they preferred Apple to Android as the GPS is more accurate without an internet connection – that makes a big difference in the Aegean sea for instance. So refugees shared sim cards, used power banks for the energy.

Q3) I had a sort of reflection on Nils’ paper and where to take this next… It occurs to me that you have quite a few different arguments… You have this survey data, the interviews, and then a different sort of participation from the Facebook groups… I have students in Berlin looking at the boom and bust – and I wondered about that Facebook group work being worth connecting up to that type of work – it seems quite separate to the youth participation section.

A3 – Nils) I wasn’t planning on talking about that, but yes.

Comment) I think there is a really interesting aspect of these campaigns and how they become part of social media and the everyday life online… The way they are becoming engaged… And the latent participation there…

Q3) I can totally see that, though challenging to cover in one article.

Q4) I think it might be interesting to talk to the people who created the surveys to understand motivations…

A4) Absolutely, that is one of the reasons I am so keen to hear about other surveys.

Q5) You said you were struggling to find qualitative data?

A5 – Katja) You can usually download quantitative instruments, but that is harder for qualitative instruments including questions and interview guides…

XP-02: Carnival of Privacy and Security Delights – Jason Edward Archer, Nathanael Edward Bassett, Peter Snyder, University of Illinois at Chicago, United States of America

Note: I’m not quite sure how to write up this session… So these are some notes from the more presentation parts of the session and I’ll add further thoughts and notes later… 

Nathanael: We have prepared three interventions for you today, and this is going to be a kind of gallery-exploring space. And we are experimenting with wearables…

Fitbits on a Hamster Wheel and Other Oddities, oh my!

Nathanael: I have been wearing a FitBit this week… but these aren’t new ideas… People used to have beads for counting, and there are self-training books for wrestling published in the 16th century. Pedometers were conceived of in Leonardo da Vinci’s drawings… These devices are old, and tie into ideas of posture, and mastering control of our physical selves… And we see the pedometer being connected with regimes of fitness – like the Manpo-kei (the “10,000 steps meter”, 1965). This narrative takes us to the 1970s running boom and the idea of recreational discipline. And now the world of smart devices… Wearables are taking us to biometric analysis as a mental model (Neff – preprint).

So, these are ways to track, but what happens with insurance companies, with those monitoring you? At Oral Roberts University students have to track their fitness as part of their role as students. What does that mean? I encourage you all to check out “Unfit Bits” – interventions to undermine tracking. Or we could, rather than going to the gym with a FitBit, give it to Terry Crews – he’s going anyway! – and he could earn money… Are fitness slaves in our future?

So, use my FitBit – it’s on my account

And so, that’s the first part of our session…

?: Now, you might like to hear about the challenges of running this session… We had to think about how to make things uncomfortable… But then how do you get people to take part? We considered a man-in-the-middle site that was ethically far too problematic! And no-one was comfortable participating in that way… Certainly that raises the privacy and security issue… But as we talk of data as a proxy for us… As internet researchers a lot of us are more aware of privacy and security issues than the general population, particularly around metadata. But this would have been just one day… I was curious whether people might have faked their data for that one-day capture…

Nathanael: And the other issue is why we are so much more comfortable sharing information with FitBit and other sharing platforms – faceless entities – versus people you meet at a conference… And we didn’t think about a gender aspect here… We are three white guys, and we are less sensitive to that data being publicised rather than privatised. Men talk about how much they can bench press… but personal metadata can make you feel under scrutiny.

Me: I wouldn’t want to share my data and personal data collection tools…

Borrowing laptop vs borrowing phone…

?: In the US there have been a few cases where FitBit data has been submitted as evidence in court… But that data is easier to fake… In one case a woman claimed to have been raped, and her FitBit data was used to challenge her account.

Nathanael: You talked about not being comfortable handing someone your phone… It is really this black box… Is it a wearable? It has all that stuff, but you wear it on your body…

??: On cellphones there is FOMO – Fear Of Missing Out… What you might miss…

Me: Device as security

Comment: Ableism embedded in devices… I am a cancer survivor and I first used step counts as part of a research project on chemotherapy and activity… When I see a low step day on my phone now… I can feel this stress of those triggers on someone going through that stress…

Nathanael: FitBits vibrate when you have/have not done a certain number of steps… Trying to put you in an ideological state apparatus…

Jh: That nudge… That can be good for the able-bodied… But if you can’t move, that is a very different experience… How does that add to their stress load?

Interperspectival Goggles

Again looking at the condition of virtuality – Hayles 2006(?)

Vision is constructed… Thinking of higher resolution… From small phone to big phone… Lower-resolution to higher-resolution TV… We have spectacles, quizzing glasses and monocles… And there is the strange idea of training ourselves to see better (William Horatio Bates, 1920s)… And emotional state interfering with how you do something… Then we have optometry and x-rays as a concept of seeing what could not be seen before… And you have special goggles and helmets… Like the idea of the Image Accumulator in Videodrome (1983), or the Memory recorder and playback device in Brainstorm (1983). We see embodied work stations – the da Vinci Surgical Robot (2000) – divorcing what is seen from what is in front of the operator…

There are also playful ideas: binocular football; the Decelerator Helmet; Meta-perceptional Helmet (Cleary and Donnelly 2014); and most recently Google Glass – what is there and also extra layers… Finally we have Oculus Rift and VR devices – seeing something else entirely… We can divorce what we see from what we are perceiving… We want to swap people’s vision…

1. Raise awareness about the complexity of electronic privacy and security issues.

2. Identify potential gaps in the research agenda through playful interventions, subversions, and moments of the absurd.

3. Be weird, have fun!

Mathias

“Cell phones are tracking devices that make phone calls” (Appelbaum, 2012)

I am interested in IMSI catchers, which masquerade as wireless base stations, prompting phones to communicate with them. They are used by police and other law enforcement. They can be small and handheld, or they can be drone-mounted. And they can track people, including people in crowds – if you know someone is there, you can scan for them specifically. So, these tools are simple, disruptive and problematic, especially in activism contexts.

But these tools are also capable of capturing transmitted content, and all the data in your phone. These devices are problematic and have raised all sorts of issues about their use – who uses them and how. I’d like to think of this a different way… Is there a right to protest? And to protest anonymously? We do have anti-masking laws in some places – that suggests no right to anonymous protest. But that’s still a different privacy right – covering my face is different from participating at all…

Protests are generally about a minority persuading a majority about some sort of change. There is no legal right to protest anonymously, but there are lots of protected anonymous spaces. So, in the 19th century there was a big debate on whether or not the voting ballot should be anonymous – democracy is really the C19th killer app. There is a lovely quote about “The Australian system” by Bernheim (1889) and the introduction of anonymous voting. It wasn’t brought in to preserve privacy. At the time politicians bought votes – buying a keg of beer or whatever – and anonymity was there to stop that, not to preserve individual privacy. But Jill Lepore (2008) writes about how our forebears considered casting a “secret ballot” to be “cowardly, underhanded and despicable”.

So, back to these devices… There can be an idea that “if you have nothing to fear, you have nothing to hide”, but many of us understand that it is not true. And this type of device silences uncomfortable discourse.

Mathias Klang, University of Massachusetts Boston

Q1) How do you think these devices fit into the move to allow law enforcement to block/“switch off” the cameras on protestors’/individuals’ phones?

A1) Well, people can resist these surveillance efforts, and you will see subversive moves. People can cover cameras, conceal devices, etc. But with these devices it may be that the phone becomes unusable, requiring protestors to disable their phones or leave them at home… And phones are really popular and well used for coordinating protests.

Bryce Newell, Tilburg Institute for Law, Technology, and Society

I have been working on research in Washington State, working with law enforcement on license plate recognition systems and public disclosure law, and looking at what you can tell from the data. So, here is a map of license plate data from Seattle, showing vehicle activity. In Minneapolis, similar data being released led to mapping of the governor’s registered vehicles…

The second area is about law enforcement and body cameras. Several years ago peaceful protestors at UC Davis were pepper-sprayed. Even in the cropped version of that image you can see a vast number of phones out, recording the event. And indeed there are a range of police surveillance apps that allow you to capture police encounters without that being visible on the phone, including ACLU Police Tape; Stop and Frisk Watch; OpenWatch; and CopRecorder2. Some of these apps upload the recording to the cloud right away to ensure capture. And there have certainly been a number of incidents from Rodney King to Oscar Grant (BART), Eric Garner, Ian Tomlinson, Michael Brown. Of these, only the Michael Brown case featured law enforcement with bodycams. There has been a huge call for more cameras on law enforcement… During a training meeting some officers told me “Where’s the direct-to-YouTube button?” and “If citizens can do it, why can’t we also benefit from the ability to record in public places?”. There is a real awareness of control and of citizen videos. I also heard a lot of there being “a witch hunt about to begin…”.

So, I’m in the middle of focused coding on police attitudes to body cameras. Police are concerned that citizen video is edited, out of context, distorting. And they are concerned that it doesn’t show wider contexts – when recording starts, perspective, the wider scene, the fact that provocation usually occurs before filming. But there is also the issue of control, and immediate physical interaction, framing, disclosure, visibility – around their own safety, around how visible they are on the web. They don’t know why it is being recorded, where it will go…

There have been a number of regulatory responses to this challenge: (1) restrict collection – not many, usually budgetary and rarely on privacy; (2) restrict access – going back to the Minneapolis case, within two weeks of the map of governor’s vehicles being published in the paper, they had an exemption to public disclosure law, which is now permanent for this sort of data. In the North Carolina protests recently the call was “release the tapes” – and they released only some – then the cry was “release all the tapes”… But on 1st October the law changed to again restrict access to this type of data.

But different states provide different access. In Oakland, California, data was released on how many license plates had been scanned. In Seattle, because the data covers many scans of one licence plate over 90 days, it is quite specific – you can almost identify the householder. But granularity varies.

Now, we do see body camera footage of sobriety tests, foot chases, and a half-hour-long interview with a prostitute that discloses a lot of data. Washington shares a lot of video to YouTube. We see police in Rotterdam, Netherlands, doing this too.

But one patrol officer told me that he would never give his information to an officer with a camera. Another noted that police choose when to start recording, with little guidance on when and how to do this.

And we see a “collateral visibility” issue for police around these technologies.

Q&A

Q1) Is there any process where police have to disclose that they are filming with a body cam?

A1) Interesting question… Initially they didn’t know. We used to have a two-party consent process – as for tapings – to ensure consent/implied consent. But the State Attorney General described this as outside of that privacy regulation, saying that a conversation with a police officer is a public conversation. But police are starting to have policies that officers should disclose that they have cameras – partly as they hope it may reduce violence towards police.

Data Privacy in commercial users of municipal location data – Meg Young, University of Washington

My work looks at how companies use Seattle’s location data. I wanted to look at how data privacy is enacted by Seattle municipal government. And I am drawing on the work of Annemarie Mol and John Law (2004), ethnographers working on health, which focuses on lived experience. My data is ethnographic, as well as focus groups and interviews with municipal government and local civic technology communities. I really wanted to present the role of commercial actors in data privacy in city government.

We know that cities collect location data to provide services, and share it with third parties to do so. In Washington we have a state freedom of information (FOI) law, which states “The people of this state do not yield their sovereignty to the government…”, making data requestable.

In Seattle the traffic data is collected by a company called Acyclica. The city is growing and the infrastructure is struggling, so they are gathering data to deal with this, to shape traffic signals. This is a large-scale longitudinal data collection process. Acyclica do this with wi-fi sensors that sniff MAC addresses; the location traces are sent to Acyclica (with MACs salted). The data is aggregated and sent to the city – the city doesn’t see the detailed, creepy tracking, but the company does. And this is where the FOI law comes in. The raw data sits on the company side. If the raw data were a public record, it would be requestable. The company becomes a shield for collecting sensitive data – the data is proprietised.
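To make the salted-MAC idea concrete, here is a minimal sketch of the general technique described above. The salt value and the choice of SHA-256 are my illustrative assumptions, not Acyclica’s actual implementation – the point is only that the raw MAC never appears in the shared trace, yet the same device remains linkable across sensors, which is why the detailed traces stay sensitive.

```python
import hashlib

# Assumed per-deployment secret; a real system would manage this carefully.
SALT = b"deployment-specific-secret"

def pseudonymise_mac(mac: str) -> str:
    """Return a salted SHA-256 digest of a MAC address."""
    return hashlib.sha256(SALT + mac.lower().encode()).hexdigest()

# The same device yields the same pseudonym at every sensor...
a = pseudonymise_mac("AA:BB:CC:DD:EE:FF")
b = pseudonymise_mac("aa:bb:cc:dd:ee:ff")
assert a == b
# ...so movement between sensors is linkable (useful for travel times),
# even though the MAC address itself is hidden from the city.
```

Note the trade-off this sketch illustrates: whoever holds the salt can re-identify devices, so the privacy protection depends entirely on who controls that secret.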

So you can collect data and have service needs met, but without it becoming public to you and me. But analysing the contract, the terms do not preclude the resale of data – though a Seattle Dept. of Transport (DOT) worker notes that right now people trust companies more than government. Now, I did ask about this data collection – not approved elsewhere – and was told that having wifi switched on in public makes you open to data collection – as you are in public space.

My next example is the data from parking meters/pay stations. This shows only the start and end times – no credit card numbers, etc. The DOT is happy to make this available via public records requests. But you can track individuals, and they are using this data to model parking needs.

The third example is the Open Data Portal for Seattle. They pay Socrata to host that public-facing data portal. Socrata also sell access to cleaned, aggregated data to companies through a separate API called the Open Data Network. The Seattle Open Data Manager didn’t see this situation as different from any other reseller. But there is little thought about third-party data users – they rarely come up in conversations – who may combine this data with other data sets for analysis.

So, in summary, municipal government data is no less by and for commercial actors than it is for the public. Proprietary protections around data are a strategy for protecting sensitive data. Government transfers data to third parties.

Q&A

Q1) Seattle has a wifi for all programme

A1) Promisingly, this data isn’t being held side by side… But the routers that we connect to collect so much data… Imagine an Oracle database of the websites folks visit…

Q2) What are you policy recommendations based on your work?

A2) We would recommend licensing data with some restrictions on use, so that if the data is used inappropriately their use could be cut off…

Q2) So activists could be blocked by that recommendation?

A2) That is a tension… Activists are keen for no licensing here for that reason… It is challenging, particularly when data brokers can do problematic profiling…

Q2) But that restricts activists from questioning the state as well.

Response – Sandra Braman

I think these presentations highlight many of the issues that raise questions about values we hold as key as humans. And I want to start from an aggressive position, thinking about how and why you might effectively be an activist in this sort of environment. And I want to say that any concerns about algorithmically-driven processes should be evaluated in the same way as we would social processes. So, for instance, we need to think about how the press and media interrogate data and politicians.

? “Decoding the Social” (coming soon) is looking at social data and analysis of social data in the context of big data. She argues that social life is too big and complex to be predictable from data. Everything that people who use big data “do” to understand patterns are things that activists can do too. We can be just as sophisticated as corporations.

The two things I am thinking about are how to mask the local, and how to use the local… When I talk of masking the local I look back to work I did several years back on local broadcasting. There is a mammoth literature on TV as locale, and on production and how that is separate, misrepresenting – the assumptions versus the actual information provided versus actual decision making. My perception of social activism is that there is some brilliant activity taking place – brilliance at moments, specific apps often. And I think that if you look at the essays Julian Assange wrote before he founded WikiLeaks, particularly on weak links and how those work… He uses sophisticated social theory in a political manner.

But anonymity is practically impossible… What can we learn from local broadcast? You can use phones in organised ways – there was training for phone cameras for the Battle of Seattle, for instance. You can fight with indistinguishable actions – all doing the same things. Encryption is cat and mouse… Often we have activists presenting themselves as mice, although we did see an app discussed at the plenary to alert you to protest and risk. And I have written before on tactical memory.

In terms of using the local… If you know you will be sensed all the time, there are things you can do as an activist to use that. It is useful to think about how we can conceive of ourselves as activists as part of the network. And I was inspired by US libel laws – if a journalist has transmission/recording devices but is a neutral observer, they are not “repeating” the libel and can share that footage. That goes back to 1970s law, but it can be useful to us.

We are at risk of being censored, but that means you have choices about what to share, being deliberate in giving signals. We have witnessing, which can be taken as a serious commitment. That can happen with people with phones; you can train witnessing. There are many moments where leakage can be an opportunity – maybe not with the volume or content of Snowden, but we can do that. There are also ways to learn and shape learning. But we can also be routers, and be critically engaged in that – what we share, the acceptable error rate. National security agencies are concerned about where in the stream they should target misinformation – activists can adopt that too. The server functions – see my strategic memory piece. We certainly have community-based wifi and MESH networks, and those are useful politically and socially. We have responsibilities to build the public that is appropriate, and the networking infrastructure that enables those freedoms. We can use more computational power to resolve issues. Information can be an enabler as well as influencing your own activism. Thank you to Anne and her group in Amsterdam for triggering thinking here, but we should be engaging critically with big data. If you can’t make decisions in some way, there’s no point to doing it.

I think there needs to be more robustness in managing and working with data. If you go far then you need a very high level of methodological trust. Information has to stand up in court, to respect activist contributions to data. Use as your standard what would be acceptable in court. And in a Panspectron (not Panopticon) environment, when data is collected all the time, you absolutely have to ask the right questions.

Panel Q&A

Q1) I was really interested in that idea of witnessing as being part of being a modern digital citizen… Is there more you can say on protections, or on that?

A1 – Sandra) We’ve seen all protections for whistleblowing in government disappear under Bush (II)… We still have protections for private sector whistleblowers. But there would be an interesting research project in there…

Q2) I wondered about that idea of cat and mouse use of technology… Isn’t that potentially making access a matter of securitisation…?

A2) I don’t think that “securitisation” makes you a military force… One thing I forgot to say was about network relations… If a system is interacting with another system – the principle of requisite variety – it has to be as complex as the system it is dealing with. You have to be at least as sophisticated as the other guy…

Q3) For Bryce and Meg, there are so many tensions over when data should be public and when it should be private… And police desires to show the good things they do. Also Meg, this idea of privatising data to ensure privacy of data – it’s problematic for us to collect data, but now a third party can do that.

A3 – Bryce) One thing I didn’t explain well enough is that video online comes from police, and from activists – it depends on the video. Some videos are accessed via public records requests and published to YouTube channels – in fact in Washington you can make requests for free and you can do it anonymously. The police department also posts public video. When they did a pilot in 2014 they held a hackathon to consider how to deal with redaction issues… detect faces, blur them, etc. And proactive posting of – only some – video. There is a narrative of sharing everything, but that isn’t the case. The rhetoric has been about being open, about privacy rights, and the new police chief. A lot of it was administrative cost concerns… In the hackathon they asked whether posting in a blurred form would do away with blanket requests, to focus requests. At that time they dealt with all requests by email. They were receiving so many emails, and under state law they had to give up all the data, for free. But state law varies: in Charlotte they gave up less data. In some states there is a different approach, with press conferences and narratives around the footage as they release parts of videos…

A3 – Meg) The city has worked on how to release data… They have a privacy screening process. They try to provide data in a way that is embedded. They still have a hard-core central value that any public record is requestable. Collection limitation is an important and essential part of what cities should be doing… In a way, private companies collecting data results in large data sets that may end up insecure… Going back to what Bryce was saying, the bodycam initiative was really controversial… There was so much footage and it was unclear what should be public and when… And the faultlines have been pretty deep. We have the Coalition for Open Government advocating for full access, and the ACLU worried that these become surveillance cameras… This was really contentious… They passed a version of a compromise, but the bottom line is that the PRA (Public Records Act) is still a core value for the state.

A3 – Bryce) Much of the ACLU, nationally certainly, was supportive of bodycams, but individuals and local ACLU chapters change and vary… They were very pro, then backing off, then local variance… It’s a very different picture, hence that variance.

Q4) For Mathias, you talked about anti-masking laws. Are there cases where people have been brought in for jamming signals under that law?

A4 – Mathias) Right now the American cases are looking for keywords – manufacturers of devices, the ways data is discussed. I haven’t seen cases like that, but perhaps it is too new… I am a Swedish lawyer, and that jamming would be illegal in protest…

A4 – Sandra) Would that be under anti-masking or under jamming law?

A4 – Mathias) It would be under hacking laws…

Q4) If you counter with information… But not if switching phone off…

A4 – Mathias) That’s still allowed right now.

Q5) Do you do work comparing US and UK body cameras?

A5 – Bryce) I don’t, but I have come across the Rotterdam footage. One of my colleagues has looked at this… The impetus for adoption in the Netherlands has been different: in the US it is transparency, in the Netherlands the narrative was protection of public servants. A number of co-authors have just published on the use of cameras and how they may increase assaults on officers… We are seeing some counter-intuitive results… But the why question is interesting.

Comment) Is there any aspect of cameras being used in higher-risk areas that makes that more likely, perhaps?

A5 – Sandra) It’s the YouTube on-air question – everyone imagines themselves on air.

Q6) Two speakers quoted individuals accused of serious sexual assault… And I was wondering how we account for the fact that activists are not homogeneous here… Particularly when tech activists are often white males, they can be problematic…

A6) Techies don’t tend to be the most politically correct people – to generalise a great deal…

A6 – Sandra) I think they are separate issues; if I didn’t engage with people whose behaviour is problematic it would be hard to do any job at all. Those things have to be fought, but as a woman you should also challenge and call those white male activists out on their actions.

Q7 – me) I was wondering about the retention of data. In Europe there is a lot of use of CCTV, and the model there is to record everything and retain any incident. In the US, CCTV is not in widespread use I think, and the bodycam model is to record incidents in progress only… So I was wondering about that choice in practice, and about the retention of those videos and the data after capture.

A7 – Bryce) The ACLU has looked at retention of data. It is a state-based issue. In Washington there are mandatory minimum periods… They are interesting, as due to findings of misconduct they are under requirements to keep everything for as long as possible so auditors from the DOJ can access and audit. In Bellingham and Spokane, officers can flag items, and supervisors can too… And that is what dictates the retention schedule. There are issues there, of course. The default when I was there was 2 years. If it is publicly available and hits YouTube then it will be far more long-lasting, can pop up again… Perpetual memory there… So the actual retention schedule won’t matter.

A7 – Sandra) A small follow up – you may have answered with that metadata… Do they treat bodycam data like other types of police data, or is it a separate class of data?

A7 – Bryce) Generally it is being thought of as data collection… And there is no difference for public disclosure, but they are really worried about public access. And how they share that with prosecutors… They could share on DVD… And they wanted to use the share function of the software… But they didn’t want emails to be publicly disclosable with that link… So it is being thought about like email.

Q8 – Sandra) On behalf of colleagues working on visual evidence in court.

Comment – Michael) There is work on video and how it can be perceived as “truth” without awareness of the potential for manipulation.

A8 – Bryce) One of the interesting things in Bellingham was the release of that video I showed of a suspect running away… The footage followed a police pick-up for suspected drug dealing, and showed evasion of arrest and the whole encounter… And in that case, whether or not he was guilty of the drug charge, that video told a story of the encounter. In preparing for the court case the police shared the video with his defence team, and almost immediately they entered a guilty plea in response… And I think we will see more of that kind of invisible use of footage that never goes to court.

And with that this session ends… 

PA-31: Caught in a feedback loop? Algorithmic personalization and digital traces (Chair: Katrin Weller)

Wiebke Loosen1, Marco T Bastos2, Cornelius Puschmann3, Uwe Hasebrink1, Sascha Hölig1, Lisa Merten1, Jan-Hinrik Schmidt1, Katharina E Kinder-Kurlanda4, Katrin Weller4

1Hans Bredow Institute for Media Research; 2University of California, Davis; 3Alexander von Humboldt Institute for Internet and Society; 4GESIS Leibniz Institute for the Social Sciences

?? – Marco T Bastos, University of California, Davis and Cornelius Puschmann, Alexander von Humboldt Institute for Internet and Society

Marco: This is a long-running project that Cornelius and I have been working on. At the time we started, in 2012, it wasn’t clear what impact social media might have on the filtering of news, but they are now huge mediators of news and news content in Western countries.

Since then there has been some challenge and conflict between journalists, news editors and audiences, and that raises the issue of how to monitor and understand that through digital trace data. We want to think about which topics are emphasized by news editors, and which are most shared on social media, etc.

So we will talk about taking two weeks of content from the NYT and The Guardian across a range of social media sites – that’s work I’ve been doing. And Cornelius has tracked 1.5/4 years’ worth of content from four German newspapers (Süddeutsche Zeitung, Die Zeit, FAZ, Die Welt).

With the Guardian we accessed data from the API, which tells you which articles were published in print and which were not – that is baseline data for the emphasis editors place on different types of content.

So, I’ll talk about my data from the NY Times and the Guardian, from 2013, though we now have 2014 and 2015 data too. This data from two weeks covers 16k+ articles. The Guardian runs around 800 articles per day, the NYT around 1000. And we could track the items on Twitter, Facebook, Google+, Delicious, Pinterest and StumbleUpon. We do that by grabbing the unique identifier for the news article, then using the social media endpoints of the platforms to find sharing. But we had a challenge with Twitter – in 2014 they killed the endpoint we and others had been using to track sharing of URLs. The other sites are active, but relatively irrelevant in the sharing of news items! And there are considerable differences across the ecosystems; some of these social networks are not immediately identifiable as social networks – will Delicious or Pinterest impact popularity?
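The per-platform lookup described above can be sketched roughly as below. The endpoint URLs and response handling here are assumptions for illustration (the real count endpoints varied per platform, and the Twitter one was retired in 2014, as noted); the fetch function is injected so the counting logic can be shown without live HTTP calls – this is not the authors’ actual code.

```python
from typing import Callable, Dict

# Illustrative (hypothetical) count-endpoint templates, keyed by platform.
ENDPOINTS = {
    "facebook": "https://graph.example.com/?id={url}",
    "pinterest": "https://api.example.com/pins/count/?url={url}",
}

def share_counts(article_url: str,
                 fetch: Callable[[str], int]) -> Dict[str, int]:
    """Query each platform's count endpoint for one article URL."""
    return {platform: fetch(template.format(url=article_url))
            for platform, template in ENDPOINTS.items()}

# Example with a stub fetcher standing in for the HTTP layer:
fake = {"https://graph.example.com/?id=https://nyti.ms/x": 120,
        "https://api.example.com/pins/count/?url=https://nyti.ms/x": 3}
counts = share_counts("https://nyti.ms/x", fake.__getitem__)
assert counts == {"facebook": 120, "pinterest": 3}
```

Injecting the fetcher also mirrors the fragility the speakers mention: when a platform kills an endpoint, only the template and fetcher change, not the analysis built on the counts.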

This data allows us to contrast the differences in topics identified by news editors and social media users.

So, looking at the NYT there is a lot of world news, local news, opinion. Looking at the range of articles, Twitter maps relatively well (higher sharing of national news, opinion and technology news), but Facebook is really different – there is huge sharing of opinion, as people share what aligns with their interests etc. We see outliers in every section – some articles skew the data here.

If we look at everything that appeared in print, we can look at a horrible diagram that shows all shares… When you look here you see how big Pinterest is, but only in fashion and lifestyle areas. The sharing there doesn’t really reflect the ratio of articles published, though. Google+ has sharing in science and technology in the Guardian; in environment, jobs, local news, opinion and technology in the NYT.

Interestingly, news and sports, which are real staples of newspapers, barely feature here. Economics fares even worse. Now these articles are English-language but they are available globally… But what about differences in Germany? Over to Cornelius…

Cornelius: So Marco’s work is ahead of mine – he’s already published some of this work. But I have been applying his approach to German newspapers. I’ve been looking at usage metrics, at the relationship between audiences and publishers, and at how that relationship changes over time.

So, I’ve looked at Facebook engagement with articles in four German newspapers. I have compared comments, likes and shares and how contribution varies… Opinion is important for newspapers but not necessarily where the action is. And I don’t think people simply engage less with some areas – in economics they like and comment, but they don’t share. So it is interesting to think about the social perception of shareability.

So, a graph of Die Zeit here shows articles published and articles shared on Facebook… You see a real change in 2014 to greater numbers (in both). I have also looked at types of articles and print vs. web versions.

So, some observations: niche social networks (e.g. Pinterest) are more relevant to news sharing than expected. Reliance on Facebook at Die Zeit grew suddenly in 2014. Social norms of liking, sharing and discussing differ significantly across news desks. Some sections (e.g. sports) see a mismatch between importance and use versus liking and sharing.

In the future we want to look at temporal shifts in social media feedback and newspaper coverage. Monitoring…

Q&A

Q1) Have you accounted for the possibility of bots sharing content?

A1 – Marco) No, we haven’t. We are looking across the board but we cannot account for that with the data we have.

Q2) How did you define or find out that an article was shared, from the URLs?

A2) Tricky… We wrote a script for parsing shortened URLs to check that.
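A sketch of what such a script might do, assuming Python: expand a shortened URL (bit.ly, t.co, …) by following redirects, then normalise it by stripping campaign parameters so shares of the same article under different tracking links count together. The `utm_*` convention is an assumption; the speakers’ actual script is documented elsewhere:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse
from urllib.request import urlopen

def resolve(short_url, timeout=10):
    # Expand a shortened URL by following HTTP redirects; urlopen
    # does this automatically and records the final address.
    # Network call - a sketch, not production code.
    with urlopen(short_url, timeout=timeout) as resp:
        return resp.geturl()

def canonicalize(url):
    # Strip utm_* campaign parameters and fragments so variant links
    # to the same article collapse to one canonical URL.
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if not k.startswith("utm_")]
    return urlunparse(parts._replace(query=urlencode(kept), fragment=""))
```

`canonicalize(resolve(t_co_link))` would then map a tweeted short link back to the article identifier used for counting.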

A2 – Cornelius) Read Marco’s excellent documentation.

Q3) What do you make of how readers are engaging, what they like more, what they share more… and what influences that?

A3 – Cornelius) I think it is hard to judge. There are some indications, and we have some idea of functions marketed by the platforms being used in different ways… But I wouldn’t want to speculate.

Twitter Friend Repertoires: Inferring sources of information management from digital traces – Jan-Hinrik Schmidt, Lisa Merten, Wiebke Loosen, Uwe Hasebrink, Katrin Weller

Our starting point was to think about shifting the focus of Twitter research. Many studies treat Twitter – explicitly or implicitly – as a broadcast medium, but we want to conceive of it as an information tool, via the concept of “Twitter friend repertoires” – using “friend” in the Twitter sense: someone I follow. We are looking for patterns in the composition of friend sets.

So we take a user, take their friends list, and compare it to a list of accounts identified previously. Our index has 7,528 Twitter accounts: media outlets (20.8%); organisations such as political parties, companies and civil society organisations (53.4%); and individuals such as politicians, celebrities and journalists (25.8%) – all in Germany. We take our sample, compare it with a relational table, and then with our master index. And if an account isn’t found in the master index, we can’t say anything about it yet.
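A minimal sketch of that matching step, with a hypothetical three-entry master index standing in for the real 7,528-account one:

```python
from collections import Counter

# A miniature, hypothetical master index: screen name -> category.
MASTER_INDEX = {
    "tagesschau": "media",
    "spdde": "organisation",
    "regsprecher": "individual",
}

def friend_repertoire(friends, index=MASTER_INDEX):
    # Match each friend (followee) against the master index; anyone
    # not in the index falls into 'unidentified' - exactly the
    # limitation the talk acknowledges.
    counts = Counter(index.get(f, "unidentified") for f in friends)
    total = len(friends)
    return {cat: n / total for cat, n in counts.items()}
```

Run over a whole sample of users, these per-user category shares give the group-level percentages discussed below.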

To demonstrate the answers we can find with this approach… We have looked at five different samples:

  • Audience_TS – sample following PSB TV News
  • Audience_SZ – sample following quality daily newspapers
  • MdB – members of federal parliament
  • BPK – political journalists registered for the Bundespressekonferenz
  • Random – random sample of German Twitter users (via Axel Bruns)

We can look at the friends here, and we can categorise the accounts. In our random sample 77.8% are not identifiable and 22.2% are in our index (around 13% are individual accounts). That is lower than the percentage of friends in our index for all the other samples – for MdB and BPK a high percentage of their friends are in our index. Across the groups there is less following of organisational accounts (in our index) – with the exception of MdB and political parties. Looking at media accounts, the two audience samples follow more media accounts than the others, including MdB and BPK… When it comes to individual public figures in our index, celebrities are prominent for audiences, much less so for MdB and BPK; MdB follow other politicians, and journalists follow other journalists. Journalists also follow politicians, and politicians – to a lesser extent – follow journalists.

In terms of patterns of preference we can use a model of a fictional user to understand preference between our three categories (organisational accounts, media accounts, individual accounts). And we can compare that example profile with our own data, to see how others’ behaviours fit the typology. So, in our random sample over a third (37.9%) didn’t follow any organisational accounts. Amongst MdB and BPK there is a real preference for individual accounts.
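A simplified sketch of that preference comparison, assuming the fictional baseline user follows the three categories in proportion to the index shares given earlier (20.8% media, 53.4% organisations, 25.8% individuals). This is an illustration, not the authors’ exact model:

```python
def preferences(shares):
    # 'shares' maps category -> share of a user's identified friends.
    # A category counts as 'preferred' when the user's share exceeds
    # that of a fictional baseline user who follows the categories in
    # proportion to their sizes in the master index.
    baseline = {"media": 0.208, "organisation": 0.534, "individual": 0.258}
    return {cat: shares.get(cat, 0.0) > base for cat, base in baseline.items()}
```

A BPK journalist with half their identified friends being individuals would, under this sketch, show a preference for individual accounts over organisational ones.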

So, this is what we are measuring right now… I am still not quite happy yet. It is complex to explain, but hard to also show the detail behind that… We have 20 categories in our master index but only three are shown here… Some frequently asked questions that I will ask and answer based on previous talks…

  1. Around 40% identified accounts is not very much, is it?
    Yes and no! We have increased this over time. But initially we did not include international accounts; if we did we’d increase the share, especially with celebrities and international media outlets. However, there is always a trade-off, and there will always be a long tail… And we are interested in specific categorisations and in public speakers as sources on Twitter.
  2. What does friending mean on Twitter anyway?
    Good question! More qualitative research is needed to understand that – but there is some work on journalists (only). Maybe people friend people for information management reasons, reciprocity norms, public signal of connection, etc. And also how important are algorithmic recommendations in building your set of friends?

Q&A

Q1 – me) I’m glad you raised the issue of recommendation algorithms – the celebrity issue you identified is something Twitter really pushes as a platform now. I was wondering though if you have been looking at how long the people you are studying have been on Twitter – as behavioural norms…

A1) It would be possible to collect that, but we don’t currently. For journalists and politicians we do gather lists of friends each month to get a longitudinal idea of changes. Over a year, there haven’t been many changes yet…

Q2) Really interesting talk – could you go further with the repertoire? Could there be a discrepancy between the repertoire and its use in terms of retweeting, replying etc.?

A2) We haven’t so far… We could see which types of tweets accounts are favouriting or retweeting – but we are not there yet.

Q3) A problem here…

A3) I am not completely happy to establish preference based on indexes… But not sure how else to do this, so maybe you can help me with it. 

Analysing digital traces: The epistemological dimension of algorithms and (big) internet data – Katharina Kinder-Kurlanda and Katrin Weller

Katharina: We are interested in the epistemological aspects of algorithms, and in how we research these. So, our research subjects are researchers themselves.

So we are seeing a real focus on algorithms in internet research, and we need to understand the (hidden) influence of algorithms on all kinds of research, including on researchers themselves. So we have researchers interested in algorithms… And in platforms, users and data… But all of these aspects are totally intertwined.

So let’s take a Twitter profile… A user of Twitter gets recommendations of who to follow at a given moment in time, and they see newsfeeds at a given moment in time. That user has context that, as a researcher, I cannot see; nor can I interpret the impact of that context on the user’s choice of e.g. who they then follow.

So, algorithms observe, count, sort and rank information on the basis of a variety of different data sources – they are highly heterogeneous and transient. Online data can be user-generated content or activity, traces or location data from various internet platforms. That promises new possibilities, but also raises significant challenge, including because of its heterogeneity.

Social media data has uncertain origins: uncertainty about users and their motivations, and often uncertain provenance of the data itself. The “users that we see are not users” but highly structured profiles and the result of careful image management. And we see renewed discussion of methods and epistemology, particularly within the social sciences – suggestions include “messiness” (Knupf 2014), and ? (Kitchin 2012).

So, what does this mean for algorithms? Algorithms operate on an uncertain basis and present real challenges for internet research. I’m now going to talk about work that Katrin and I did in a qualitative study of social media researchers (Kinder-Kurlanda and Weller 2014). We conducted interviews at conferences – highly varied – speaking to those working with data obtained from social media. There were 40 interviews in total, and we focused on research data management.

We found that researchers found very individual ways to address epistemological challenges in order to realise the potential of this data for research. And there were three real concerns here: accessibility, methodology, research ethics.

  1. Data access and quality of research

Here there were challenges of data access, privacy restrictions on social media data, and technical skills; adjusting research questions due to data availability; and the struggle for data access often consuming much effort. Researchers talked about difficulty in finding publication outlets, recognition, and jobs in the disciplinary “mainstream” – it is getting better but it is a big issue. There was also comment on this being a computer-science-dominated field – with highly formalised review processes and few high-ranking conferences – which enforces highly strategic planning of resources and research topics. So researchers’ attempts to achieve validity and good research quality are constrained. This is really challenging for researchers.

2. New Methodologies for “big data”

Methodologies in this research often defy traditional ways of achieving research validity – through ensuring reproducibility or sharing data sets (ethically not possible). There is a need to find patterns in large data sets by analysis of keywords, or by automated analysis. It is hard for others to understand the process and validate it. Data sets cannot be shared…

3. Research ethics

There is a lack of users’ informed consent to studies based on online data (Hutton and Henderson 2015). There is real ethical complexity. Data cannot really be anonymised…

So, how do algorithms influence our research data, and what does this mean for researchers who want to learn something about the users? Algorithms influence what content users interact with. For example: how do you study user networks without knowing the algorithms behind follower/friend suggestions? How do you study populations?

To get back to the question of observing algorithms: the problem is that various actors, in the most diverse situations, react out of different interests to the results of algorithmic calculations, and may even try to influence algorithms. You see that with tactics around trending hashtags as part of protest, for instance. And the results of algorithmic analyses are presented to internet users with little information on how the algorithms took part.

In terms of next steps, researchers need to be aware that online environments are influenced by algorithms – and so are the users and the data they leave behind. It may mean capturing the “look and feel” of the platform as part of research.

Q&A

Q1) One thing I wasn’t sure about… Is your sense when you were interviewing researchers that they were unaware of algorithmic shaping… Or was it about not being sure how to capture that?

A1) “Algorithms” wasn’t the terminology when we started our work… They talked about big data… The framing and terminology is shifting… So we are adding the algorithms now… But we did find varying levels of understanding of platform function – some were very aware of platform dynamics, but some felt that if they have a Twitter dataset, that’s a representation of the real world.

Q1) I would think that if we think about recognising how algorithms and platform function come in as an object… Presumably some working on interfaces were aware but others looking at, e.g. friendship group, took data and weren’t thinking about platform function, but that is something they should be thinking about…

A1) Yes.

Q2) What do you mean by the term “algorithm” now, and how that term is different from previously…

A2) I’m sure there is a messyness of this term. I do believe that looking at programmes, wouldn’t solve that problem. You have the algorithm in itself, gaining attention… From researchers and industry… So you have programmers tweaking algorithms here… as part of different structures and pressures and contexts… But algorithms are part of a lot of peoples’ everyday practice… It makes sense to focus on those.

Q3) You started at the beginning with an illustration of the researcher in the middle, then moved onto the agency of the user… And the changes to the analytical capacities working with this type of data… But how much is the awareness amongst researchers of how the data, the tools they work with, and how they are inscribed into the research…

A3) Thank you for making that distinction here. The problem in a way is that we saw what we might expect – highly varied awareness… This was determined by disciplinary background – whether STS researchers in sociology, or whether a computer scientist, say. We didn’t find too many disciplinary trends, but we looked across many disciplines…. But there were huge ranges of approach and attitude here – our data was too broad.

Panel Q&A

Q1 – Cornelius) I think that we should say that if you are wondering about “feedback” here, it’s about thinking about metrics and how they then feedback into practice, if there is a feedback loop… From very different perspectives… I would like to return to that – maybe next year when research has progressed. More qualitative understanding is needed. But a challenge is that stakeholder groups vary greatly… What if one finding doesn’t hold for other groups…

Q2) I am from the Wikimedia Foundation… I’m someone who does data analysis a lot. I am curious if in looking at these problems you have looked at recommender systems research which has been researching this space for 10 years, work on messy data and cleaning messy data… There are so many tiny differences that can really make a difference. I work on predictive algorithms, but that’s a new bit of turbulence in a turbulent sea… How much of this do you want to bring this space…

A2 – Katrin) These communities have not come together yet. I know people who work in socio-technical studies who do study interface changes… There is another community that is aware that this exists… And is not aware so closely… But see it as tiny bits of the same puzzle… And can be harder to understand for historical data… And getting an idea of what factors influence your data set. In our data sets we have interviewees more like you, and some with people at sessions like this… There is some connection, but not all of those areas coming together…

A2 – Cornelius) I think that there is a clash between computational social science data work, and this stuff here… That predictable aspect screws with big claims about society… Maybe an awareness but not a keenness. In terms of older computer science research that we are not engaging in, but should be… But often there is a conflict of interests sometimes… I saw a presentation that showed changes to the interface, changing behaviour… But companies don’t want to disclose that manipulation…

Comment) We’ve gone through a period – and I am disheartened to see it is still there – where researchers are so excited to trace human activities that they treat hashtags as the political debate… This community helpfully problematises or contextualises this… But I think these papers raise the question of people orientating practices towards the platform, and towards machine learning… I find it hard to talk about that… How behaviour feeds into machine learning… Our system tips to behaviour, and technology shifts and reacts to that, which is hard.

Q3) I wanted to agree with that idea of the need to document. But I want to push at your implicit position that this is messy and difficult and hard to measure… I think that applies to *any* method… Standards of data removal, and messiness, arise elsewhere too… Some of those issues apply across all kinds of research…

A3 – Cornelius) Christian would have had an example on his algorithm audit work that might have been helpful there.

Comment) I wanted to comment on social media research versus traditional social science research… We don’t have much power over our data set – that’s quite different in comparison with those running surveys, undertaking interviews… and I have control of that tool… And I think that argument isn’t just about survey analysis, but other qualitative analysis… Your research design can fit your purposes…

 

Twitter recommendation algorithms, celebrities and noise. Time on Twitter. Overall follower/following counts? Does friend suggestion influence?

Advertisers? And their role in shaping content in news.

Time:
Friday, 07/Oct/2016:

4:00pm – 5:30pm

Session Chair:

Location: HU 1.205
Humboldt University of Berlin Dorotheenstr. 24 Building 1, second floor 80 seats

Presentations

Wiebke Loosen1, Marco T Bastos2, Cornelius Puschmann3, Uwe Hasebrink1, Sascha Hölig1, Lisa Merten1, Jan­-Hinrik Schmidt1, Katharina E Kinder­-Kurlanda4, Katrin Weller4

1Hans Bredow Institute for Media Research; 2University of California, Davis; 3Alexander von Humboldt Institute for Internet and Society; 4GESIS Leibniz Institute for the Social Sciences

Oct 06 2016
 

Today I am again at the Association of Internet Researchers AoIR 2016 Conference in Berlin. Yesterday we had workshops, today the conference kicks off properly. Follow the tweets at: #aoir2016.

As usual this is a liveblog so all comments and corrections are very much welcomed. 

PA-02 Platform Studies: The Rules of Engagement (Chair: Jean Burgess, QUT)

How affordances arise through relations between platforms, their different types of users, and what they do to the technology – Taina Bucher (University of Copenhagen) and Anne Helmond (University of Amsterdam)

Taina: Hearts on Twitter: In 2015 Twitter moved from stars to hearts, changing the affordances of the platform. They stated that they wanted to make the platform more accessible to new users, but that impacted on existing users.

Today we are going to talk about conceptualising affordances. In its original meaning an affordance is conceived of as a relational property (Gibson). For Norman, perceived affordances were more the concern – thinking about how objects can exhibit or constrain particular actions. Affordances are not just visual clues or possibilities; they can be felt. Gaver talks about these technology affordances. There are also social affordances – discussed by many – mainly about how poor technological affordances impact societies; this is mainly about the impact of technology and how it can contain and constrain sociality. And finally we have communicative affordances (Hutchby): how technological affordances impact on communities and their communication practices.

So, what about platform changes? If we think about design affordances, we can see that there are different ways to understand this. The official reason for the design was given as about the audience, affording sociality of community and practices.

Affordances continues to play an important role in media and social media research. They tend to be conceptualised as either high-level or low-level affordances, with ontological and epistemological differences:

  • High: affordance in the relation – actions enabled or constrained
  • Low: affordance in the technical features of the user interface – reference to Gibson but they vary in where and when affordances are seen, and what features are supposed to enable or constrain.

Anne: We want to now turn to a platform-sensitive approach, expanding the notion of the user –> different types of platform users: end-users, developers, researchers and advertisers – there is a real diversity of users, user needs and experiences here (see Gillespie on platforms). So, in the case of Twitter there are many users and many agendas – and multiple interfaces. Platforms are dynamic environments – and that differentiates social media platforms from Gibson’s environments. The computational systems driving media platforms are different: social media platforms adjust interfaces to their users through personalisation, A/B testing, and algorithmic organisation (e.g. Twitter recommending people to follow based on interests and actions).

In order to take a relational view of affordances, and do that justice, we also need to understand what users afford to the platforms – as they contribute, create content, and provide data that enables use, development and income (through advertisers) for the platform. Returning to Twitter… The platform affords different things for different people.

Taking medium-specificity of platforms into account we can revisit earlier conceptions of affordance and critically analyse how they may be employed or translated to platform environments. Platform users are diverse and multiple, and relationships are multidirectional, with users contributing back to the platform. And those different users have different agendas around affordances – and in our Twitter case study, for instance, that includes developers and advertisers, users who are interested in affordances to measure user engagement.

How the social media APIs that scholars so often use for research are—for commercial reasons—skewed positively toward ‘connection’ and thus make it difficult to understand practices of ‘disconnection’ – Nicolas John (Hebrew University of Jerusalem) and Asaf Nissenbaum (Hebrew University of Jerusalem)

Consider this… On Facebook, if you add someone as a friend they are notified; if you unfriend them, they are not. If you post something you see it in your feed; if you delete it, the deletion is not broadcast. They have a page called World of Friends – they don’t have one called World of Enemies. And Facebook does not take kindly to app creators who seek to surface unfriending and removal of content. Facebook is, like other social media platforms, therefore significantly biased towards positive friending and sharing actions. And that has implications for norms and for our research in these spaces.

One of our key questions here is what we can’t know about…

Agnotology is defined as the study of ignorance. Robert Proctor talks about this in three terms: native state – childhood, for instance; strategic ploy – e.g. the tobacco industry on health for years; and lost realm – the knowledge that we cease to hold, that we lose.

I won’t go into detail on critiques of APIs for social science research, but as an overview the main critiques are:

  1. APIs are restrictive – they can cost money, we are limited to a percentage of the whole – Burgess and Bruns 2015; Bucher 2013; Bruns 2013; Driscoll and Walker
  2. APIs are opaque
  3. APIs can change with little notice (and do)
  4. Omitted data – Baym 2013 – now our point is that these platforms collect this data but do not share it.
  5. Bias to present – boyd and Crawford 2012

Asaf: Our methodology was to look at some of the most popular social media spaces and their APIs. We were looking at connectivity in these spaces – liking, sharing, etc. And we also looked for the opposite traits – unliking, deletion, etc. We found that social media had very little data, if any, on “negative” traits – and we’ll look at this across three areas: other people and their content; me and my content; commercial users and their crowds.

Other people and their content – APIs tend to supply basic connectivity – friends/following, grouping, likes. Almost no historical content – except Facebook which shares when a user has liked a page. Current state only – disconnections are not accounted for. There is a reason to not know this data – privacy concerns perhaps – but that doesn’t explain my not being able to find this sort of information about my own profile.

Me and my content – negative traits and actions are hidden even from ourselves. Success is measured – likes and sharing, of you or by you. Decline is not – disconnections are lost connections… except on Twitter, where you can see analytics of followers – but no names there, and not in the API. So we are losing who we once were but are not anymore. Social network sites do not see fit to share information over time… Lacking disconnection data is an ideological and commercial issue.

Commercial users and their crowds – these users can see much more of their histories, and of negative actions online. They have a different regime of access in many cases, with the ups and downs revealed – though you may need to pay for access. Negative feedback receives special attention. Facebook offers the most detailed information on usage – including blocking and unliking information. Customers know more than users, and Pages more than Groups.

Nicolas: So, implications. What Asaf has shared shows the risk for API-based research, where researchers’ work may be shaped by the affordances of the API being used – blocking any attempt to capture negative actions such as unlikes or choices to leave or unfriend. If we can’t use APIs to measure social media phenomena, we have to use other means. So, unfriending is understood through surveys – time-consuming and problematic. And that can put you off exploring these spaces – it limits research. The advertiser-friendly user experience distorts the space – it’s like the stock market only reporting the rises, except for a few super-wealthy users who get the full picture.

A biography of Twitter (a story told through the intertwined stories of its key features and the social norms that give them meaning, drawing on archival material and oral history interviews with users) – Jean Burgess (Queensland University of Technology) and Nancy Baym (Microsoft Research)

I want to start by talking about what I mean by platforms, and what I mean by biographies. Here platforms are these social media platforms that afford particular possibilities; they enable and shape society – we heard about the platformisation of society last night – but their governance and affordances are shaped by their own economic existence. They are shaping and mediating socio-cultural experience, and we need to better understand the values and socio-cultural concerns of the platforms. By platform studies we mean treating social media platforms as spaces to study in their own right: as institutions, as mediating forces in the environment.

So, why “biography” here? First, we argue that whilst biographical forms tend to be reserved for individuals (occasionally companies and racehorses), they are about putting the subject in the context of relationships and a place in time – and that context shapes the subject. Biographies are always partial though – based on unreliable interviews and information, they quickly go out of date; and just as we cannot get inside the heads of the subjects of biographies, we cannot get inside many of the companies at the heart of social media platforms. But (after Richard Rogers) understanding changes helps us to understand the platform.

So, in our forthcoming book, Twitter: A Biography (NYU 2017), we will look at competing and converging desires around e.g. the @, RT, and #. Twitter’s key features are key characters in its biography. Each has been a rich site of competing cultures and norms. We drew extensively on the Internet Archive, bloggers, and interviews with a range of users of the platform.

Nancy: When we interviewed people we downloaded their archive with them and talked through their behaviour and how it had changed – and many of those features and changes emerged from that. What came out strongly is that no one knows what Twitter is for – not just amongst users but also amongst the creators – you see that today with Jack Dorsey and Anne Richards. The heart of this issue is whether Twitter is about sociality and fun, or a very important site for sharing important news and events. Users try to negotiate why they need this space, what it is for… They start squabbling, saying “Twitter, you are doing it wrong!”… Changes come with backlash and response, and changed decisions from Twitter… But that is also accompanied by the media coverage of Twitter, and by the third-party platforms built on Twitter.

So the “@” is at the heart of Twitter for sociality and Twitter for information distribution. It was imported from other spaces – IRC most obviously – as were other features. One of the earliest things Twitter incorporated was the @ and the links back… Originally you could see everyone’s @ replies, and that led to feed clutter – although some liked seeing unexpected messages like this. So Twitter made a change so you could choose. And then they changed again so that you automatically did not see replies from those you don’t follow. So people worked around that with “.@” – which created conflict between the needs of the users, the ways they make it usable, and the way the platform wants to make the space less confusing to new users.

The “RT” gave credit to people for their words, and preserved integrity of words. At first this wasn’t there and so you had huge variance – the RT, the manually spelled out retweet, the hat tip (HT). Technical changes were made, then you saw the number of retweets emerging as a measure of success and changing cultures and practices.

The “#” is hugely disputed – it emerged through hashtag.org: you couldn’t follow them in Twitter at first, but Twitter incorporated it to fend off third-party tools. Hashtags are beloved by techies, and hated by user-experience designers. And they are useful, but also easily co-opted by trolls – as we’ve seen on our own hashtag.

Insights into the actual uses to which audience data analytics are put by content creators in the new screen ecology (and the limitations of these analytics) – Stuart Cunningham (QUT) and David Craig (USC Annenberg School for Communication and Journalism)

The algorithmic culture is well understood as a part of our culture. There are around 150 items on Tarleton Gillespie and Nick Seaver’s recent reading list and the literature is growing rapidly. We want to bring back a bounded sense of agency in the context of online creatives.

What do I mean by “online creatives”? Well we are looking at social media entertainment – a “new screen ecology” (Cunningham and Silver 2013; 2015) shaped by new online creatives who are professionalising and monetising on platforms like YouTube, as opposed to professional spaces, e.g. Netflix. YouTube has more than 1 billion users, with revenue in 2015 estimated at $4 billion per year. And there are a large number of online creatives earning significant incomes from their content in these spaces.

Previously online creatives were bound up with ideas of democratic participative cultures, but we want to offer an immanent critique of the limits of data analytics/algorithmic culture in shaping SME from within the industry, on both the creator (bottom up) and platform (top down) sides. This is an approach to social criticism that exposes the way reality conflicts not with some “transcendent” concept of rationality but with its own avowed norms, drawing on Foucault’s work on power and domination.

We undertook a large number of interviews and from that I’m going to throw some quotes at you… There is talk of information overload – of what one might do as an online creative presented with a wealth of data. Creatives talk about the “non-scalable practices” – the importance and time required to engage with fans and subscribers. Creatives talk about at least half of a working week being spent on high touch work like responding to comments, managing trolls, and dealing with challenging responses (especially with creators whose kids are engaged in their content).

We also see cross-platform engagement – and an associated major scaling in workload. There is a volume issue on Facebook, and the use of Twitter to manage that. There is also a sense of unintended consequences – scale has destroyed value. Income might be $1 or $2 for 100,000s or millions of views. There are inherent limits to algorithmic culture… But people enjoy being part of it and reflect a real entrepreneurial culture.

In one or two sentences, the history of YouTube can be seen as a sort of clash of NorCal and SoCal cultures. Again, no-one knows what it is for. And that conflict has been there for ten years. And you also have the MCNs (Multi-Channel Networks) who are caught like the meat in the sandwich here.

Panel Q&A

Q1) I was wondering about user needs and how that factors in. You all drew upon it to an extent… And the dissatisfaction of users around whether needs are listened to or not was evident in some of the case studies here. I wanted to ask about that.

A1 – Nancy) There are lots of users, and users have different needs. When platforms change and users are angry, others are happy. We have different users with very different needs… Both of those perspectives are user needs, they both call for responses to make their needs possible… The conflict and challenges, how platforms respond to those tensions and how efforts to respond raise new tensions… that’s really at the heart here.

A1 – Jean) In our historical work we’ve also seen that some users’ voices can really overpower others – there are influential users and they sometimes drown out other voices, and I don’t want to stereotype here but often technical voices drown out those more concerned with relationships and intimacy.

Q2) You talked about platforms and how they developed (and I’m afraid I didn’t catch the rest of this question…)

A2 – David) There are multilateral conflicts about what features to include and exclude… And what is interesting is thinking about what ideas fail… With creators you see economic dependence on platforms and affordances – e.g. versus PGC (Professionally Generated Content).

A2 – Nicholas) I don’t know what user needs are in a broader sense, but everyone wants to know who unfriended them, who deleted them… And a dislike button, or an unlike button… The response was strong but “this post makes me sad” doesn’t answer that and there is no “you bastard for posting that!” button.

Q3) Would it be beneficial to expose unfriending/negative traits?

A3 – Nicholas) I can think of a use case for why unfriending would be useful – for instance wouldn’t it be useful to understand unfriending around the US elections. That data is captured – Facebook know – but we cannot access it to research it.

A3 – Stuart) It might be good for researchers, but is it in the public good? In Europe and with the Right to be Forgotten should we limit further the data availability…

A3 – Nancy) I think the challenge is that mismatch of only sharing good things, not sharing and allowing exploration of negative contact and activity.

A3 – Jean) There are business reasons for positivity versus negativity, but it is also about how the platforms imagine their customers and audiences.

Q4) I was intrigued by the idea of the “medium specificity of platforms” – what would that be? I’ve been thinking about devices and interfaces and how they are accessed… We think of ourselves as having a range of options but actually we are used to using really one or two platforms – e.g. the Apple iPhone – in terms of design, icons, etc., what the possibilities of the interface are, and what happens when something is made impossible by the interface.

A4 – Anne) When we talk about “medium specificity” we are talking about the platform itself as medium, moving beyond the end user and user experience. We wanted to take into account the role of the user – the platform also has interfaces for developers, for advertisers, etc. and we wanted to think about those multiple interfaces, where they connect, how they connect, etc.

A4 – Taina) It’s a great point about medium specificity but for me it’s more about platform specificity.

A4 – Jean) The integration of mobile web means the phone iOS has a major role here…

A4 – Nancy) We did some work with couples who brought in their phones, and when one had an Apple and one had an Android phone we actually found that they often weren’t aware of what was possible in the social media apps as the interfaces are so different between the different mobile operating systems and interfaces.

Q5) Can you talk about algorithmic content and content innovation?

A5 – David) In our work with YouTube we see forms of innovation that are very platform specific, around things like Vine and Instagram. And we also see counter-industrial forms and practices. So, in the US, we see vlogging and first person accounts of lives… beauty, unboxing, etc. But if you map content innovation you see (similarly) this taking the form of gaps in mainstream culture – in India that’s stand up comedy for instance. Algorithms are then looking for qualities and connections based on what else is being accessed – creating a virtuous circle…

Q6) Can we think of platforms as unstable, as having not quite such a uniform sense of purpose and direction…

A6 – Stuart) Most platforms are very big in terms of their finance… If you compare that to 20 years ago the big companies knew what they were doing! Things are much more volatile…

A6 – Jean) That’s very common in the sector, except maybe on Facebook… Maybe.

PA-05: Identities (Chair: Tero Jukka Karppi)

The Bot Affair: Ashley Madison and Algorithmic Identities as Cultural Techniques – Tero Karppi, University at Buffalo, USA

As of 2012 Ashley Madison is the biggest online dating site targeted at those already in a committed relationship. Users are asked to share their gender, their sexuality, and to share images. Some aspects are free but message and image exchange are limited to paid accounts.

The site was hacked in 2015, with site user data stolen and then shared. Security experts who analysed the data assessed it as real, associated with real payment details etc. The hackers’ intention was to expose cheaters but my paper is focused on a different aspect of the aftermath. Analysis showed 43 male bots and 70,000 female bots, and that is the focus of my paper. And I want to think about this space and connectivity by removing the human user from the equation.

The method for me was about thinking about the distinction between human and non-human user, the individual and the bot. Emanating from German media theory, I wanted to use cultural techniques – with materials, symbolic values, rules and places. So I am seeking elements of difference across different materials in the context of the hack and the aftermath.

So, looking at a news item: “Ashley Madison, the dating website for cheaters, has admitted that some women on its site were virtual computer programmes instead of real women.” (CNN Money), which goes on to say that users thought that they were cheating, but they weren’t after all! These bots interacted with users in a variety of ways from “winking” to messaging, etc. The role of the bot is to engage users in the platform and transform them into paying customers. A blogger talked about the space as all fake – the men are cheaters, the women are bots and only the credit card payments are real!

The fact that the bots are so gender imbalanced tells us the difference in how the platform imagines male and female users. In another commentary they comment on the ways in which fake accounts drew men in – both by implying real women were on the site, and by using real images on fake accounts… The lines between what is real and what is fake have been blurred. Commentators noted the opaqueness of connectivity here, and of the role of the bots. Who knows how many of the 4 million users were real?

The bots are designed to engage users, to appear as human to the extent that we understand human appearance. Santine Olympo talked about bots whilst others looked at algorithmic spaces and what can be imagined and created from our wants and needs. According to Ashley Madison employees the bots – or “angels” – were created to match the needs of users, recycling old images from real user accounts. This case brings together the “angel” and human users. A quote from a commentator imagines this as a science fiction fantasy where real women are replaced by perfect, interested bots. We want authenticity in social media sites, but bots are part of our mundane everyday existence and part of these spaces.

I want to finish by quoting from Ashley Madison’s terms and conditions, in which users agree that “some of the accounts and users you may encounter on the site may be fiction”.

Facebook algorithm ruins friendship – Taina Bucher, University of Copenhagen

“Rachel”, a Facebook user/informant, states this in a tweet. She has a Facebook account that she doesn’t use much. She posts something and old school friends she has forgotten comment on it. She feels out of control… And what I want to focus on today are the ordinary affects of algorithmic life, taking that idea from ?’s work and Kathleen Stewart’s approach to using this in the context of understanding the encounters between people and algorithmic processes. I want to think about the encounter and how the encounter itself becomes generative.

I think that the fetish could be one place to start in knowing algorithms… And how people become attuned to them. We don’t want to treat algorithms as a fetish. The fetishist doesn’t care about the object, just about how the object makes them feel. And so the algorithm as fetish can be a mood maker, using the “power of engagement”. The power does not reside in the algorithm, but in the types of ways people imagine the algorithm to exist and impact upon them.

So, I have undertaken a study of people’s personal algorithm stories about the Facebook algorithm, monitoring and querying Twitter for comments and stories (through keywords) relating to Facebook algorithms. A total of 25 interviews were then undertaken via email, chat and Skype.

So, when Rachel tweeted about Facebook and friendship, that gave me the starting point to understand stories and the context for these positions through interviews. And what repeatedly arose was the uncanny nature of Facebook algorithms. Take, for instance, Michael, a musician in LA. He shares a post and usually the likes come in rapidly, but this time nothing… He tweets that the algorithm is “super frustrating” and he believes that Facebook only shows paid-for posts. Like others he has developed his own strategy to show posts more clearly. He says:

“If the status doesn’t build buzz (likes, comments, shares) within the first 10 minutes or so it immediately starts moving down the news feed and eventually gets lost.”

Adapting behaviour to social media platforms and their operation can be seen as a form of “optimisation”. Users aren’t just updating their profile or hoping to be seen, they are trying to change behaviours to be better seen by the algorithm. And this takes us to the algorithmic imaginary, the ways of thinking about what algorithms are, what they should be, how they function, and what these imaginations in turn make possible. Many of our participants talked about changing behaviours for the platform. Rachel talks about “clicking every day to change what will show up on her feed” is not only her using the platform, but thinking and behaving differently in the space. Adverts can also suggest algorithmic intervention and, no matter whether the user is profiled or not (e.g. for anti-wrinkle cream), users can feel profiled regardless.

So, people do things to algorithms – disrupting liking practices, commenting more frequently to increase visibility, emphasising positively charged words, etc. These are not just interpreted by the algorithm but also shape that algorithm. Critiquing the algorithm is not enough; people are also part of the algorithm and impact upon its function.

Algorithmic identity – Michael Stevenson, University of Groningen, Netherlands

Michael is starting with a poster of Blade Runner… Algorithmic identity brings to mind cyberpunk and science fiction. But day to day algorithmic identity is often about ads for houses, credit scores… And I’m interested in this connection between this clash of technological cool vs mundane instruments of capitalism.

For critics the “cool” is seen as an ideological cover for the underlying political economy. We can look at the rhetoric around technology – “rupture talk”, digital utopianism as that covering of business models etc. Evgeny Morozov writes entertainingly of this issue. I think this critique is useful but I also think that it can be too easy… We’ve seen Morozov tear into Jeff Jarvis and Tim O’Reilly, describing the latter as a spin doctor for Silicon Valley. I think that’s too easy…

My response is this… An image of Christopher Walken saying “needs more Bourdieu”. I think we need to take seriously the values and cultures and the effort it takes to create those. Bourdieu talks about the new media field with “web native”, open, participatory, transparent at one end of the spectrum – the “autonomous pole”; and the “heteronomous pole” of mass/traditional media: closed, controlled, opaque. The idea is that actors locate themselves between these poles… There is also competition to be seen as the most open, the most participatory – you may remember a post from a few years back on Google’s idea of open versus that of Facebook. Bourdieu talks of the autonomous pole as being about downplaying income and economic value, whereas the heteronomous pole is much more directly about that…

So, I am looking at “Everything” – a site designed in the 1990s. It was built by the guys behind Slashdot. It was intended as a compendium of knowledge to support that site and accompany it – items of common interest, background knowledge that wasn’t news. If we look at the site we see implicit and explicit forms of impact… Voting forms on articles (e.g. “I like this write up”), and soft links at the bottom of the page – generated by these types of feedback and engagement. This was the first version in the 1990s. Then in 1999 Nate Oostendorp(?) developed Everything2, built with the Everything Development Engine. This is still online. Here the techniques of algorithmic identity and datafication of users are presented very explicitly – very much unlike Facebook. Among the geeks here the technology is put on top, showing reputation on the site. And being open source, if you wanted to understand the recommendation engine you could just look it up.

If we think of algorithms as talk makers, and we look back at 1999 Everything2, you see the tracking and datafication in place, but the statement around it talks about web 2.0/social media type ideas of democracy, meritocracy, conflations of cultural values and social actions with technologies and techniques. Aspects of this are bottom up, and it also talks about the role of cookies, and the addressing of privacy. And it directly says “the more you participate, the greater the opportunity for you to mold it your way”.

Thinking about Field Theory we can see some symbolic exclusion – of Microsoft, of large organisations – as a way to position Everything2 within the field. This continues throughout the documentation across the site. And within this field “making money is not a sin” – that developers want to do cool stuff, but that can sit alongside making money.

So, I don’t want to suggest this is a utopian space… Everything2 had a business model, but this was of its time for open source software. The idea was to demonstrate capabilities of the development framework, to get them to use it, and to then get them to pay for services… But this was 2001 and the bubble burst… So the developers turned to “real jobs”. But Everything2 is still out there… And you can play with the first version on an archived version if you are curious!

The Algorithmic Listener – Robert Prey, University of Groningen, Netherlands

This is a version of a paper I am working on – feedback appreciated. And this was sparked by re-reading Raymond Williams, who talks about “there are in fact no masses, but only ways of seeing people as masses” (1958/2011). But I think that in the current environment Williams might now say “there are in fact no individuals, but only ways of seeing people as individuals”. and for me I’m looking at this through the lens of music platforms.

In an increasingly crowded and competitive sector, platforms like Spotify, SoundCloud, Apple Music, Deezer, Pandora and Tidal are increasingly trying to differentiate themselves through recommendation engines. And I’ll go on to talk about recommendations as individualisation.

Pandora internet radio calls itself the “music genome project” and sees music as genes. It seeks to provide recommendations that are outside the distorting impact of cultural information – e.g. you might like “The Colour of My Love” but you might be put off by the fact that Celine Dion is not cool. They market themselves against the crowd. They play on the individual as the part separated from the whole. However…

Many of you will be familiar with Spotify, and will therefore be familiar with Discover Weekly. The core of Spotify is the “taste profile”. Every interaction you have is captured and recorded in real time – selected artists, songs, behaviours, what you listen to and for how long, what you skip. Discover Weekly uses both the taste profile and aspects of collaborative filtering – selecting songs you haven’t discovered that fit your taste profile. So whilst it builds a unique identity for each user, it also relies heavily on other people’s taste. Pandora treats other people as distortion; Spotify sees them as more information. Discover Weekly also understands the user based on current and previous behaviours. Ajay Kalia (Spotify) says:

“We believe that it’s important to recognise that a single music listener is usually many listeners… [A] person’s preference will vary by the type of music, by their current activity, by the time of day, and so on. Our goal then is to come up with the right recommendation…”

This treats identity as being in context, as being the sum of our contexts. Previously fixed categories, like gender, are not assigned at the beginning but emerge from behaviours and data. Pagano talks about this, whilst Cheney-Lippold (2011) talks about a “cybernetic relationship to individual” and the idea of individuation (Simondon). For Simondon we are not individuals; individuals are an effect of individuation, not the cause. A focus on individuation transforms our relationship to recommendation systems… We shouldn’t be asking if they understand who we are, but the extent to which the person is an effect of personalisation. Personalisation is presented as being about you and your needs. From a Simondonian perspective there is no “you” or “want” outside of technology. In taking this perspective we have to acknowledge the political economy of music streaming systems…
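Spotify’s actual pipeline is proprietary, but the collaborative filtering idea described here – recommending unheard tracks by drawing on the taste profiles of similar listeners – can be sketched in toy form. All names and play counts below are invented; real systems operate on vastly larger matrices and typically use matrix factorisation rather than direct neighbour comparison:

```python
# Toy sketch of user-based collaborative filtering; data is invented.
from math import sqrt

# listens[user][track] = play count (a minimal stand-in for a "taste profile")
listens = {
    "ana":  {"track_a": 5, "track_b": 3, "track_c": 0},
    "ben":  {"track_a": 4, "track_b": 4, "track_c": 2},
    "cara": {"track_a": 0, "track_b": 1, "track_c": 5},
}

def cosine(u, v):
    """Cosine similarity between two users' listening profiles."""
    tracks = set(u) | set(v)
    dot = sum(u.get(t, 0) * v.get(t, 0) for t in tracks)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user, listens):
    """Score unheard tracks by other users' plays, weighted by similarity."""
    me = listens[user]
    scores = {}
    for other, profile in listens.items():
        if other == user:
            continue
        sim = cosine(me, profile)
        for track, count in profile.items():
            if me.get(track, 0) == 0 and count > 0:
                scores[track] = scores.get(track, 0.0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana", listens))  # ana's unheard tracks, best first
```

Here “ana” is recommended the unheard track that similar listeners play most – the sense in which, as above, other people’s taste is treated as information rather than distortion.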

And the reality is that streaming services are increasingly important to industry and advertisers, particularly as many users use the free variants. And a developer of Pandora talks about the importance of understanding profiles for advertisers. Pandora boasts that it has 700 audience segments to date. “Whether you want to reach fitness-driven moms in Atlanta or mobile Gen X-er… “. The Echo Nest, now owned by Spotify, had created highly detailed consumer profiling before it was bought. That idea isn’t new, but the detail is. The range of segments here is highly granular… And this brings us to the point that we need to take seriously what Nick Seaver (2015) says we need to think of: “contextualisation as a practice in its own right”.

This matters as the categories that emerge online have profound impacts on how we discover and encounter our world.

Panel Q&A

Q1) I think it’s about music category but also has wider relevance… I had an introduction to the NLP process of Topic Modelling – where you label categories after the factor… The machine sorts without those labels and takes it from the data. Do you have a sense of whether the categorisation is top down, or is it emerging from the data? And if there is similar top down or bottom up categorisation in the other presentations, that would be interesting.

A1 – Robert) I think that’s an interesting question. Many segments are impacted by advertisers, and identifying groups they want to reach… But they may also…

Michael) You talked about the Ashley Madison bots – did they have categorisation, A/B testing, etc. to find successful bots?

Tero) I don’t know, but I think looking at how machine learning and machine learning history…

Michael) The idea of content filtering from the bottom to the top was part of the thinking behind Everything…

Q2) I wanted to ask about the feedback loop between the platforms and the users, who are implicated here, in formation of categories and shaping platforms.

A2 – Taina) Not so much in the work I showed, but I have had some in-depth Skype interviews with school children, and they all had awareness of some of these (Facebook algorithm) issues, press coverage and particularly the review of the year type videos… People pick up on this, and the power of the algorithm. One of the participants has emailed me since the study noting how much she sees written about the algorithm, and about algorithms in other spaces. Awareness of the algorithms shaping these spaces is growing; it is more prominent than it was.

Q3) I wanted to ask Michael about that idea of positioning Everything2 in relation to other sites… And also the idea of the individual being transformed by platforms like Spotify…

A3 – Michael) I guess the Bourdieun vision is that anyone who wants to position themselves on the spectrum, they can. With Everything you had this moment during the Internet Bubble, a form of utopianism… You see it come together somewhat… And the gap between Wired – traditional mass media – and smaller players but then also a coming together around shared interests and common enemies.

A3 – Robert) There were segments that did come from media, from radio and for advertisers and that’s where the idea of genre came in… That has real effects… When I was at High School there were common groups around particular genres… But right now the move to streaming and online music means there are far more mixed listening and people self-organise in different ways. There has been de-bunking of Bourdieu, but his work was at a really different time.

Q4) I wanted to ask about interactions between humans and non-human. Taina, did people feel positive impacts of understanding Facebook algorithms… Or did you see frustrations with the Twitter algorithms. And Tero, I was wondering how those bots had been shaped by humans.

A4 – Taina) The human and non-human, and whether people felt more or less frustrated by understanding the algorithm. Even if they felt they knew, it changes all the time; their strategies might help but then become obsolete… And practices of concealment and misinformation were tactics here. But just knowing what is taking place, and trying to figure it out, is something that I get a sense is helpful… But maybe that isn’t the right answer to it. And that notion of a human and a non-human is interesting, particularly for when we see something as human, and when we see things as non-human. In terms of some of the controversies… When is an algorithm blamed versus a human… Well, there is no necessary link/consistency there… So when do we assign humanness and non-humanness to the system, and does it make a difference?

A4 – Tero) I think that’s a really interesting question… Looking at social media now from this perspective helps us to understand that, and the idea of how we understand what is human and what is non-human agency… And what it is to be a human.

Q5) I’m afraid I couldn’t hear this question

A5 – Robert) Spotify supports what Deleuze wrote about in terms of the individual and how aspects of our personality are highlighted at the points that are convenient. And how does that affect how we regulate ourselves? Maybe the individual isn’t the most appropriate unit any more?

A5 – Taina) For users, the suggestion that they are being manipulated or can be summed up by the algorithm is what can upset or disconcert them… They don’t like to feel summed up like that…

Q6) I really like the idea of the imagined… And perceptions of non-human actors… In the Ashley Madison case we assume that men thought bots were real… But maybe not everyone did that. I think that moment of how and when people imagine and ascribe human or non-human status here. In one way we aren’t concerned by the imaginary… And in another way we might need to consider different imaginaries – the imaginary of the platform creators vs. users for instance.

A6 – Tero) Right now I’m thinking about two imaginaries here… Ashley Madison’s imaginary around the bots, and the users encountering them and how they imagine those bots…

A6 – Taina) A good question… How many imaginaries do you think?! It is about understanding more who you encounter, who you engage with. Imaginaries are tied to how people conceive of their practice in their context, which varies widely, in terms of practices and what you might post…

And with that session finished – and much to think about in terms of algorithmic roles in identity – it’s off to lunch… 

PS-09: Privacy (Chair: Michael Zimmer)

Unconnected: How Privacy Concerns Impact Internet Adoption – Eszter Hargittai, Ashley Walker, University of Zurich

The literature in this area seems to target the usual suspects – age, socio-economic status… But the literature does not tend to talk about privacy. I think one of the reasons may be the idea that you can’t compare users and non-users of the internet on privacy. But we have located a data set that does address this issue.

The U.S. Federal Communications Commission issued its National Consumer Broadband Service Capability Survey in 2009 – when about 24% of Americans were still not yet online. This work is some years old now but our interest is in the comparison rather than the numbers/percentages. And this questioned both internet users and non-users.

One of the questions was: “It is too easy for my personal information to be stolen online” and participants were asked if they strongly agreed, somewhat agreed, somewhat disagreed, or disagreed. We treated that as binary – strongly agreed or not. And analysing that we found that among internet users 63.3% strongly agreed, versus 81% of non-users. Now we did analyse demographically… It is what you would expect generally – more older people are not online (though interestingly more female respondents are online). But even accounting for demographics, internet non-users were still more likely to strongly agree with that privacy concern question.
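The recode described here – collapsing the four-point agreement scale to “strongly agreed” vs not, then comparing the proportion across users and non-users – can be illustrated with a small snippet. The responses below are invented for illustration, not the FCC data:

```python
# Invented toy responses as (group, answer) pairs; not the actual FCC survey.
responses = [
    ("user", "strongly agree"), ("user", "somewhat agree"),
    ("user", "somewhat disagree"),
    ("non-user", "strongly agree"), ("non-user", "strongly agree"),
    ("non-user", "disagree"),
]

def share_strongly_agree(rows, group):
    """Proportion of a group answering 'strongly agree' (binary recode)."""
    answers = [ans for g, ans in rows if g == group]
    return sum(ans == "strongly agree" for ans in answers) / len(answers)

# Compare the binary measure across users and non-users of the internet.
print(share_strongly_agree(responses, "user"))      # 1 of 3 users
print(share_strongly_agree(responses, "non-user"))  # 2 of 3 non-users
```

In the study’s actual data the gap ran the same way as in this toy example: non-users were the more concerned group.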

So, what does that mean? Well getting people online should address people’s concerns about privacy issues. There is also a methodological takeaway – there is value to asking non-users about internet-related questions – as they may explain their reasons.

Q&A

Q1) Was it asked whether they had previously been online?

A1) There is data on drop outs, but I don’t know if that was captured here.

Q2) Is there a differentiation in how internet use is done – frequently or not?

A2) No, I think it was use or non-use. But we have a paper coming out on those with disabilities and detailed questions on internet skills and other factors – that is a strength of the dataset.

Q3) Are there security or privacy questions in the dataset?

A3) I don’t think there are, or we would have used them. It’s a big national dataset… There is a lot on type of internet connection and quality of access in there, if that is of interest.

Note, there is more on some of the issues around access, motivations and skills in the Royal Society of Edinburgh Spreading the Benefits of Digital Participation in Scotland Inquiry report (Fourman et al 2014). I was a member of this inquiry so if anyone at AoIR2016 is interested in finding out more, let me know. 

Enhancing online privacy at the user level: the role of internet skills and policy implications – Moritz Büchi, Natascha Just, Michael Latzer, U of Zurich, Switzerland

Natascha: This presentation is connected with a paper we just published and where you can read more if you are interested.

So, why do we care about privacy protection? Well there is increased interest in/availability of personal data. We see big data as a new asset class, we see new methods of value extraction, we see growth potential of data-driven management, and we see platformisation of internet-based markets. Users have to continually balance the benefits with the risks of disclosure. And we see issues of online privacy and digital inequality – those with fewer digital skills are more vulnerable to privacy risks.

We see governance becoming increasingly important and there is an issue of understanding appropriate measures. Market solutions through industry self-regulation are problematic because of a lack of incentives – industry benefits from the data. At the same time states are not well placed to regulate because of limited knowledge and the dynamic nature of the tech sector. There is also a route through users’ self-help. Users’ self-help can be an effective method to protect privacy – whether opting out, or using privacy enhancing technology. But although we are increasingly concerned, we still share our data and engage in behaviour that could threaten our privacy online. Understanding that is crucial to understanding what can trigger users towards self-help behaviour. To do that we need evidence, and we have been collecting it through a world internet study.

Moritz: We can empirically address issues of attitudes, concerns and skills. The literature finds all of these important, but usually at most two factors are covered in any one study. Our research design and contributions use general population data, nationally representative so that they can feed into policy. The data was collected in the World Internet Project, though many questions were only asked in Switzerland. Participants were approached on landline and mobile phones. About 88% of our participants were internet users – which maps to the approximate proportion of the Swiss population using the internet.

We found a positive effect of privacy attitudes on protection behaviours – but a small one. There was a strong effect of experiencing privacy breaches on engaging in privacy protection behaviours. And general internet skills also had an effect on privacy protection. Privacy breaches – learning the hard way – do predict privacy self-protection. Caring is not enough – pro-privacy attitudes do not really predict privacy protection behaviours. But skills are central – and that can mean that digital inequalities may be exacerbated, because users with low general internet skills do not tend to engage in privacy protection behaviour.

Q&A

Q1) What do you mean by internet skills?

A1 – Moritz): In this case participants were asked a set of questions, following a model developed by Alexander von Durnstern and colleagues, asking for agreement or disagreement with a set of statements.

Navigating between privacy settings and visibility rules: online self-disclosure in the social web – Manuela Farinosi (1) and Sakari Taipale (2); 1: University of Udine; 2: University of Jyväskylä

Our work is focused on self-disclosure online, and particularly whether young people are concerned about privacy in relation to other internet users, privacy to Facebook, or privacy to others.

Facebook offers complex privacy settings allowing users to adopt a range of strategies for managing their information and sharing online. Waters and Ackerman (2011) discuss the practice of managing privacy settings and the factors that play a role, including culture, motivation, risk-taking ratio, etc. And other factors are at play here. Fuchs (2012) discusses Facebook as a commercial organisation and the concerns around that. But only some users are aware of the platform’s access to their data, and many may believe their content is (relatively) private. And for many users privacy from other people is more crucial than privacy from Facebook.

And there are differences in privacy management… Women are less likely to share their phone number, sexual orientation or book preferences. Men are more likely to share corporate information and political views. Several scholars have found that women are more cautious about sharing their information online. Nosko et al (2010) found no significant difference in information disclosure except for political information (which men still disclose more of).

Sakari: Manuela conducted an online survey in 2012 in Italy with single and multiple choice questions. It was issued to university students – 1125 responses were collected. We focused on 18-38 year old respondents, and only those using Facebook. We have slightly more female than male participants, mainly 18-25 years old. Mostly single (but not all). And most use Facebook every day.

So, a quick reminder of Facebook’s privacy settings… (a screenshot reminder, you’ve seen these if you’ve edited yours).

To the results… We found that the data most often kept private and not shared are mobile phone number, postal address or residence, and usernames of instant messaging services. The only contact data they do share is their email address. But disclosure of other types of data – birth date for instance – is high. And they were not using friends lists to manage access to their data. Our research also confirmed that women are more cautious about sharing their data, and men are more likely to share political views. The only disclosures that were not gender related were email address and date of birth.

Concerns were mainly about other users, rather than Facebook – and Italy was not substantially different in that respect. We found very consistent gender effects across our study. We also checked factors related to concerns, but age, marital status, education, and perceived level of expertise as a Facebook user did not have a significant impact. The more time you spend on Facebook, the less likely you are to care about privacy issues. Respondents’ privacy concerns were also related to disclosures by others on their wall.

So, conclusions: women are more aware of online privacy protection than men, and of protection of the private sphere. They take more active self-protection measures. And we can speculate on the reasons… There are differences in the sense of security/insecurity and risk perception between men and women, and the more sociological understanding of women as maintainers of social labour – used to taking more care of their material… Future research is needed though.

Q&A

Q1) When you asked users about privacy settings on Facebook how did you ask that?

A1) They could go and check, or they could remember.

Whose Privacy? Lobbying for the Free Flow of European Personal Data – Jockum Philip Hildén, University of Helsinki, Finland

My focus is related to political science… And my topic is lobbying for the free flow of European personal data – how the General Data Protection Regulation came into being and which lobbyists influenced the legislators. This is a new piece of regulation coming into force next year. It was the subject of a great deal of lobbying – that became visible when the regulation was in parliament, but the lobbying began much earlier than that.

So, a quick description of EU law making. The European Commission proposes legislation and that goes to both the Council of the European Union and the Parliament. Both draw up positions based on the proposal, and then that becomes the final regulation. In this particular case there was public consultation before the final regulation, so I looked at a wide range of publicly available position papers. Looking across these I could see 10 types of stakeholders offering replies – far more in 2011 than to the first version in 2009. Companies in the US participated to a very high degree – almost as much as those in the UK and France. That’s interesting… And that’s partly to do with the extended scope of the new regulation, which covers the EU but also service providers in the US and other locations. This extraterritorial reach is not exclusive to this regulation – it is known as “the Brussels effect”.

In terms of sector I have categorised the stakeholders – dividing IP and Node communications, for instance – to understand their interests. But I am interested in what they are saying, so I draw on Kluver (2013) and the “preference attainment model” to compare policy preferences of interest groups with the Commission’s preliminary draft proposal, the Commission’s final proposal, and the final legislative act adopted by the Council. So, what interests did the Council take into account? Well, almost every article changed – which makes those changes hard to pin down. But…

There is an EU Power Struggle. The Commission draft contained 26 different cases where it was empowered to adopt delegated acts. All but one of these articles were removed from the Council’s draft. And there were 48 exceptions for member states, most of them are “in the public interest”… But that could mean anything! And thus the role of nation states comes into question. The idea of European law is to have consistent policy – that amount of variance undermines that.

We also see a degree of user disempowerment. Here we see responses from Digital Europe – a group of organisations engaged in all sorts of surveillance; but we also see the American Chambers of Commerce submitting responses. In these responses both are lobbying for “implicit consent” – the original draft required explicit consent. And the Commission sort of bought into this, using a concept of unambiguous consent… which is itself very ambiguous. I then compared the Council’s position against the Free Data Advocates, and against the Privacy Advocates. The Free Data Advocates are pro free movement of data, and pro privacy – as that’s useful to them too – but they are not keen on greater Commission powers. Privacy Advocates are pro privacy and more supportive of Commission powers.

In Search of Safe Harbors – Privacy and Surveillance of Refugees in Europe – Paula Kift, New York University, United States of America

Over 2015 a million refugees and migrants arrived at the borders of Europe. One of the ways in which the EU attempted to manage this influx was to gather information on these people – in particular satellite surveillance, and data collection on individuals on arrival.

The EU does acknowledge that biometric data raises privacy issues, but holds that satellite and drone data is not personally identifiable and so not an issue here. I will argue that the right to privacy does not require the presence of Personally Identifiable Information.

As background there are two pieces of legislation. Eurosur is a regulation for gathering and sharing satellite and drone data across Member States. Although the EU justifies this on the basis of helping refugees in distress, that isn’t written into the regulation. Refugee and human rights organisations say that this surveillance is likely to enable the turning back of migrants before they enter EU waters.

If they do reach the EU, then according to Eurodac (2000) refugees must give fingerprints (if over 14 years old) and can only apply for asylum status in one country. But in 2013 this regulation was updated so that fingerprints can be used in law enforcement – which goes against EU human rights and Data Protection law. It is also demeaning and suggests that migrants are more likely to be criminal, something not backed up by evidence. It has also been proposed that photography and fingerprinting be extended to everyone over 6 years old. There are legitimate reasons for this… Refugees arrive in Southern Europe, where opportunities are not as good, and some have burned off their fingerprints to avoid registration there; so some of these measures are attempts to register migrants, and to avoid losing children once in the EU.

The EU does not dispute that biometric data is private data. But with Eurodac and Eurosur the right to data protection does not apply – they monitor boats, not individuals. But I argue that the right to private life is jeopardised here, through prejudice, reachability and classifiability… The bigger issue may actually be the lack of personal data being collected… The EU should approach boats and identify those with asylum claims, and manage others differently, but that is not what is done.

So, how is big data relevant? Well, big data can turn non personally identifiable information into PII through aggregation and combination. And classifying individuals also has implications for the design of Data Protection laws. Data protection is a procedural right, but privacy is a substantive right, less dependent on personally identifiable information. Ultimately the right to privacy protects the person, rather than the integrity of the data.
Q&A

Q1) In your research have you encountered any examples of when policy makers have engaged with research here?

A1 – Paula) I have not conducted any on-the-ground interviews or ethnographic work with policy makers, but I would suggest that the increasing focus on national security is driving this activity, whereas data protection is shrinking in priority.

A1 – Jockum) It’s fairly clear that the Council engaged with digital rights groups, and that the Commission did too. But then for every one of those groups, there are 10 lobby groups. So you have Privacy International and European Digital Rights who have some traction at European level, but little traction at national level. My understanding is that researchers weren’t significantly consulted, but there was a position paper submitted by a research group at Oxford, submitted by lawyers, though their interest was more aligned with national rather than digital rights issues.

Q2) You talked about the ? being embedded in the new legislation… You talk about information and big data… But is there any hope? We’ve negotiated for 4 years, and it won’t be in force until 2018…

A2 – Paula) I totally agree… You spend years trying to come up with a framework, but it all rests on PII… And so how do we create a Data Protection Act that respects personal privacy without being dependent on PII? Maybe the question is not about privacy but about profiles and discrimination.

A2 – Jockum) I looked at all the different sectors to look at surveillance logic, to understand why surveillance is related to regulation. Data Protection regulation is inherently problematic as it has opposing goals – to protect individuals and to enable the sharing of data… So, in that sense, surveillance logic is informing this here.

Q3) Could you outline again the threats here beyond PII?

A3 – Paula) Refugees who are aware of these issues don’t take their phones – which reduces the chance of identification but also stops potential help calls and rescues. But the risk is also about profiling… High ranking job offers are more likely to be shown to men than women… Google thinks I am between 60 and 80 years old and Jewish – I’m neither – but they think they know who I am… And that’s where the risk is here… profiling… e.g. transactions being blocked on the basis of profiles.

Q4) Interesting mixture of papers here… Many people are concerned about the social side of privacy… but know little of institutional privacy concerns. Some become more cynical… But how can we improve literacy? How can we influence people here about Data Protection laws, and privacy measures…

A4 – Esther) It varies by context. In the US the concern is with government surveillance, in the EU it’s more about corporate surveillance… You may need to target differently. Myself and a colleague wrote a paper on privacy apathy… There are issues of trust, but also work on skills. There are bigger conversations, not just with users, to be had. There are conversations to have generally with the population… Where do you infuse that, I don’t know… How do you reach adults, I don’t know?

A4 – Natascha) It is not enough to strengthen awareness and rights… Skills are important here too… You really need to ensure that skills are developed to adapt to policies and changes. Skills are key.

Q5) You talked about exclusion and registration… And I was wondering about exclusion from registration (e.g. the dead are not registered).

A5 – Paula) They collect how many are registered… But that can lead to threat inflation and very flawed data. In terms of data that is excluded there is a capacity issue… That may be the issue with deaths. The EU isn’t responsible for saving lives, but doesn’t want to be seen as responsible for those deaths either.

Q6) I wanted to come back to what you see as the problematic implications of the boat surveillance.

A6 – Paula) For many, data collection is fine until something happens to you… But if you know it takes place it can have an impact on your behaviours… So there is work to be done to understand whether refugees are aware of that surveillance. But the other issue here is the use of drone surveillance to turn people back – that has a clear impact on private lives, particularly as EU states have bilateral agreements with nations that have not all ratified refugee law – meaning turned-back boats may result in significantly different rights and opportunities.
RT-07: IR (Chair: Victoria Nash)

The Politics of Internet Research: Reflecting on the challenges and responsibilities of policy engagement

Victoria Nash (University of Oxford, United Kingdom), Wolfgang Schulz (Hans-Bredow-Institut für Medienforschung, Germany), Juan-Carlos De Martin (Politecnico di Torino, Italy), Ivan Klimov (New Economic School, Russia – not attending), Bianca C. Reisdorf (representing Bill Dutton, Quello Center, Michigan State University), Kate Coyer (Central European University, Hungary – not attending)

Victoria: I am Vicky Nash and I have convened a round table of members of the international network of internet research centres.

Juan-Carlos: I am director of the Nexa Center for Internet and Society in Italy and we are mainly computer scientists, like myself, and lawyers. We are ten years old.

Wolfgang: I am associated with two centres, primarily the one at Humboldt, where our interest is in governance and surveillance. We are celebrating our fifth birthday this year. I also work with the Hans-Bredow-Institut, a traditional media institute, multidisciplinary, and we increasingly focus on the internet and internet studies as part of our work.

Bianca: I am representing Bill Dutton. I am Assistant Director of the Quello Center at Michigan State University. We were more focused on traditional media but have moved towards internet policy in the last few years as Bill moved to join us. There are three of us right now, but we are currently recruiting for a policy post-doc.

Victoria: Thanks for that, I should talk about the department I am representing… We are in a very traditional institution but our focus has explicitly always been involvement in policy and real world impact.

Victoria: So, over the last five or so years, it does feel like there are particular challenges arising now, especially working with politicians. And I was wondering if other types of researchers are facing those same challenges – is it about politics, or is it specific to internet studies. So, can I kick off and ask you to give me an example of a policy your centre has engaged in, how you were involved, and the experience of that.

Juan-Carlos: There are several examples. One was with the regional government in our region of Italy. We were aware of data and participatory information issues in Europe. We reached out and asked if they were aware. We wanted to make them aware of opportunities to open up data, and to build on OECD work, but we were also doing some research ourselves. Everybody agreed on the technical infrastructure and at the political level… We assisted them in creating the first open data portal in Italy, and one of the first in Europe. And that was great, it was satisfying at the time. Nothing was controversial, we were following a path in Europe… But with a change of regional government that portal has been somewhat neglected, and that is frustrating…

Victoria: What motivated that approach you made?

JC: We had a chance to do something new and exciting. We had the know-how and the way it could be, at least in Italy, and that seemed like a great opportunity.

Wolfgang: At my centres I’m kind of an outsider in political governance, as I’m concerned with media. But in internet governance it feels like this is our space and we are invested in how it is governed – more so than in other areas. The example I have is from more traditional media work, at the Hans-Bredow-Institut. We were asked to produce a report on how changes in usage patterns and technology put strain on governance structures in Germany, and on where solutions are needed to make federal and state law in Germany more convergent and able to cope with those changes. But you have to be careful when providing options, because of course you can make some options more appealing than others… So you have to be clear about whether you will present them neutrally, or whether you prefer an option and present it differently. And that’s interesting and challenging as an academic, and for the role of an academic and institution.

Victoria: So did you consciously present options you did not support?

Wolfgang: Yes, we did. And there were two reasons for this… They were convinced we would come up with a suggestion and a basis to start working with… And they accepted that we would not be specifically taking a side – for the federal or local government. And also they were confident we wouldn’t attempt to mess up the system… We didn’t present the ideal, but we understood other dependencies and factors, and they trusted us to only put in suggestions that would enhance the system and work practically, not replace the whole thing…

Victoria: And did they use your options?

Wolfgang: They ignored some suggestions, but where they acted they did take our options.

Bianca: I’ll talk about a semi-successful project. We were looking at detailed postcode level data on internet access and quality, and the reasons for that. We submitted to the National Science Foundation and it was rejected; then two weeks later we were invited to an event on just that topic by the NPIA. So we are now collectively drafting suggestions, with the NPIA and a wide range of research centres involved. It was nice to be invited by policy makers… and interesting to see that idea picked up through that process in some way…

Victoria: That’s maybe an unintended consequences aspect there… And that suggestion to work with others was right for you?

Bianca: We were already keen to work with other research centres but actually we also now have policy makers and other stakeholders around the table and that’s really useful.

Victoria: Those were all very positive… Maybe you could reflect on more problematic examples…

JC: Ministers often want to show that they are consulting on policy, but often that is a gesture – a political move to be seen to listen, while policy is made in an entirely different way… After a while you get used to that. And then you have to calculate whether you participate or not – there is a time cost there.

Victoria: And for conflict of interest reasons you pay those costs of participating…

JC: Absolutely, the costs are on you.

Wolfgang: We have had contact from ministries in Germany and then discovered they were interested in the process as a public relations tool rather than having a genuine interest in the outcome. So now we assess that interest and engage – or don’t – accordingly. We try to say at the beginning “no, please speak to someone else” when needed. Humboldt is reluctant to engage in policy making – that’s a historical thing – but people expect us to get involved. We are one of the few places that can deliver monitoring on the internet, and there is an expectation to do that… And when ministries design new programmes, we are often asked to be engaged, and we have learned to be cautious about when we engage. Experience helps, but you see different ways of approaching academia – it can be PR, sometimes they want support for their position or political support, or they can actually engage with research to learn and gain expertise and information. If you can see which approach it is, you can handle it appropriately.

Victoria: I think that, as a general piece of advice, always asking “why am I being approached?” and “what are their motivations?” is very useful.

Wolfgang: I think starting from the research questions and programmes that you are concerned with gives you a counterpoint in your own thinking when dealing with requests. Then when good opportunities come up you can take them and make use of them… But the academic value of some approaches can be limited, so you need a good reason to engage in those projects and they have to align with your own priorities.

Bianca: My bad example is related to that. The Net Neutrality debate is a big part of our work… There are a lot of partisan opinions on that, and not a lot of neutral research there. We wanted to do a big project there, but when we tried to get funding for it we were steered to stay away. We’ve been told that talking about policy with policy makers is very negative, it is taken poorly. This debate has been bouncing around for 10 years; we want to see whether, where Net Neutrality is imposed, we see changes in investment… But we need funding to do that… And funders don’t want to do it and are usually very cosy with policy makers…

Victoria: This is absolutely an issue, these concerns are in the minds of policy makers as well and that’s important.

Wolfgang: When we talk about research in our field and policy makers, it’s not just about when policy makers approach you to do something… When you have a term like Net Neutrality at the centre, it requires you to be either neutral or not neutral, and that really shapes how you handle it as an academic… You can become, without wanting it, someone promoting one side. On a protection of minors issue we did some work on co-regulation with Australia that seemed to solve a problem… But then the debate moved to Germany, and as they started drafting the inter-state treaty on media regulation the policy makers were interested… And then we felt that we should support it… and I entered the stage, but it’s not my question anymore… So you come to have an opinion about how you want something done…

JC: As coordinator of a European project, there was a call that included the topic of “Net Neutrality” – we made a proposal, but what happened afterwards clearly proved how politically charged that whole area was. It was in the call… But we should have framed it differently. Again, at European level you see the Commission fund research, you see the outcomes, and then they put out a call that entirely contradicts the work that they funded, for political reasons. There is such a drive for evidence-based policy making that it is important for them to frame it that way… It is evidence-based when it fits their agenda, not when it doesn’t.

Victoria: I did some work with the Department for Culture, Media and Sport last year, again on protection of minors, and we were told at the outset to assume porn caused harm to minors. And the frame of reference was shaped to be technical – about access etc. They did bring in a range of academic expertise, but the terms of reference really constrained the contribution that was possible. So, there are real bear traps out there!

Wolfgang: A few years back the European Commission asked researchers to look at broadcasters, interruptions to broadcasts, and the role of advertising. Even though we need money, we did not do that – it wasn’t answering interesting research questions for us.

Victoria: I raised a question earlier about the specific stakes that academia has in the internet – it isn’t just what we study. Do you want to say more about that?

Wolfgang: Yes, at the pre-conference we had an STS stream… People said “of course we engage with policy” and I was wondering why that is the default position… The internet comes from academia and there is a long-standing tradition of engagement in policy making. Academics do engage with media policy, but they wouldn’t class that as “our domain” because they were not there at the beginning – whereas academia was part of the beginning of the internet.

Q&A

Q1) I wonder if you are mistaking the “of-ness” with the fact that the internet is still being formed, still in the making. Broadcast is established, the internet is in constant construction.

A1 – Wolfgang) I see that

Q1) I don’t know about Europe but in the US since the 1970s there have been deliberate efforts to reduce the power of decision makers and policy makers to work with researchers…

A1 – Bianca) The Federal Communications Commission is mainly made of economists…

Q1) Requirements and roles constrain activities. The assumption of evidence-based decisions is no longer there.

Q2) I think that there is also the issue of shifting governance. Internet governance is changing and so many academics are researching the governance of the internet, we reflect greatly on that. The internet and also the governance structure are still in the making.

Victoria: Do you feel like if you were sick of the process tomorrow, you’d still want to engage with policy making?

A2 – Phoebe) We are a publicly funded university and we are focused on digital inequalities… We feel a real responsibility to get involved, to offer advice and opinions based on our research. On other topics we’d feel less responsible, depending on the impact it would have. It is a public interest thing.

A2 – Wolfgang) When we look at our mission at the Hans-Bredow-Institut, we have a vague and normative mission – we think a functioning public sphere is important for democracy… Our tradition is research into public spheres… We have a responsibility there. But there is also the issue that the evaluation of academic research becomes more and more important, yet there is no mechanism to ensure researchers answer the problems that society has… We have a completely divided set of research councils, and their yardsticks are academic excellence. State broadcasters do research but with no peer review at all… There are some calls from the Ministry of Science that are problem-orientated, but on the whole there isn’t that focus on social issues and relevance in the reward process, in the understanding of prestige.

Victoria: In the UK we have a bizarre dichotomy where research is measured against two measures: impact – where policy impact has real value, and that applies in all fields – but there is also regulation that you cannot use project funds to “lobby” government, which means you potentially cannot communicate research to politicians who disagree. This happened because a research organisation (not a university) opposed government policy with research funded by them… The implications for universities are currently unclear.

JC: Italy is implementing a similar system to the UK. Often there is no actual mandate on a topic, so individuals come up with ideas without numbers and plans… We think there is a gap – but it is work for government and ministries. We are funded to work in the national interest… But we need resources to help there. We are filling gaps in a way that is not sustainable in the long term really – you are evaluated on other criteria.

Q3) I wanted to ask about policy research… I was wondering if there is policy research we do not want to engage in. In Europe, and elsewhere, there is increasing pressure to attract research funding… What are the guidelines or principles around what we do or do not go for, funding wise?

A3 – Bianca) We are small so we go for what interests us… But we have an advisory board that guides us.

A3 – Wolfgang) I’m not sure that there are overarching guidelines – there may be for other types of special centres – but it’s an interesting thing to have a more formalised exchange like we have right now…

A3 – JC) No, no blockers for us.

A3 – Victoria) Academic freedom is vigorously held up at Oxford but that can mean we have radically different research agendas in the same centre.

Q4) With that lack of guidance, isn’t there a need for academics to show that they can be trusted, especially in the public sphere, especially when getting funding from, say, Google or Microsoft? And how can you embed that trust?

A4 – Wolfgang) I think peer review as a system functions to support that trust. But we have to think about other institutional settings, and whether there is enough oversight… Many associations, like Leibniz, require an institutional review board to look over the research agenda and ensure some outside scrutiny. I wouldn’t say every organisation or research centre needs that – it can be helpful but costly, in terms of time in particular. And you cannot trust the general public to do that, you need it to be peers. An interesting question though, especially as Humboldt has funding from Google… In this network academics play a role, and organisations play a role, and you have to understand the networks and relationships of partners you work with, and their interests.

A4 – Bianca) That’s a question that we’ve faced recently… There is a concern that corporate funding may sway results, and the best way to face that is to publish methodology, questionnaires, process… to ensure the work is understood in a context that enables trust in the work.
A4 – JC) We spent years trying to deal with the issue of independence and it is very important as academia has responsibility to provide research that is independent and unbiased by funding etc. And not just about the work itself, but also perceptions of the work… It is quite a local/contextual issue. So, getting money from Google is perceived differently in different countries, and at different times…
Victoria: This is something we have to have more conversations about this. In medicine there is far more conversation about codes of conduct around funding. I am also concerned that PhD funding is now requiring something like a third of PhDs to be co-funded by industry, without any understanding from UK Government about what that means and what that means for peer review… That’s something we need to think about far more stringently.
Q5) For companies there are requirements to review outputs before publications to check for proprietary information and ensure it is not released. That makes industry the final arbiter here. In Canada our funding is also increasingly coming from industry and there that means that proprietary data gives them final say…
A5 – Bianca) Sometimes it has to be about negotiating contracts and being clear what is and is not acceptable.
Victoria) That’s my concern with new PhD funding models, and also with use of industry data. It will be non-negotiable that the research is not compromised but how you make that process clear is important.
Q6) What are your funding models here – are you academic or outside academia?
A6 – JC) Academic and policy are part of the work we are funded to do.
A6 – Bianca) We are 99% endowment funded, hence having a lot of freedom, but also advisory board guidance.
A6 – Wolfgang) Our success is assessed by academic publication. The Humboldt Institute is funded largely by private companies – though a range of them – and also from grants. The Hans-Bredow-Institut is mainly directly funded by the Hamburg Ministry of Science, but we’d like to be funded by other funders across Germany.
A6 – Victoria) Our income is research income, teaching income from masters degrees… We are a department of the university. Our projects are usually policy related, but not always government related.
Q7) I was wondering if others in the room have been funded for policy work – my experience has been that policy makers had expectations and an idea of how much control they wanted… By contrast, money from Google comes with a “research something on the internet” type of freedom. This is not what I would have expected, so I just wondered how others’ experiences compared.
Comment) I was asked to do work across Europe with public sector broadcasters… I don’t know how well my report was seen by policy makers but it was well received by the public sector broadcaster organisations.
Comment) I’ve had public sector funding, foundation funding… But I’ve never had corporate money… My cynical take is that corporations maybe are doing this as PR, hence not minding what you work on!
Comment) I receive money from funding agencies, I did a joint project that I proposed to a think tank… Which was orientated to government… But a real push for impact… Numbers needed to be in the title. I had to be an objective researcher but present it the right way… And that worked with impact… And then the government offered me a contract to continue the research – working for them not against them. The funding was coming from a position close to my own idea… I felt it was a bit instrumentalised in this way…
A7 – Wolfgang) I think that it is hard to generalise… Companies as funders do sometimes make demands and expect control of publishing of results… And whether it is published or not. We don’t do that – our work is always public domain. It’s case by case… But there is one aspect we haven’t talked about and that is the relationship between the individual researcher and their political engagement (or not) and how that impacts upon the neutrality of the organisation. As a lawyer I’m very aware of that… For instance if giving expert evidence in court, the importance of being an individual not the organisation. Especially if partners/funders before or in the future are on the opposite side. I was an expert for Germany in a court case, with private broadcasters on the other side, and you have to be careful there…
A7 – JC) There is so little money for research in Italy… Regarding corporations… We got some money from Google to write an open source library, it’s out there, it’s public… There was no conflict there. But money from companies for policy work is really difficult. But lots of case by case issues in-between.
Q8) But companies often fund social science work that isn’t about policy but has impact on policy.
A8 – JC) We don’t do social science research so we don’t face that issue.
A8 – Victoria) Finding ways to make that work that guarantees independence is often the best way forward – you cannot and often do not want to say no… But you work with codes of conduct, with advisory board, with processes to ensure appropriate freedoms.
JC: A question to the audience… A controversial topic arises, one side owns the debate and a private company approaches to support your voice… Do you take their funding?
Comment) I was asked to do that and I kind of stalled so that I didn’t have to refuse or take part, but in that case I didn’t feel…
Comment) If having your voice in the public triggers the conversation, you do make it visible and participate, to progress the issue…
Comment) Maybe this comes down to personal versus institutional points of view. And I would need to talk to colleagues to help me make that decision, to decide if this would be important or not… Then I would say yes… A better solution is to say “no, I’m talking in a private capacity”.
JC) I think that the point of separating individual and centres here is important. Generally centres like ours do not take a position… And there is an added element that if a corporation wants to be involved, a track record of past behaviour makes it less troublesome. Saying something for 10 years gives you credibility in a way that suddenly engaging does not.
Wolfgang) In Germany, if your arguments are not being heard, then you engage expertise – that is general practice in German legal academia. It is ok I think.
Comment) In the Bundestag they bring in experts… But of course the choice of expert reflects values and opinions made in articles. So you have a range of academics supporting politics… If I am invited to talk to parliament, I say what I always say “this is not a problem”.
Victoria: And I think that nicely reminds us why this is the politics of internet research! Thank you.
Plenary Panel: Who Rules the Internet? Kate Crawford (Microsoft Research NYC), Fieke Jansen (Tactical Tech), Carolin Gerlitz (University of Siegen) – Chair: Cornelius Puschmann
Jennifer Stromer-Galley, President of the Association of Internet Researchers: For those of you who are new to the AoIR, this is our 17th conference and we are an international organisation that looks at issues around the internet – now including those things that have come out of the internet, including mobile apps. And in our panel today we will be focusing on governance issues. Before that I would like to acknowledge this marvellous city of Berlin, and to thank all of my colleagues in Germany who have taken such care, and Humboldt University for hosting us in this beautiful venue. And now, I’d like to hand over to Herr Matthias Graf von Kielmansegg, representing Professor Dr Elizabeth Wacker, Federal Minister of Labour and Social Affairs.
Matthias Graf von Kielmansegg is here representing Professor Wacker, who takes a great interest in internet and society, including the issues that you are looking at here this week. If you are not familiar with our digitisation policy, the German government published a digital agenda for the first time two years ago, covering all areas of government operation. In terms of activities it concentrates on the 2013-2017 term and will need to be extended, but strategically it reaches far into the next decade. Additionally, we have a regular summit bringing together the private sector, unions, government and the academic world to look at key issues.
You all know that digital is a fundamental game changer in the way goods and services are used and the ways we communicate and collaborate, and digital loosens our ties to time and place… And we aren’t at the end but in the middle of this process. Wikipedia was founded 16 years ago, the iPhone launched 9 years ago, and now we talk about Blockchain… So we do not know where we will be in 10 or 20 years’ time. And good education and research are key to that. And we need to engage proactively. In Germany we are incorporating the Internet of Things into our industries. In Germany we used to have a technology-driven view of these things, but now we look at economic and cultural contexts or ecosystems to understand digital systems.
Research is one driver; the other is that science, education, and research are users in their own right. Let me focus first on education… Here we must answer some major questions – what will drive change here, technology or pedagogy? Who will be the change agents? And what of the role of teachers and schools? They must take the lead in change and secure the dominance of pedagogy, using digital tools to support our key education goals – and not vice versa. And that means digital education must offer more opportunities, more flexibility, and better preparation for tomorrow’s world of work. With this in mind we plan to launch a digital education campaign to help young people find their place in an ever changing digital world, and to be ready to adapt to the changes that arise. How can education support our economic model and higher education? And we will need to address issues of technical infrastructure and governance – and, for us, how this plays out with our 16 federal states. Closer to your world is the world of science. Digital tools create huge amounts of new data and big data. The challenge organisations face is not just infrastructure but how to access and use this data. We call our approach Securing the Life Cycle of Data, concerned with access, use, reuse, and interoperability. And how will we decide what we save, and what we delete? And who will decide how third parties use this data? And big data goes alongside other aspects such as high powered computing. We plan to launch an initiative of action in this area next year. To oversee this we have a Scientific Oversight Body with stakeholders. We are also keen to embrace Open Data and the resources to support that. We have added new conditions to our own funding conditions – any publication based on research funded by us must be published open access.
More needs to be known about internet and society, and there is research to be done. So, the federal government has decided to establish a German Internet Institute. It will address a number of areas of importance: access and use of the digital world; work and value creation; and our democracy. We want an interdisciplinary team of social scientists, economists, and information scientists. The competitive selection process is just underway, and we expect the winner to be announced next spring. There is readiness to spend up to €15M over the first five years. And this highlights the importance of the digital world in Germany.
Let me just make one comment. The overall title of this conference is Internet Rules! It is still up to us to be the fool or the wise… We need to understand what might happen if politics, economics and society do not find the answers to the challenges we face. And so hopefully we will find that it’s not the internet that rules, but that democracy rules!
Kate Crawford
When Cornelius asked me to look at the idea of “Who rules the internet?” I looked up at my bookshelf, and found lots of books written by people in this community, many of you in this room, looking at just this question. And we have moved from the ’90s utopianism to the world of infrastructure, socio-technical aspects, the Internet of Things layer – and zombie web cams being coopted by hackers. So many of you have enhanced my understanding of this issue.
Right now we see machine learning and AI being rapidly built into our world without the implications being fully understood… I am talking narrowly about AI here… Sometimes they have lovely feminine names: Siri, Alexa, etc… But these systems are embedded in our phones, and we have AI analysing images on Facebook. It will never be separate from humans, but it is distinct and significant, and we see AI go beyond the internet and into systems – deciding who gets released from jail, hospital stays, etc. I am sure all of us were surprised by the fact that Facebook, last month, censored a Pulitzer Prize winning image of a girl being napalmed in Vietnam… We don’t know the processes that triggered this, though an image of a nude girl likely triggers these processes… Once that had attention, the Government of Norway accused Facebook of erasing our shared history. The image was restored but this is the tip of the iceberg – most images and actions are not so apparent to us…
This lack of visibility is important but it isn’t new… There are many organisational and procedural aspects that are opaque… I think we are having a moment around AI where we don’t know what is taking place… So what do we do?
We could make them transparent… But this doesn’t seem likely to work. A colleague and I have written about the history of transparency, and availability of code does not necessarily tell you exactly what is happening and how it is used. Y Combinator has installed a system – brilliantly called HAL 9000 – and have boasted that they don’t know how it filters applications, only the system could do that. That’s fine until that system causes issues, denies you rights, gets in your way…
So we need to understand these algorithms from the outside… We have to poke them… And I think of Christian Sandvig’s work on algorithmic auditing. Christian couldn’t be here this evening and my thoughts are with him. But he is also part of a group who are trying to pursue legal rights to enable this type of research.
And there are people who say that AI can fix this system… This is something that the finance sector talks about. They have an environment of predatory machine learning systems hunting each other – Terry Cary(?) has written about this. It’s tempting to create a “police AI” to watch these… I’ve been going back to 1970s books on AI, and the work of Joseph Weizenbaum, who created ELIZA. He suggested that if we continue to ascribe human qualities to AI systems it might be a slow-acting poison. It is a reminder not to be seduced by these new forms of AI.
Carolin Gerlitz, University of Siegen
I think, after the last few days, the answer to the question of “who rules the internet?” is “platforms”!
Their rules about who users are and what they can do can seem very rigid. Before Facebook introduced the emotions, the Like button was used in a range of ways. With the introduction of emotions they have rigidly defined responses, creating discrete data points that are advertiser ready and available to be recombined.
There are also rules around programmability that dictate what data can be extracted, how, by whom, and in what ways… And platforms also like to keep the interpretation of data under their control, and adjust the rules of their APIs. Some of you have been working to extract data from platforms where things are changing rapidly – Twitter API changes, Facebook API and research changes, Instagram API changes – all increasingly restricting access, all dictating who can participate. And limiting the opportunity to hold platforms to account, as my colleague Anne Helmond argues.
Increasingly platforms are accessed indirectly through intermediaries which create their own rules – a cascade of rules for users to engage with. Platforms don’t just extend to the web but also to apps, as many of you have been writing about… And Christian, if he were here today, would talk about the increasing role of platforms in this way…
And platforms reach out not only to users but also to non-users. These spaces are also contextual – with place, temporality and the role of commercial content all important here.
These rules can be characterised in different ways… There is a dichotomy of openness and closedness: much of what takes place is hidden and dictated by cascading rule sets. And then there is the issue of evaluation – what counts, for whom, and in what way? Taylorism refers to the mass production of small tasks – and platforms work in this fine-grained, algorithmic way. But platforms don’t just earn money from users’ repetitive actions, or from use of platform data by third parties. They “put life to work” (Lazzarato) by using data points, raising questions of who counts and what counts.
Fieke Jansen, Tactical Tech
I work at an NGO, on the ground in real world scenarios. And we are concerned with the Big Five: Apple, Amazon, Google, Microsoft and Facebook. How did we get like this? People we work with are uncomfortable with this. When we ask activists to draw the internet, they mostly draw a cloud. We asked at a session “what happens if the government bans Facebook?” and they cannot imagine it – and if Facebook is beyond government then where are we at here? And I work with an open source company who use Google Apps for Business – and that seems like an odd situation to me…
But I’ll leave the Big Five for now and turn to Bitnik… They used their Random Darknet Shopper and bought random stuff for $50… and then placed it in a gallery… They did…
ICWatch… After Wikileaks an activist in Berlin found the surveillance services’ staff on LinkedIn and worked out who was working for the secret service… But that triggered a real debate… There was real discussion of it being anti-patriotic, and of putting people in danger… But the data he used, from LinkedIn, is sold every day… He just used it in a way that raised debate. We allow that selling of data… but this coder’s work was not allowed… Isn’t that debate needed?
So, back to the Big Five. In 2014 Google (now Alphabet) was the second biggest company in the world – with a value equivalent to a GDP bigger than Austria’s. We choose to use many of their services every day… But many of their services are less in our face. In the sensor world we have fewer choices about data… And with the big companies it is political too… In Brussels you have to register lobbyists – there are 9 for Google, and 7 of them used to work for the European Parliament… There is a revolving door here.
There is also an issue of skill… Google has wealth and power and knowledge that are very hard to counter. Facebook has around 400m active users a month and 300m likes a day, and is worth around $190bn… And here we miss the political influence. They have an enormous drive to conquer the global south… They want to roll out Facebook Zero as “the internet”…
So, who rules the internet? It’s the 1% of the 1%… It is the Big Five, but also the venture capitalists who back them… Sequoia and Kleiner Perkins Caufield & Byers, and you have Peter Thiel… It is very few people behind many of the biggest companies including some of the Big Five…
People use these services because they work well and work easily… I only use open source… Yes, it is harder… Why are so few questioning and critiquing that? We feed the beast on an everyday basis… It is our universities – also moving to decentralised Big Five platforms in preference to their own – it is our government… and if we are not critical, what happens?
Panel Discussion
Cornelius: Many here study internet governance… So I want to ask, Kate, does AI rule the internet?
Kate: I think it is really hard to say who rules the internet. The interesting thing is that automated decision-making networks have been with us for a while… It’s less about ruling, and who… and more about the entanglements, fragmentation and governance. We talk about the Big Five… I would probably say there are seven companies here, deciding how we get into university, healthcare, housing – filtering far beyond the internet… And governments do have a role to play.
Cornelius: How do we govern what we don’t understand?
Kate: That’s a hard question… That question keeps me up at night… Governments look to us academics, the technology sector, NGOs, trying to work out what to do. We need really strong research groups to look at this – we tried to do this with AI Now. Interdisciplinarity is crucial – these issues cannot be solved by computer science alone or social science alone… This is the biggest challenge of the next 50 years.
Cornelius: What about how national governments can legislate for Facebook, say? (I’m simplifying a longer question that I didn’t catch in time here, correction welcome!)
Carolin: I’m not sure about Facebook, but in our digital methods workshop we talked about how on Twitter content can be deleted, yet then be exposed in other locations via the API. And it is also the case that these services are specific and localised… We expect national governments to have some governance, when what you understand and how you access information varies by location… increasing that uncanny notion. I also wanted to comment on something you asked Kate – thinking about the actors here, they all require the engagement of users, something Fieke pointed to. Those actors involved in ruling are dependent on the actions of other actors.
Cornelius: So how else could we be running these things? The Chinese option, the Russian option – are there better options?
Carolin: I think I cannot answer – I’d want to put it to these 570 smart people for the next two days. My answer would be to acknowledge the distributedness to which we have to respond and react… We cannot understand algorithms and AI without understanding context…
Carolin: Fieke, what you talked about… being extreme… Are we whining because, as Europeans, we are being colonised by other areas of the world, even as we use and are obsessed by our devices and tools – complaining, then checking our iPhones? I’m serious… If we did care that much, maybe actions would change… You said people have the power here – maybe it’s not a big enough issue…
Fieke: Is it Europeans concerned about Americans from a libertarian point of view? Yes. I work mainly in non-European parts of the world, and particularly in North America… For many the internet is seen as magical and neutral – but those of us who research it know it is not. But when you ask why people use tools, it’s their friends or community. If you ask them who owns it, that raises questions that are framed in a relevant way. The framing has to fit people’s reality. In South America, if you talk of Facebook Zero as the new colonialism, you will have a political conversation… But we also don’t always know why we are uncomfortable… It can feel abstract, distant, and the concern is momentary. Outside of this field, people don’t think about it.
Kate: Your provocation is that we could just step away and move to open source. The reality includes opportunity costs to employment, to friends and family… But even if you do none of those things, you walk down the street and you are tracked by sensors, by other devices…
Fieke: I absolutely agree. All the data collected beyond our control is the concern… But we can’t just roll over and die, we have to try and provoke and find mechanisms to play…
Kate: I think that idea of what the political levers may be… Those conversations about legal, ethical and technical parameters seem crucial, more so than consumer choice. But I don’t think we have sufficient collective models for changing information ecologies… and they are changing so rapidly.
Q&A
Q1) Thank you for this wonderful talk and these perspectives. You talked about the infrastructure layer… What about that question? You say this 1% of the 1% own the internet, but do they own the infrastructure? Facebook is trying to balloon in the internet so that they cannot be cut off… It also – second question – used to be said that YOU own the internet, and that changed the dominance of big companies… This happens in history quite often… So what about that?
A1 – Fieke) I think that Kate talked about the many levels of ownership… Facebook piggybacks on other infrastructures, Google does the balloons. It used to be that government owned the infrastructure. There are new cables rolling out… EU funding, governments, private companies, rich people… The infrastructure is mainly owned by companies now.
A1 – Kate) I think infrastructure studies has been extraordinarily rich – the work of Nicole Starosielski for instance – but also we have art responses. Infrastructure is very of the moment… But what happens next? It is not just about infrastructures and their ownership, but also surveillance access to these. There are things like MESH networks… And there are people working here in Berlin to flag up fake police networks during protests to help protestors protect themselves.
A1 – Carolin) I think that platforms would have argued differently ten years ago about who owned the internet – but “you” probably wouldn’t have been the answer…
Q2) I wonder if the real issue is that we are running on very vague ideas of government that have been established for a very different world. People are responding to elections and referenda in very irrational ways that suggest that model is not fit for purpose. Is there a better form of governance or democracy that we should move towards? Can AI help us there?
A2 – Kate) What a beautiful and impossible to answer question! Obviously I cannot answer that properly but part of the reason I do AI research is to try to inform and shape that… Hence my passion for building research in this space. We don’t have much data to go on but the imaginative space here has been dominated by those with narrow ideas. I want to think about how communities can develop and contribute to AI, and what potential there is.
Q3) Do we need to rethink what we mean by democratic control and regulation… Regulation is closely associated with nation states, but that’s not the context in which most of the internet operates. Do we need to re-engage with the question of globalisation again?
A3) As Carolin said, who is the “you” in Web 2.0, and whose narrative is there? Globalisation is similar. I pay taxes to a nation state that has rule of law and governance… By denying that, they buy into the narrative of, mainly, internet companies and huge multinational organisations.
Cornelius: I have A Declaration of the Independence of Cyberspace by John Perry Barlow, which I was tempted to quote to you… But it is interesting to reflect on how we have moved from utopian positions to where we are today.
Q4 – participant from Google!) There is an interesting question here… if this question were pointing to a deeper truth… A clear ruler, and a single internet, would allow this question of who rules to be answered. I would instead ask how we have agency over the proliferation of internet technologies and how we benefit from them…?
A4 – Kate) A great title, but long for the programme! But your phrasing is so interesting – if it is so diverse and complex then how we engage is crucial. I think that is important but, the optimistic part, I think we can do this.
A4 – Carolin) One way to engage is through dissent… and negotiating on a level that ensures platforms work beyond economic values…
Q5) The last time I was forced to give away my data it was by the Australian state (where I live) in completing the census… I had to complete it or I would be fined over $1000 AUD – Facebook, Twitter, etc. never did that… I rule this kind of internet, I am still free in my choices. But on the other hand, why is it that the states that are best at governing platforms are the ones I want to live in the least? Maybe without the platforms no-one would use the internet, so we’d have one problem less… If we as academics think about platforms in these mythic ways, maybe we end up governing in a way that is more controlled and has undesirable effects.
A5 – Kate) Many questions there, I’ll address two of those. On the census I’d refer you to articles on this… A University of Cambridge study showed huge accuracy in determining marital status, sexuality and whether someone is a drug or alcohol user based on Facebook likes… You may feel free, but those data patterns are being built. And we have to move beyond thinking that only by active participation do you contribute to these platforms…
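The general idea behind such like-based prediction can be sketched very roughly. This is a toy illustration only – the pages, users and trait labels below are invented, and the actual Cambridge study used real profile data and richer statistical models:

```python
# Toy sketch: treating binary "likes" as features for predicting a trait.
# All pages, users and labels here are invented for illustration.
from collections import defaultdict
from math import log

def train_like_model(users):
    """Count how often each liked page co-occurs with a trait (Laplace-smoothed)."""
    counts = defaultdict(lambda: [1, 1])  # page -> [with_trait, without_trait]
    totals = [1, 1]                       # [users with trait, users without]
    for likes, has_trait in users:
        idx = 0 if has_trait else 1
        totals[idx] += 1
        for page in likes:
            counts[page][idx] += 1
    return counts, totals

def predict(counts, totals, likes):
    """Naive-Bayes-style log score: positive means 'resembles the trait group'."""
    score = log(totals[0]) - log(totals[1])  # prior
    for page in likes:
        with_t, without_t = counts[page]
        score += log(with_t / totals[0]) - log(without_t / totals[1])
    return score > 0

# Invented training data: (set of liked pages, has_trait?)
training = [
    ({"page_a", "page_b"}, True),
    ({"page_a", "page_c"}, True),
    ({"page_d"}, False),
    ({"page_d", "page_e"}, False),
]
counts, totals = train_like_model(training)
print(predict(counts, totals, {"page_a"}))  # → True
```

Even this crude counting model shows how seemingly innocuous likes become predictive features once aggregated – which is Kate’s point about data patterns being built whether or not you feel observed.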
A5 – Fieke) The census issue you brought up is interesting… In the UK, US and Australia the census is conducted by a contractor that is one of the world’s biggest arms manufacturers… You don’t give data to the Big Five… but… So we do need to question the politics behind our actions… There is also a perception that having technical skills makes you superior to those without, and if we go along with that we create a whole new class system, and that raises whole new questions.
Q6) The question of internet raises issues of boundaries, and how we do governance and work of governance and rule-making. Ideally when we do that governance and rule-making there are values behind that… So what are the values that you think need to underlie those structures and systems…
A6 – Carolin) I think values that do not discriminate against people through algorithmic processing, AI, etc. Those tools should allow people not to be discriminated against on the basis of things they have done in the past… But that requires understanding of how that discrimination is taking place now…
A6 – Kate) I love that question… All of these layers of control come with values baked in, we just don’t know what they are… I would be interested to see what values drop out of those systems – the values that don’t fit the easy metricisation of our world. Some great things could fall out of feminist and race theory and the values from that…
A6 – Fieke) I would add that values should not just be about the individual, and should ensure that the collective is also considered…
Cornelius: Thank you for offering a glimmer of hope! Thank you all!
Oct 05 2016

If you’ve been following my blog today you will know that I’m in Berlin for the Association of Internet Researchers AoIR 2016 (#aoir2016) Conference, at Humboldt University. As this first day has mainly been about workshops – and I’ve been in a full day long Digital Methods workshop – we do have our first conference keynote this evening. And as it looks a bit different to my workshop blog, I thought a new post was in order.

As usual, this is a live blog post so corrections, comments, etc. are all welcomed. This session is also being videoed so you will probably want to refer to that once it becomes available as the authoritative record of the session. 

Keynote: The Platform Society – José van Dijck (University of Amsterdam) with Session Chair: Jennifer Stromer-Galley

We are having an introduction from Wolfgang (?) from Humboldt University, welcoming us and noting that AoIR 2016 has made the front page of a Berlin newspaper today! He also notes the hunger for internet governance information, understanding, etc. from German government and from Europe.

Wolfgang: The theme of “Internet Rules!” provides lots of opportunities for keynotes, discussions, etc. and it allows us to connect the ideas of internet and society without deterministic structures. I will now hand over to the session chair Cornelius Puschmann.

Cornelius: It falls to me to do the logistical stuff… But first we have 570 people registered for AoIR 2016  so we have a really big conference. And now the boring details… which I won’t blog in detail here, other than to note the hashtag list:

  • Official: #aoir2016
  • Rebel: #aoir16
  • Retro: #ir17
  • Tim Highfield: #itisthesevebeenthassociationofinternetresearchersconferenceanditishappeningin2016

And with that came a reminder of some of the more experimental parts of the programme to come.

Jennifer: Huge thanks to all of my colleagues here for turning this crazy idea into this huge event with a record number of attendees! Thank you to Cornelius, our programme chair.

Now to introduce our speaker… José van Dijck, professor at the University of Amsterdam, who has also held visiting posts across the world. She is the first woman to hold the Presidency of the Royal Netherlands Academy of Arts and Sciences. Her most recent book is The Culture of Connectivity: A Critical History of Social Media. It takes a critical look back at social media and social networking, not only as social spaces but as business spaces. And her lecture tonight will give a preview of her forthcoming work on public values in a platform society.

José: It is lovely to be here, particularly on this rather strange day… I became President of the Royal Academy this year and today my colleague won the Nobel Prize in Chemistry – so instead of preparing for my keynote today I was dealing with press inquiries. It is nice to focus back on my real job…

So a few years ago Thomas Poell wrote an article on the politics of social platforms. His work on platforms inspired my work on networked platforms being interwoven into an ecology, economically and socially. Since I wrote that book – the last chapter of which is on platforms – many of these have now become the main players… I talked about Google (now Alphabet), Facebook, Amazon, Microsoft, LinkedIn (now owned by Microsoft), Apple… And since then we’ve seen other players coming in and creating change – like Uber, AirBnB, Coursera. These platforms have become the gateways to our social life… And they have consolidated and expanded…

So a Platform is an online site that deploys automated technologies and business models to organise data streams, economic interactions, and social exchanges between users of the internet. That’s the core of the social theory I am using. Platforms ARE NOT simple facilitators, and they are not stand-alone systems – they are interconnected.

And a Platform Ecosystem is an assemblage of networked platforms, governed by its own dynamics and operating on a set of mechanisms…

Now a couple of years ago Thomas and I wrote about platform mechanisms and the very important idea of “Datafication”. Commodification – a platform’s business model and governance – defines the way in which datafied information is transformed into (economic, societal) value. There are many business models and many governance models – they vary, but governance models are maybe more important than business models, and they can be hard to pin down. Selection is about data flows filtered by algorithms and bots, allowing for automated selection such as personalisation, rankings, reputation. Those mechanisms are not visible right now, and we need to make them explicit so that we can talk about them and their implications. Can we hold Facebook accountable for Newsfeed in the ways that traditional media are accountable? That’s an important question for us to consider…

The platform ecosystem is not a level playing field. Platforms are gaining traction not through money but through the number of users. And network effects mean that user numbers are the way we understand the size of the network. There is Platformisation (thanks Anna?) across sectors… And that power is gained through cross-ownership and cross-platform working, but also through their architecture and shared platforms. In our book we’ll look at both private and public sectors and how they are penetrated by platform ecosystems. We used to have big oil companies, or big manufacturing companies… But now big companies operate across sectors.

So transport for instance… Uber is huge, partly financed by Google and also in competition with Google. If we look at News as a sector we have Huffington Post, Buzzfeed, etc. they are also used as content distribution and aggregators for Google, Facebook, etc.

In health – a sector where platformisation is proliferating fast – we see fitness and health apps, with Google and Apple major players here. And in your neighbourhood there are apps available, some of these global apps localised to your neighbourhood, sitting alongside massive players.

In Education we’ve seen the rise of Massive Online Open Courses, with Microsoft and Google investing heavily alongside players like EdX, Coursera, Udacity, FutureLearn, etc.

All of the sectors are undergoing platformisation… And if you look across them all, all areas of private and public life, the activity revolves around the big five: Google, Facebook, Apple, Amazon, Microsoft, with LinkedIn and Twitter also important. And take, for example, AirBnB…

A Platform society is a society in which social, economic and interpersonal traffic is largely channelled by an (overwhelmingly corporate) global online platform ecosystem that is driven by algorithms and fuelled by data. That’s not a revolution, it’s something we are part of and see every day.

Now we have promises of “participatory culture” and the euphoria of the idea of web 2.0, and of individuals contributing. More recently that idea has shifted to the idea of the “sharing economy”… But sharing has shifted in its meaning too. It is about sharing resources or services for some sort of fee – a transaction-based idea. And from 2015 we see awareness of the negative sides of the sharing economy. So a Feb 2015 Time cover read: “Strangers crashed my car, ate my food and wore my pants. Tales from the sharing economy” – about the personal discomfort of the downsides. And we see Technology Quarterly writing about “When it’s not so good to share” – from the perspective of securing the property we share. But there is more at stake than personal discomfort…

We have started to see disruptive protest against private platforms, like posters against AirBnB. City Councils have to hire more inspectors to regulate AirBnB hosts for safety reasons – a huge debate in Amsterdam now, with public values changing as a consequence of so many AirBnB hosts in the city. And there are more protests about changing values… saying people are citizens not entrepreneurs, that the city is not for sale…

In another sector we see Uber protests, by various stakeholders. We see these from licensed taxi drivers, accusing Uber of undermining safety and social values; but also protests by drivers. Uber do not call themselves a “transportation” company, instead calling themselves a connectivity company. Now Uber drivers have complained that Uber don’t pay insurance or pensions…

So, AirBnB and Uber are changing public values, they haven’t anchored existing values in their own design and development. There are platform promises and paradoxes here… They offer personalised services whilst contributing to the public good… The idea is that they are better at providing services than existing players. They promote community and connectedness whilst bypassing cumbersome institutions – based on the idea that we can do without big government or institutions, and without those values. These platforms also emphasize public values, whilst obscuring private gain. These are promises claiming that they are in the public interest… But that’s a paradox with hidden private gains.

And so how do we anchor collective, public values in a platform society, and how do we govern this? ? has the idea of the governance of platforms as opposed to governance by platforms. Our government is mainly concerned with governing platforms – regulations, privacy etc. – and that is appropriate, but there are public values like fairness, accuracy, safety, privacy, transparency, democracy… Those values are increasingly being governed by platforms, and that governance is hidden from us in the algorithms and design decisions…

Who rules the platform society? Who are the stakeholders here? There are many platform societies of course, but who can be held accountable? Well it is an intense ideological battleground… With private stakeholders like (global) corporations, businesses, (micro-)entrepreneurs; consumer groups; consumers. And public stakeholders like citizens; co-ops and collectives, NGOs, public institutions, governments, supra-national bodies… And matching those needs up is never going to happen really…

Who uses health apps here? (many do) In 2015 there were 165,000 health apps in the Google Play store. Most of them promise personalised health and, whilst that is in the future, they track data… They take data straight from individuals to companies, bypassing other actors and health providers… They manage a wide variety of data flows (patients, doctors, companies). There is a variety of business models, many of them particularly unclear. There is a site called “Patients like me” which says that it is “not just for profit” – so it is for profit, but not just for profit… Data has become currency in our health economy. And that private gain is hiding behind the public good argument. A few months ago in Holland we started to have insurance discounts (5%) if you send FitBit scores… But I think the next step will be paying more if you do not send your scores… That’s how public values change…

Finally we have regulation – government should be regulating security, safety, accuracy, and privacy. It takes the Dutch FDA 6 months to check the safety and accuracy of one app – and if it is updated, you have to start again! In the US the Dept of Health and Human Services, Office of the National Coordinator for Health Information Technology (ONC), Office for Civil Rights (OCR) and Food and Drug Administration (FDA) released a guide called “Developing a mobile health app?” providing guidance on which federal laws need to be followed. And we see not just insurance using apps, but insurance and healthcare providers having to buy data services from providers, and that changing the impact of these apps. You have things like 23andMe, and those are global – which raises global regulation issues, so it is hard to govern around that. Our platform ecosystem is transnational, but governments are national. We also see platforms coming from technology companies – Philips was building physical kit, MRI machines, but it now models itself as a data company. What you see here is that the big five internet and technology players are also big players in this field – Google Health and 23andMe (financed by Sergey Brin, run by his ex-wife), Apple HealthKit, etc. And even then you have small independent apps like mPower, but they are distributed via the app stores, led by big players, and again hard to govern.


We used to build trust in society through institutions and institutional norms and codes, which were subject to democratic controls. But these are increasingly bypassed… And that may be subtle but it is going uncontrolled. So, how can we build trust in a platformed world? Well, we have to understand who rules the platform ecosystem, and understand how it is governed. And when you look at this globally you see competing ideological hemispheres… You see the US model of commercial values, and those are literally imposed on others. And you have Yandex and the Chinese model, and that’s an interesting contrast…

I think coming back to my main question: what do we do here to help? We can make visible how this platformised society works… So I did a presentation a few weeks ago and shared recommendations there for users:

  • Require transparency in platforms
  • Do not trade convenience for public values
  • Be vigilant, be informed

But can you expect individuals to understand how each app works and what its implications are? I think governments have a key role in protecting citizens’ rights here.

In terms of owners and developers my recommendations are:

  • Put long-term trust over short-term gain
  • Be transparent about data flows, business models, and governance structure
  • Help encode public values in platform architecture (e.g. privacy by design)

A few weeks back the New York Times ran an article on holding algorithms accountable, and I think that that is a useful idea.

I think my biggest recommendations are for governments, and they are:

  • Defend public values and common good; negotiate public interests with platforms. Governments could also, for instance, legislate to manage demands and needs in how platforms work.
  • Upgrade regulatory institutions to deal with the digital constellations we are facing.
  • Develop (inter)national blueprint for a democratic platform society.

And we, as researchers, can help expose and explain the platform society so that it is understood and engaged with in a more knowledgeable way. Governments have a special responsibility to govern the networked society – right now it is a Wild West. We are struggling to resolve these issues, so how can we help govern the platforms that shape society, when the platforms themselves are so enormous and powerful? In Europe we see platforms that are mainly US-based private sector spaces, and they are threatening public sector organisations… It is important to think about how we build trust in that platform society…

Q&A

Q1) You talked about private interests being concealed by public values, but you didn’t talk about private interests of incumbents…

A1) That is important of course. Those protests that I mentioned do raise some of those issues – undercutting prices by not paying for taxi drivers’ insurance, pensions etc. In Europe those costs can be up to 50% of costs, so what do we do with those public values, how do we pay for this? We’ll pay for it one way or the other. The incumbents do have their own vested interests… But there are also social values there… If we want to retain those values, though, we need to find a model for that… European economic models have had collective values inscribed in them… If that is outmoded, then fine, but how do we build those in in other ways…

Q2) I think in my context in Australia at least the Government is in cahoots with private companies, with public-private partnerships and security arms of government heavily benefitting from data collection and surveillance… I think that government regulating these platforms is possible, I’m not sure that they will.

A2) A lot of governments are heavily invested in private industries… I am not anti-companies or anti-government… My first goal is to make them aware of how this works… I am always surprised how little governments are aware of what runs underneath the promises and paradoxes… There is reluctance from regulators to work with companies, but there is also exhaustion and a lack of understanding about how to update regulations and processes. How can you update health regulations with 165k health apps out there? I probably am an optimist… But I want to ensure governments are aware and understand how this is transforming society. There is so much ignorance in the field, and there is naivety about how this will play out. Yes, I’m an optimist. But there is something we can do to shape the direction in which the platform society will develop.

Q3) You have great faith in regulation, but there are real challenges and issues… There are many cases where governments have colluded with industry to inflate the costs of delivery. There is the idea of regulatory capture. Why should we expect regulators to act in the public interest when historically they have acted in the interest of private companies?

A3) It’s not that I put all my trust there… But I’m looking for a dialogue with whoever is involved in this space, in the contested play of where we start… It is one of many actors in this whole contested battlefield. I don’t think we have the answers, but it is our job to explain the underlying mechanisms… And I’m pretty shocked by how little they know about the platforms and the underlying mechanisms there. Sometimes it’s hard to know where to start… But you have to make a start somewhere…

Oct 05 2016

After a few weeks of leave I’m now back and spending most of this week at the Association of Internet Researchers (AoIR) Conference 2016. I’m hugely excited to be here as the programme looks excellent with a really wide range of internet research being presented and discussed. I’ll be liveblogging throughout the week starting with today’s workshops.

This is a liveblog so all corrections, updates, links, etc. are very much welcomed – just leave me a comment, drop me an email or similar to flag them up!

I am booked into the Digital Methods in Internet Research: A Sampling Menu workshop, although I may be switching session at lunchtime to attend the Internet rules… for Higher Education workshop this afternoon.

The Digital Methods workshop is being chaired by Patrik Wikstrom (Digital Media Research Centre, Queensland University of Technology, Australia) and the speakers are:

  • Erik Borra (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Axel Bruns (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Jean Burgess (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Carolin Gerlitz (University of Siegen, Germany),
  • Anne Helmond (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Ariadna Matamoros Fernandez (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Peta Mitchell (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Richard Rogers (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Fernando N. van der Vlist (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Esther Weltevrede (Digital Methods Initiative, University of Amsterdam, the Netherlands).

I’ll be taking notes throughout but the session materials are also available here: http://tinyurl.com/aoir2016-digmethods/.

Patrik: We are in for a long and exciting day! I won’t introduce all the speakers as we won’t have time!

Conceptual Introduction: Situating Digital Methods (Richard Rogers)

My name is Richard Rogers, I’m professor of new media and digital culture at the University of Amsterdam and I have the pleasure of introducing today’s session. So I’m going to do two things, I’ll be situating digital methods in internet-related research, and then taking you through some digital methods.

I would like to situate digital methods as a third era of internet research… I think all of these eras thrive and overlap but they are differentiated.

  1. Web of Cyberspace (1994-2000): Cyberstudies was an effort to see difference in the internet, the virtual as distinct from the real. I’d situate this largely in the 90’s and the work of Steve Jones and Steve (?).
  2. Web as Virtual Society? (2000-2007) saw the virtual as part of the real. Offline as baseline and “virtual methods”, with work around the digital economy, the digital divide…
  3. Web as societal data (2007-) is about “virtual as indication of the real”. Online as baseline.

Right now we use online data about society and culture to make “grounded” claims.

So, if we look at Allrecipes.com Thanksgiving recipe searches on a map we get some idea of regional preference, or we look at Google data in more depth, we get this idea of internet data as grounding for understanding culture, society, tastes.

So, we had this turn in around 2008 to “web as data” as a concept. When this idea was first introduced not all were comfortable with the concept. Mike Thelwall et al (2005) talked about the importance of grounding the data from the internet. So, for instance, Google’s flu trends can be compared to Wikipedia traffic etc. And with these trends we also get the idea of “the internet knows first”, with the web predicting other sources of data.

Now I do want to talk about digital methods in the context of digital humanities data and methods. Lev Manovich talks about Cultural Analytics. It is concerned with digitised cultural materials, with materials clusterable in a sort of art historical way – by hue, style, etc. And so this is a sort of big data approach that substitutes “continuous change” for periodisation and categorisation for continuation. So, this approach can, for instance, be applied to Instagram (Selfiexploration), looking at mood, aesthetics, etc. And then we have Culturomics, mainly through the Google Ngram Viewer. A lot of linguists use this to understand subtle differences as part of distant reading of large corpora.

And I also want to talk about e-social sciences data and method. Here we have Webometrics (Thelwall et al) with links as reputational markers. The other tradition here is Altmetrics (Priem et al), which uses online data to do citation analysis, with social media data.

So, at least initially, the idea behind digital methods was to be in a different space. The study of online digital objects, and also natively online method – methods developed for the medium. And natively digital is meant in a computing sense here. In computing software has a native mode when it is written for a specific processor, so these are methods specifically created for the digital medium. We also have digitized methods, those which have been imported and migrated methods adapted slightly to the online.

Generally speaking there is a sort of protocol for digital methods: Which objects and data are available? (links, tags, timestamps); how do dominant devices handle them? etc.

I will talk about some methods here:

1. Hyperlink

For the hyperlink analysis there are several methods. The Issue Crawler software, still running and working, enables you to see links between pages, direction of linking, aspirational linking… For example a visualisation of an Armenian NGO shows the dynamics of an issue network, showing the politics of association.

The other method that can be used here takes a list of sensitive sites, using Issue Crawler, then parses it through an internet censorship service. And there are variations on this that indicate how successful attempts at internet censorship are. We do work on Iran and China, and I should say that we are always quite thoughtful about how we publish these results because of their sensitivity.

2. The website as archived object

We have the Internet Archive and we have individual archived web sites. Both are useful but researcher use is not terribly significant, so we have been doing work on this. See also a YouTube video called “Google and the politics of tabs” – a technique to create a movie of the evolution of a webpage in the style of timelapse photography. I will be publishing soon about this technique.

But we have also been looking at historical hyperlink analysis – giving you that context that you won’t see represented in archives directly. This shows the connections between sites at a previous point in time. We also discovered that the “Ghostery” plugin can also be used with archived websites – for trackers and for code. So you can see the evolution and use of trackers on any website/set of websites.

6. Wikipedia as cultural reference

Note: the numbering is from a headline list of 10, hence the odd numbering… 

We have been looking at the evolution of Wikipedia pages, understanding how they change. It seems that pages shift from neutral to national points of view… So we looked at Srebrenica and how that is represented. The pages here have different names, indicating differences in the politics of memory and reconciliation. We have developed a triangulation tool that grabs links and references and compares them across different pages. We also developed comparative image analysis that lets you see which images are shared across articles.

7. Facebook and other social networking sites

Facebook is, as you probably well know, a social media platform that is relatively difficult to pin down at a moment in time. Trying to pin down the history of Facebook is very hard – it hasn’t been in the Internet Archive for four years, and the site changes all the time. We have developed two approaches: one for social media profiles and interest data as a means of studying cultural taste and political preference, or “Postdemographics”; and “Networked content analysis”, which uses social media activity data as a means of studying “most engaged with content” – that helps with the fact that profiles are no longer available via the API. To some extent the API drives the research, but then taking a digital methods approach we need to work with the medium, and find which possibilities are there for research.

So, one of the projects undertaken in this space was elFriendo, a MySpace-based project which looked at the cultural tastes of “friends” of Obama and McCain during their presidential race. For instance Obama’s friends best liked Lost and The Daily Show on TV; McCain’s liked Desperate Housewives, America’s Next Top Model, etc. Very different cultures and interests.

Now the Networked Content Analysis approach, where you quantify and then analyse, works well with Facebook. You can look at pages and use data from the API to understand the pages and groups that liked each other, to compare memberships of groups etc. (at the time you were able to do this). In this process you could see specific administrator names, and we did this with right wing data working with a group called Hope not Hate, who recognised many of the names that emerged here. Looking at most liked content from groups you also see the shared values, cultural issues, etc.

So, you could see two areas of Facebook Studies: Facebook I (2006-2011), about presentation of self – profiles and interests studies (with ethics); and Facebook II (2011-), which is more about social movements. I think many social media platforms are following this shift – or would like to. So in Instagram Studies, Instagram I (2010-2014) was about selfie culture, but has shifted to Instagram II (2014-), concerned with antagonistic hashtag use for instance.

Twitter has done this and gone further… Twitter I (2006-2009) was an urban lifestyle tool (its origins) with “banal” lunch tweets – their own tagline of “what are you doing?”, a connectivist space; Twitter II (2009-2012) moved to elections, disasters and revolutions – the tagline is “what’s happening?” and we have metrics and “trending topics”; Twitter III (2012-) sees Twitter as a generic resource tool, with commodification of data, stock market predictions, elections, etc.

So, I want to finish by talking about work on Twitter as a storytelling machine for remote event analysis. This is an approach we developed some years ago around the Iran election crisis. We made a tweet collection around a single Twitter hashtag – which is no longer done – and then ordered it by most retweeted (top 3 for each day) and presented it in chronological (not reverse) order. And we then showed those in huge displays around the world…
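That ordering step (top few retweets per day, then everything in chronological order) can be sketched like this; the field names are hypothetical, standing in for whatever a given collection stores:

```python
from collections import defaultdict

def top_retweeted_per_day(tweets, per_day=3):
    """Group tweets by day, keep the most-retweeted few from each day,
    and return them in chronological (not reverse) order."""
    by_day = defaultdict(list)
    for t in tweets:
        by_day[t["date"]].append(t)
    story = []
    for day in sorted(by_day):  # chronological order of days
        top = sorted(by_day[day], key=lambda t: t["retweets"], reverse=True)
        story.extend(top[:per_day])
    return story

sample = [
    {"date": "2009-06-13", "text": "SMS is down", "retweets": 120},
    {"date": "2009-06-13", "text": "turnout 80%", "retweets": 300},
    {"date": "2009-06-14", "text": "press conference", "retweets": 90},
]
print([t["text"] for t in top_retweeted_per_day(sample, per_day=1)])
```

With one tweet per day this yields the succinct day-by-day summary described above.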

To take you back to June 2009… Mousavi holds an emergency press conference. Voter turnout is 80%. SMS is down. Mousavi’s website and Facebook are blocked. Police use pepper spray… The first 20 days of most popular tweets are a good succinct summary of the events.

So, I’ve taken you on a whistle stop tour of methods. I don’t know if we are coming to the end of this. I was having a conversation the other day that the Web 2.0 days are over really, the idea that the web is readily accessible, that APIs and data is there to be scraped… That’s really changing. This is one of the reasons the app space is so hard to research. We are moving again to user studies to an extent. What the Chinese researchers are doing involves convoluted processes to getting the data for instance. But there are so many areas of research that can still be done. Issue Crawler is still out there and other tools are available at tools.digitalmethods.net.

Twitter studies with DMI-TCAT (Fernando van der Vlist and Emile den Tex)

Fernando: I’m going to be talking about how we can use the DMI-TCAT tool to do Twitter Studies. I am here with Emile den Tex, one of the original developers of this tool, alongside Eric Borra.

So, what is DMI-TCAT? It is the Digital Methods Initiative Twitter Capture and Analysis Toolset, a server-side tool which aims at robust and reproducible data capture and analysis. The design is based on two ideas: that captured datasets can be refined in different ways; and that the datasets can be analysed in different ways. Although we developed this tool, it is also in use elsewhere, particularly in the US and Australia.

So, how do we actually capture Twitter data? Some of you will have some experience of trying to do this. As researchers we don’t just want the data, we also want to look at the platform in itself. If you are in industry you get Twitter data through a “data partner”, the biggest of which by far is GNIP – owned by Twitter as of the last two years – then you just pay for it. But it is pricey. If you are a researcher you can go to an academic data partner – DiscoverText or Hexagon – and they are also resellers but they are less costly. And then the third route is the publicly available data – REST APIs, Search API, Streaming APIs. These are, to an extent, the authentic user perspective as most people use these… We have built around these but the available data and APIs shape and constrain the design and the data.

For instance the “Search API” prioritises “relevance” over “completeness” – but as academics we don’t know how “relevance” is being defined here. If you want to do representative research then completeness may be most important. If you want to look at how Twitter prioritises the data, then that Search API may be most relevant. You also have to understand rate limits… This can constrain research, as different data has different rate limits.
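Handling those rate limits in practice means reading the rate-limit headers the REST API returns with each response. The header names below (x-rate-limit-remaining, x-rate-limit-reset) are the ones Twitter's v1.1 API actually sends; the helper itself is an illustrative sketch, not TCAT's code:

```python
import time

def seconds_to_wait(headers, now=None):
    """Given Twitter API response headers, return how long to sleep before
    the next request: 0 if calls remain in the current window, otherwise
    the number of seconds until the window resets."""
    now = time.time() if now is None else now
    remaining = int(headers.get("x-rate-limit-remaining", 1))
    reset_at = int(headers.get("x-rate-limit-reset", 0))  # epoch seconds
    if remaining > 0:
        return 0
    return max(0, reset_at - now)

# e.g. an exhausted window that resets 60 seconds from "now"
print(seconds_to_wait({"x-rate-limit-remaining": "0",
                       "x-rate-limit-reset": "1060"}, now=1000))
```

Different endpoints have different windows, which is exactly why different data has different effective collection speeds.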

So there are many layers of technical mediation here, across three big actors: Twitter platform – and the APIs and technical data interfaces; DMI-TCAT (extraction); Output types. And those APIs and technical data interfaces are significant mediators here, and important to understand their implications in our work as researchers.

So, onto the DMI-TCAT tool itself – more on this in Borra & Rieder (2014) (doi:10.1108/AJIM-09-2013-0094). They talk about “programmed method” and the idea of the methodological implications of the technical architecture.

What can one learn if one looks at Twitter through this “programmed method”? Well: (1) Twitter users can change their Twitter handle, but their ids will remain identical – sounds basic but it’s important to understand when collecting data. (2) The length of a Tweet may vary beyond the maximum of 140 characters (mentions and urls). (3) Native retweets may have their top level text property shortened. (4) Unexpected limitations: support for new emoji characters can be problematic. (5) It is possible to retrieve a deleted tweet.

So, for example, a tweet can vary beyond 140 characters. The Retweet of an original post may be abbreviated… Now we don’t want that, we want it to look as it would to a user. So, we capture it in our tool in the non-truncated version.
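That reconstruction can be sketched from the tweet payload itself: a native retweet carries the complete original under retweeted_status (a real field in Twitter's tweet format), so the full text can be rebuilt rather than trusting the truncated top-level text. The minimal dict layout here is simplified for illustration:

```python
def full_text(tweet):
    """Return the untruncated text of a tweet. Native retweets carry the
    complete original under 'retweeted_status', so rebuild "RT @user: ..."
    from that instead of using the possibly-truncated top-level text."""
    rt = tweet.get("retweeted_status")
    if rt:
        return "RT @{}: {}".format(rt["user"]["screen_name"], rt["text"])
    return tweet["text"]

truncated = {
    "text": "RT @someone: a very long tweet that got cut off at the p…",
    "retweeted_status": {
        "user": {"screen_name": "someone"},
        "text": "a very long tweet that got cut off at the platform level",
    },
}
print(full_text(truncated))
```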

And, on the issue of deletion and withholding: there are tweets deleted by users, and there are tweets which are withheld by the platform – and the withholding is a country by country issue, so you can see tweets only available in some countries. A project that uses this information is “Politwoops” (http://politwoops.sunlightfoundation.com/), which captures tweets deleted by US politicians and lets you filter to specific states, party, position. Now there is an ethical discussion to be had here… We don’t know why tweets are deleted… We could at least talk about it.

So, the tool captures Twitter data in two ways. Firstly there are the direct capture capabilities (via the web front-end), which allow tracking of users and capture of public tweets posted by these users; tracking of particular terms or keywords, including hashtags; and getting a small random sample (approx 1%) of all public statuses. Secondary capture capabilities (via scripts) allow further exploration, including user ids, deleted tweets etc.

Twitter as a platform has a very formalised idea of sociality, the types of connections, parameters, etc. When we use the term “user” we mean it in the platform defined object meaning of the word.

Secondary analytical capabilities, via script, also allows further work:

  1. support for geographical polygons to delineate geographical regions for tracking particular terms or keywords, including hashtags.
  2. Built-in URL expander, following shortened URLs to their destination. Allowing further analysis, including of which statuses are pointing to the same URLs.
  3. Download media (e.g. videos and images) attached to particular Tweets.

So, we have this tool but what sort of studies might we do with Twitter? Some ideas to get you thinking:

  1. Hashtag analysis – users, devices etc. Why? They are often embedded in social issues.
  2. Mentions analysis – users mentioned in contexts, associations, etc. allowing you to e.g. identify expertise.
  3. Retweet analysis – most retweeted per day.
  4. URL analysis – the content that is most referenced.
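The first two ideas boil down to frequency counting across the captured set. A minimal sketch (the record layout is hypothetical; TCAT actually stores these as database tables):

```python
from collections import Counter

def frequencies(tweets, field):
    """Count occurrences of hashtags or mentions across a set of tweets."""
    counts = Counter()
    for t in tweets:
        counts.update(t.get(field, []))
    return counts

tweets = [
    {"hashtags": ["aoir2016", "aoir16"], "mentions": ["AoIR_org"]},
    {"hashtags": ["aoir2016"], "mentions": []},
]
print(frequencies(tweets, "hashtags").most_common(1))
```

The same pattern extends to retweets per day or referenced URLs by changing what gets counted.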

So Emile will now go through the tool and how you’d use it in this way…

Emile: I’m going to walk through some main features of the DMI TCAT tool. We are going to use a demo site (http://tcatdemo.emiledentex.nl/analysis/) and look at some Trump tweets…

Note: I won’t blog everything here as it is a walkthrough, but we are playing with timestamps (the tool uses UTC), search terms etc. We are exploring hashtag frequency… In that list you can see Benghazi, tpp, etc. Now, once you see a common hashtag, you can go back and query the dataset again for that hashtag/search terms… And you can filter down… And look at “identical tweets” to find the most retweeted content. 

Emile: Eric called this a list-making tool – it sounds dull but it is so useful… And you can then put the data through other tools. You can put tweets into Gephi. Or you can do further exploration… We looked at the Getty Parks project: we scraped images, reverse Google-image-searched them to find the originals, checked the metadata for the camera used, and investigated whether the cost of a camera was related to the success in distributing an image…

Richard: It was a critique of user generated content.

Analysing Social Media Data with TCAT and Tableau (Axel Bruns)

My talk should be a good follow on from the previous presentation as I’ll be looking at what you can do with TCAT data outside and beyond the tool. Before I start I should say that both Amsterdam and QUT are holding summer schools – and we have different summers! – so do have a look at those.

You’ve already heard about TCAT so I won’t talk more about that except to talk about the parts of TCAT I have been using.

TCAT Data Export allows you to export all tweets from a selection – containing all of the tweets and information about them. You can also export a table of hashtags – tweet ids from your selection and hashtags; and mentions – tweet ids from your selection with mentions and mention type. You can export other things as well – known users (politicians, celebrities, etc.); URLs; etc. And the structure that emerges is the main TCAT export file (“full export”) plus the associated hashtags, mentions and any other additional data. If you are familiar with SQL you are essentially joining tables here. If not then that’s fine, Tableau does this for you.

In terms of processing the data there are a number of tools here. Excel just isn’t good enough at scale – it is limited to around a million rows, and that Trump dataset was 2.8M already. So a tool that I and many others have been working with is Tableau. It copes with scale and is a user-friendly, intuitive, all-purpose data analytics tool, but the downside is that it is not free (unless you are a student or are using it in teaching). Alongside that, for network visualisation, Gephi is the main tool at the moment. That is open source and free, and a new version came out in December.

So, into Tableau and an idea of what we can do with the data… Tableau enables you to work with data sources of any form: databases, spreadsheets, etc. So I have connected the full export I’ve taken from TCAT… I have linked the main file to the hashtag and mention files. Then I have also generated an additional file that expands the URLs in that data source (you can now do this in TCAT too). This is a left join – one main table that the other tables are connected to. I’ve connected based on (tweet) id. And the dataset I’m showing here is from the Paris 2015 UN Climate Change conference (COP21). And all the steps I’m going through today are in a PDF guidebook that is available at that session resources link (http://tinyurl.com/aoir2016-digmethods/).

Tableau then tries to make sense of the data… Dimensions are the datasets which have been brought in; clicking on those reveals columns in the data; and then you see Measures – countable features in the data. Tableau makes sense of the file itself, although it won’t always guess correctly.

Now, we’ve joined the data here, so that can mean we get repetition… If a tweet has 6 hashtags, it might seem to be 6 tweets. So I’m going to use the unique tweet ids as a measure. And I’ll also right-click to ensure this is a distinct count.
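The repetition problem described here can be illustrated with a toy left join in Python (the data is invented for illustration): after joining, a tweet appears once per hashtag, so naive row counts overstate tweet volume, and counting distinct tweet ids restores the true figure.

```python
# Toy data: one tweet with three hashtags, one tweet with none.
tweets = [{"id": 1, "text": "..."}, {"id": 2, "text": "..."}]
hashtags = [
    {"tweet_id": 1, "hashtag": "cop21"},
    {"tweet_id": 1, "hashtag": "paris"},
    {"tweet_id": 1, "hashtag": "climate"},
]

# Left join: every tweet is kept, matched against zero or more hashtag rows
joined = []
for t in tweets:
    matches = [h for h in hashtags if h["tweet_id"] == t["id"]] or [None]
    for h in matches:
        joined.append({"id": t["id"], "hashtag": h["hashtag"] if h else None})

print(len(joined))                          # 4 rows after the join
print(len({row["id"] for row in joined}))   # but only 2 distinct tweets
```

This is exactly what Tableau's distinct count (COUNTD) on the tweet id does for you.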

Having done that I can begin to visualise my data and see a count of tweets in my dataset… And I can see when they were created – using Created at, but then finessing that to Hour (rather than the default of Year). Now when I look at that dataset I see a peak at 10pm… That seems unlikely… And it’s because TCAT is running on Brisbane time, so I need to shift to CET as these tweets were concerned with events in Paris. So I create a new calculated field called CET, and I’ll set it to “DATEADD('hour', -9, [Created at])” – which simply takes 9 hours off the time to bring it to the correct timezone. Having done that the spike is at 3.40pm, and that makes a lot more sense!
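The same timezone adjustment the Tableau formula performs can be sketched in Python; the timestamp below is invented to illustrate the shift from Brisbane time (UTC+10) back to CET (UTC+1).

```python
# Shift a Brisbane-local timestamp back 9 hours to CET,
# mirroring DATEADD('hour', -9, [Created at]) in Tableau.
from datetime import datetime, timedelta

def to_cet(created_at):
    """Brisbane (UTC+10) minus 9 hours gives CET (UTC+1)."""
    return created_at - timedelta(hours=9)

created = datetime(2015, 12, 12, 22, 0)   # an apparent late-evening peak
print(to_cet(created))                    # 2015-12-12 13:00:00
```

For production analysis you would use proper timezone-aware datetimes rather than a fixed offset, since daylight saving can change the gap.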

Having generated that graph I can click on, say, the peak activity and see the number of tweets and the tweets that appeared. You can see some spam there – of course – but also widely retweeted tweet from the White House, tweets showing that Twitter has created a new emoji for the summit, a tweet from the Space Station. This gives you a first quick visual inspection of what is taking place… And you can also identify moments to drill down to in further depth.

I might want to compare Twitter activity with number of participating users, comparing the unique number of counts (synchronising axes for scale). Doing that we do see that there are more tweets when more users are active… But there is also a spike that is independent of that. And that spike seems to be generated by Twitter users tweeting more – around something significant perhaps – that triggers attention and activity.

So, this tool enables quantitative data analysis as a starting point or related route into qualitative analysis, the approaches are really inter-related. Quickly assessing this data enables more investigation and exploration.

Now I’m going to look at hashtags, seeing their volume against activity. By default the hashtags are ordered alphabetically, but that isn’t very useful, so I’m going to reorder by use. When I do that you can see that COP21 – the official hashtag – is by far the most popular. These tweets were gathered from that hashtag but also from several search terms for the conference – official abbreviations for the event. And indeed some tweets have “Null” hashtags – no hashtags, just the search terms. You also see variance in spelling and capitalisation. Unlike Twitter, Tableau is case-sensitive, so I need some sort of formula to resolve this, combining variants into one hashtag. A quick way to do that is “LOWER([Hashtag])”, which converts all data in the hashtag field to lower case. That clustering shows COP21 as an even bigger hashtag, but also identifies other popular terms. We do see spikes in a given hashtag – often very brief – and these often arise when one very popular and heavily retweeted tweet has emerged. So, e.g. a prominent figure has tweeted – in this data set Cara Delevingne (a British model) triggers a short sharp spike in tweets/retweets.
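The lower-casing trick is easy to see in miniature (hashtags here are invented): case variants collapse into one tag before counting, just as LOWER() does in Tableau.

```python
# Normalise case-variant hashtags before counting them.
from collections import Counter

hashtags = ["COP21", "cop21", "Cop21", "ParisAgreement", "parisagreement"]
counts = Counter(tag.lower() for tag in hashtags)
print(counts.most_common(2))
# [('cop21', 3), ('parisagreement', 2)]
```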

And we can see these hashtags here, their relative popularity. But remember that my dataset is just based on what I asked TCAT to collect… TCOT might be a really big hashtag but maybe they don’t usually mention my search terms, hence being smaller in my data set. So, don’t be fooled into assuming some of the hashtags are small/low use just because they may not be prominent in a collected dataset.

Turning now to Mentions… We can see several mention types: original/null (no mentions); mentions; retweets. You also see that mentions and retweets spike at particular moments – tweets going viral, key figures getting involved in the event or the tweeting; it all gives you a sense of the choreography of the event…

So, we can now look at who is being mentioned. I’m going to take all Twitter users in my dataset… I’ll see how many tweets mention them. I have a huge Null group here – no mentions – so I’ll start by removing that. Among the most mentioned accounts we see COP21 being the biggest, and others such as Narendra Modi (chair of event?), POTUS, UNFCCC, Francois Hollande, the UN, Mashi Rafael, COP21en – the English-language event account; EPN (Enrique Peña Nieto); StationCDRKelly; C Figueres; India4Climate; Barack Obama’s personal account, etc. And I can also see what kind of mention they get. And you see that POTUS gets mentions but no retweets, whilst Barack Obama has a few retweets but mainly mentions. That doesn’t mean he doesn’t get retweets, just not in this dataset/search terms. By contrast Station Commander Kelly gets almost exclusively retweets… The balance of mentions, how people are mentioned, what gets retweeted, etc… That is all a starting point for closer reading and qualitative analysis.

And now I want to look at who tweets the most… And you’ll see that there is very little overlap between the people who tweet the most and the people who are mentioned and retweeted. The one account that appears in both is COP21 – the event itself. Now some of the most active users are spammers and bots… But others will be obsessive, super-active users… Further analysis lets you dig further. Having looked at this list, I can look at what sort of tweets these users are sending… And that may look a bit different… This uses the Mention type, and one tweet may mention multiple users, so it gets counted multiple times… So, for instance, DiploMix puts out 372 tweets… But when re-examined for mentions and retweets we see a count of 636. That’s an issue you have to get your head around a bit… And the same issue occurs with hashtags. Looking at the types of tweets put out shows some who post only or mainly original tweets, some who mention others, and some who only or mainly retweet – perhaps bots or automated accounts. For instance DiploMix retweets diplomats and politicians. RelaxinParis is a bot retweeting everything on Paris – not useful for analysis, but part of the lived experience of Twitter of course.

So, I have lots of views of data, and sheets saved here. You can export tables and graphs for publications too, which is very helpful.

I’m going to finish by looking at URLs mentioned… I’ve expanded these myself, and I’ve captured the domain/path as well as the domain. I remove the NULL group here. And the most popular linked-to domain is Twitter – I’m going to combine the http and https versions in Tableau – but YouTube, the UN, the Leader of Iran, etc. are most popular. If I dig further into the Twitter domains, looking at Path, I can see whose accounts/profiles etc. are most linked to. If I dig into Station Commander Kelly you see that the most shared of these URLs are images… And we can look at that… And that’s a tweet we had already seen all day – a very widely shared image of a view of earth.

My time is up but I’m hoping this has been useful… This is the sort of approach I would take – exploring the data, using this as an entry point for more qualitative data analysis.

Analysing Network Dynamics with Agent Based Models (Patrik Wikström)

I will be talking about network dynamics and how we can understand some of the theory of network dynamics. And before I start a reminder that you can access and download all these materials at the URL for the session.

So, what are network dynamics? Well, we’ve already seen graphs and visualisations of things that change over time. Network dynamics are very much about things that change and develop over time… So when we look at a corpus of tweets they are not all simultaneous; there is a dimension of time… And we have people responding to each other, to what they see around them, etc. So, how can we understand what goes on? We are interested in human behaviour, social behaviour, the emergence of norms and institutions, information diffusion patterns across multiple networks, etc. And these are complex and related to time; we have to take time into account. We also have to understand how macro-level patterns emerge from local interactions between heterogeneous agents, and how macro-level patterns influence and impact upon those interactions. But this is hard…

It is difficult to capture complexity of such dynamic phenomena with verbal or conceptual models (or with static statistical models). And we can be seduced by big data. So I will be talking about using particular models, agent-based models. But what is that? Well it’s essentially a computer program, or a computer program for each agent… That allows it to be heterogeneous, autonomous and to interact with the environment and with other agents; that means they can interact in a (physical) space or as nodes in a network; and we can allow them to have (limited) perception, memory and cognition, etc. That’s something it is very hard for us to do and imagine with our own human brains when we look at large data sets.

The fundamental goal is to develop a model that represents theoretical constructs, logics and assumptions, and that can replicate the observed real-world behaviour. This is the same kind of approach we use in most of our work.

So, a simple example…

Let’s assume that we start with some inductive idea. So we want to explain the emergence of the different social media network structures we observe. We might want some macro-level observations of Structure – clusters, path lengths, degree distributions, size; Time – growth, decline, cyclic; Behaviours – contagion, diffusion. So we want to build some kind of model to transfer or take our assumptions of what is going on, and translate that into a computer model…

So, what are our assumptions?

Well, let’s say we think people use different strategies when they decide which accounts to follow, with factors such as familiarity, similarity, activity, popularity, randomness… They may all be different explanations of why I connect with one person rather than another… And let’s also assume that when a user joins Twitter they immediately start following a set of accounts, and once part of the network they add more. And let’s also assume that people are different – that’s really important! People are interested in different things – they have different passions, topics that interest them; some are more active, some are more passive. And that’s something we want to capture.

So, to do this I’m going to use something called NetLogo – which some of you may have already played with – a tool developed some years back at Northwestern University. You can download it (or use a limited browser-based version) from: http://ccl.northwestern.edu/netlogo/.

In NetLogo we start with a 3 node network… I initialise the network and get three new nodes. Then I can add a new node… In this model I have a slider for “randomness” – if I set it to less random, it picks existing popular nodes, in the middle it combines popularity with randomness, and at most random it just adds nodes randomly…

So, I can run a simulation with about 200 nodes with randomness set to maximum… You can see how many nodes are present, how many friends the most popular node has, and how many nodes have very few friends (3 being the minimum number of connections in this model). If I now change the formation strategy to set randomness to zero… then we see the nodes connecting back to the same most popular nodes… A more broadcast-like network. This is a totally different kind of network.
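The formation strategies being demonstrated can be sketched in plain Python rather than NetLogo. This is a minimal illustration under my own assumptions, not the speaker's actual model: each new node makes 3 connections, chosen either uniformly at random (randomness = 1) or proportionally to existing degree, i.e. by popularity (randomness = 0); function and parameter names are invented.

```python
# Grow a network node by node, mixing random and preferential attachment.
import random
from collections import defaultdict

def grow_network(n_nodes, randomness, k=3, seed=42):
    """Each new node makes k connections; `randomness` in [0, 1] sets the
    probability of attaching uniformly at random vs. by popularity."""
    rng = random.Random(seed)
    degree = defaultdict(int)
    nodes = [0, 1, 2]
    for a, b in [(0, 1), (1, 2), (0, 2)]:          # seed triangle
        degree[a] += 1
        degree[b] += 1
    for new in range(3, n_nodes):
        targets = set()
        while len(targets) < k:
            if rng.random() < randomness:
                targets.add(rng.choice(nodes))      # uniform attachment
            else:
                # preferential attachment: weight choices by current degree
                weights = [degree[n] for n in nodes]
                targets.add(rng.choices(nodes, weights=weights)[0])
        for t in targets:
            degree[new] += 1
            degree[t] += 1
        nodes.append(new)
    return degree

deg_pref = grow_network(200, randomness=0.0)   # broadcast-like: hubs dominate
deg_rand = grow_network(200, randomness=1.0)   # flatter degree distribution
print(max(deg_pref.values()), max(deg_rand.values()))
```

Comparing the maximum degrees shows the point made above: the same simple rule, with one slider changed, produces structurally different networks.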

Now, another simulation here toggles the size of nodes to represent number of followers… Larger blobs represent really popular nodes… So if I run this in random mode again, you’ll see it looks very different…

So, why am I showing you this? Well, I wanted to show a really simple model. This is maybe 50 lines of code – you could build it in a few hours. The first message is that it is easy to build this kind of model. And even though it is a simple model we have at least 200 agents… We normally work with thousands or at much greater scale, but you can still learn something here. You can see how to replicate the structure of a network. Maybe it is a starting point that requires more data to be added, but it is a place to start and explore. Even with a simple model you can build theory, guide data collection and so forth.

So, having developed a model you can set up a simulation to run hundreds of times, to analyse with your data analytics tools… So I’ve run my 200-node network through 5000 simulations, comparing randomness and the maximum links to a node – helping us understand that different formation strategies create different structures. And that’s interesting but it doesn’t take us all the way. So I’d like to show you a different model that takes this a little bit further…

This model is an extension of the previous one – with all the previous assumptions – so you have the two formation strategies, but also the other assumptions we were talking about: that I am more inclined to connect to accounts with shared interests. With that we generate a simulation which is perhaps a better representation of the kinds of network we might see. And this accommodates the idea that the network has content, sharing, and other aspects that inform what is going on in its formation. This visualisation looks pretty, but the useful part is the output you can get at an aggregate level… We are looking at the population level, seeing how interactions at local levels influence macro-level patterns and behaviours… We can look at in-degree distribution, we can look at out-degree… We can look at local clustering coefficients, longest/shortest path, etc. And my assumptions might be plausible and reasonable…

So you can build models that give a much deeper understanding of real world dynamics… We are building an artificial network BUT you can combine this with real world data – load a real world network structure into the model and look at diffusion within that network, and understand what happens when one node posts something, what impact would that have, what information diffusion would that have…

So I’ve shown you NetLogo to play with these models. If you want to play around, that’s a great first step. It’s easy to get started with and it has been developed for use in educational settings. There is a big community and lots of models to use. And if you download NetLogo you can download that library of models. Pretty soon, however, I think you’ll find it too limited. There are many other tools you can use… But in general you can use any programming language that you want… Repast and Mason are very common tools. And they are based on Java or C++. You can also use an ABM Python module.

In the folder for this session there are some papers that give a good introduction to agent-based modelling… If we think about agent-based modelling and network theory there are some books I would recommend: Namatame & Chen, Agent-Based Modelling and Network Dynamics. For ABM look at Miller & Scott; Gilbert & Troitzsch; Epstein. For network theory look at Jackson; Watts (& Strogatz); Barabasi.

So, three things:

Simplify! – You don’t need millions of agents. A simple model can be more powerful than a realistic one

Iterate! – Start simple and, as needed, build up complexity, add more features, but only if necessary.

Validate? – You can build models in a speculative way to guide research, to inform data collection… You don’t always have to validate that model as it may be a tool for your thinking. But validation is important if you want to be able to replicate and ensure relevance in the real world.

We started talking about data collection, analysis, and how we build theory based on the data we collect. After lunch we will continue with Carolin, Anne and Fernando on Tracking the Trackers. At the end of the day we’ll have a full panel Q&A for any questions.

And we are back after lunch and a little exposure to the Berlin rain!

Tracking the Trackers (Anne Helmond, Carolin Gerlitz, Esther Weltevrede and Fernando van der Vlist)

Carolin: Having talked about tracking users and behaviours this morning, we are going to talk about studying the media themselves, and of tracking the trackers across these platforms. So what are we tracking? Berry (2011) says:

“For every explicit action of a user, there are probably 100+ implicit data points from usage; whether that is a page visit, a scroll etc.”

Whenever a user takes an action on the web, a series of tracking features are enabled: things like cookies, widgets, advertising trackers, analytics, beacons, etc. Cookies are small pieces of text placed on the user’s computer indicating that they have visited a site before. These are first-party trackers and can be accessed by the platforms and webmasters. There are now many third-party trackers, such as Facebook, Twitter and Google, and many websites now place third-party cookies on the devices of users. And there are widgets that enable this functionality with third-party trackers – e.g. Disqus.

So we have first party tracker files – text files that remember, e.g. what you put in a shopping cart; third party tracker files used by marketers and data-gathering companies to track your actions across the web; you have beacons; and you have flash cookies.

The purpose of tracking varies, from functionality that is useful (e.g. the shopping basket example) to the increasingly prevalent use in profiling users and behaviours. The increasing use of trackers has resulted in them becoming more visible. There is lots of research looking at the prevalence of tracking across the web, from the Continuum project to the Guardian’s Tracking the Trackers project. One of the most famous plugins that allows you to see the trackers in your own browser is Ghostery – a browser plugin that you can install which immediately detects different kinds of trackers, widgets, cookies and analytics tracking on the sites that you browse to… It shows these in a pop-up. It allows you to see the trackers and to block them, or selectively block them. You may want to block selectively, as whole parts of websites disappear when you switch off trackers.

Ghostery detects via a tracker library of code snippets (regular expressions). It currently detects around 2,295 trackers – across many different varieties. The tool is not uncontroversial: it started as an NGO project but was bought by the analytics company Evidon in 2010, which uses the data for marketing and advertising.

So, we thought that if we, as researchers, want to look at trackers and there are existing tools, let’s repurpose those tools. So we did that, creating a Tracker Tracker tool based on Ghostery. It takes up a logic of Digital Methods, working with lists of websites. The Tracker Tracker tool was created by the Digital Methods Initiative (2012). It allows us to detect which trackers are present on lists of websites and to create a network view. We are “repurposing analytical capabilities”. So, what sort of project can we use this with?
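The detection logic being repurposed here can be shown in miniature: a library of regular expressions matched against a page's source. The patterns below are illustrative stand-ins I have written for this sketch, not Ghostery's actual signatures.

```python
# Ghostery-style detection in miniature: match known tracker signatures
# (regular expressions) against a page's HTML/script URLs.
import re

TRACKER_PATTERNS = {
    "Google Analytics": r"google-analytics\.com/(analytics|ga)\.js",
    "Facebook Connect": r"connect\.facebook\.net",
    "DoubleClick":      r"doubleclick\.net",
}

def detect_trackers(html):
    """Return the names of trackers whose signature appears in the page."""
    return sorted(name for name, pat in TRACKER_PATTERNS.items()
                  if re.search(pat, html))

page = '<script src="https://connect.facebook.net/en_US/sdk.js"></script>'
print(detect_trackers(page))
# ['Facebook Connect']
```

Running this over a list of sites, and recording which site matched which tracker, gives exactly the site-tracker edge list that the Tracker Tracker tool turns into a Gephi network.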

One of our first projects was on the Like Economy. Our starting point was the fact that social media widgets place cookies (Gerlitz and Helmond 2013), and we wanted to see where they are present. These cookies track both platform users and website users. We wanted to see how pervasive these cookies were on the web, and on the most used sites on the web.

We started by using Alexa to identify a collection of the 1000 most-visited websites. We input that list into the Tracker Tracker tool (it’s only one button so options are limited!). Then we visualised the results with Gephi. And what did we get? Well, in 2012 only 18% of top websites had Facebook trackers – if we did it again today it would probably be different. This data may be connected to personal user profiles – when a user has previously logged in and has a profile – but it is also being collected for non-users of Facebook: anonymous profiles are created, and if those users subsequently join Facebook that tracking data can be fed into their account/profile.

Since we did this work we have used this method on other projects. Now I’ll hand over to Anne to do a methods walkthrough.

Anne: Now you’ve had a sense of the method I’m going to do a dangerous walkthrough thing… And then we’ll look at some other projects here.

So, a quick methodological summary:

  1. Research question: type of tracker and sites
  2. Website (URL) collection making: existing expert list.
  3. Input list for Tracker Tracker
  4. Run Tracker Tracker
  5. Analyse in Gephi

So we always start with a research question… Perhaps we start with websites we wouldn’t want to find trackers on – where privacy issues are heightened, e.g. children’s websites, porn websites, etc. So, homework here – work through some research question ideas.

Today we’ll walk through what we will call “adult sites”. So, we go to Alexa – which is great for locating top sites in categories, in specific countries, etc. We take that list, we put it into Tracker Tracker – choosing whether or not to look at the first level of subpages – and press the button. The tool then scans those websites against the Ghostery database, which now contains some 2,600 possible trackers.

Carolin: Maybe some of you are wondering if it’s OK to do this with Ghostery? Well, yes – we developed Tracker Tracker in collaboration with Ghostery when it was an NGO, with one of their developers visiting us in Amsterdam. One other note here: if you use Ghostery on your machine, your results may differ from your neighbour’s. Trackers vary by machine, by location, by context. That’s something we have to take into account when requesting data. For news websites you may, for instance, have more and more trackers generated the longer the site is open – this tool only captures a short window of time, so it may not gather all of the trackers.

Anne: Also in Europe you may encounter so-called cookie walls. You have to press OK to accept cookies… And the tool can’t emulate the user experience of clicking beyond the cookie wall… So zero trackers may indicate that issue, rather than no trackers.

Q: Is it server side or client side?

A: It is server side.

Q: And do you cache the tracker data?

A: Once you run the tool you can save the CSV and Gephi files, but we don’t otherwise cache.

Anne: Ghostery updates very frequently, so it makes sense to always check against the most up-to-date list of trackers.

So, once we’ve run the Tracker Tracker tool you get outputs that can be used in a variety of flexible formats. We will download the “exhaustive” CSV – which has all of the data we’ve found here.

If I open that CSV (in Excel) we can see the site, the scheme, the pattern that was used to find the tracker, the name of the tracker… This is very detailed information. So for these adult sites we see things like Google Analytics, the Porn Ad network, Facebook Connect. So, already, there is analysis you could do with this data. But you could also do further analysis using Gephi.

Now, we have steps of this procedure in the tutorial that goes with today’s session. So here we’ve coloured the sites in grey, and we’ve highlighted the trackers in different colours. The purple lines/nodes are advertising trackers for instance.

If you want to create this at home, you have all the steps here. And doing this work we’ve found trackers we’d never seen before – for instance the porn industry ad network DoublePimp (a play on DoubleClick) – and we’ve been able to see regional and geographic differences between trackers, which of course has interesting implications.

So, some more examples… We have taken this approach looking at Jihadi websites, working with e.g. governments to identify the trackers. And we found that these sites are financially dependent on advertising, including SkimLinks, DoubleClick and Google AdSense.

Carolin: And in almost all networks we encounter DoubleClick, AdSense, etc. And it’s important to know that webmasters enable these trackers – they have picked these services. But there is an issue of who selects you as a client – something journalists collaborating on this work raised with Google.

Anne: The other usage of these trackers has been in historical tracking analysis using the Internet Archive. This enables you to see a website in the context of its techno-commercial configuration, and to analyse it in that context. So for instance looking at New York Times trackers, and the website as an ecosystem embedded in a wider context – in this case trackers decreased, but that was commercial concentration: companies buying each other, therefore reducing the range of trackers.

Carolin: We did some work called the Trackers Guide. We wanted to look not only at trackers, but also at Content Delivery Networks, to visualise how websites are not single items but collections of data with inflows and outflows. The result became part artwork, part biological field guide. We imagined content and trackers as little biological cell-like clumps on the site, creating a whole booklet of this guide. So the image here shows the content from other spaces, content flowing in and connected…

Anne: We were also interested in what kind of data is being collected by these trackers. And also who owns these trackers. And also the countries these trackers are located in. So, we used this method with Ghostery. And then we dug further into those trackers. For Ghostery you can click on a tracker and see what kind of data it collects. We then looked at privacy policies of trackers to see what it claims to collect… And then we manually looked up ownership – and nationality – of the trackers to understand rules, regulations, etc. – and seeing where your data actually ends up.

Carolin: Working with Ghostery, and repurposing their technology, was helpful, but their database is not complete. And it is biased to the English-speaking world – it is particularly lacking in Chinese contexts, for instance. So there are limits here. It is not always clear what data is actually being collected. BUT this work allows us to study invisible participation in data flows – something that cannot be found in other ways; to study media concentration and the emergence of specific tracking ecologies. And in doing so it allows us to imagine alternative spatialities of the web – tracker origins and national ecologies. And it provides insights into the invisible infrastructures of the web.

Slides for this presentation: http://www.slideshare.net/cgrltz/aoir-2016-digital-methods-workshop-tracking-the-trackers-66765013

Multiplatform Issue Mapping (Jean Burgess & Ariadna Matamoros Fernandez)

Jean: I’m Jean Burgess and I’m Professor of Digital Media and Director of the DMRC at QUT. Ariadna is one of our excellent PhD students at QUT, but she was previously at DMI so she’s a bridge to both organisations. And I wanted to say how lovely it is to have the DMRC and DMI connected like this today.

So we are going to talk about issue mapping, and the idea of using issue mapping to teach digital research methods, particularly with people who may not be interested in social media outside of their specific research area. And about issue mapping as an approach outside the “influencers” narrative that dominates the marketing side of social media.

We are in the room with people who have been working in this space for a long time, but I just want to note that we are making connections to ANT and cultural and social studies. So, a few ontological things… Our approach combines digital methods and controversy analysis. We understand controversies to be discrete, acute, often temporary, and to be sites of intersectionality, bringing together different issues in new combinations. And drawing on Latour, Callon, etc. we see controversies as generative. They can reveal the dynamics of issues, bring them together in new combinations, transform them and move them forward. And we undertake network and content analysis to understand relations among stakeholders, arguments and objects.

There are both very practical applications and more critical-reflexive possibilities of issue mapping. And we bring our own media studies viewpoint to that, with an interest in the vernacular of the space.

So, issue mapping with social media frequently starts with topical Twitter hashtags/hashtag communities. We then build iterative “issue inventories” – actors, hashtags and media objects from one dataset used as seeds in their own right. We then undertake some hybrid network/thematic analysis – e.g. associations among hashtags; thematic network clusters. And we inevitably meet the issue of multi-platform/cross-platform engagement. We’ll talk more about that.
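One simple way to operationalise "associations among hashtags" is a co-occurrence count: how often two hashtags appear in the same tweet. That count gives the weighted edge list for a hashtag network (e.g. to load into Gephi). The tweets below are invented toy data.

```python
# Count hashtag pairs that co-occur within the same tweet.
from collections import Counter
from itertools import combinations

tweets = [
    ["agchatoz", "farming"],
    ["agchatoz", "farming", "auspol"],
    ["agchatoz", "auspol"],
]

edges = Counter()
for tags in tweets:
    # sort so each pair is counted once regardless of order in the tweet
    for a, b in combinations(sorted(set(tags)), 2):
        edges[(a, b)] += 1

print(edges.most_common(3))
```

Each `(tag_a, tag_b) -> count` entry becomes one weighted edge in the issue network.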

One project we undertook on #agchatoz, which is a community in Australia around weekly Twitter chats, but connected to a global community, explored the hashtag as a hybrid community. So here we looked at, for instance, the network of followers/followees in this network. And within that we were able to identify clusters of actors (across: Left-leaning Twitterati (30%); Australian ag, farmers (29%); Media orgs, politicians (13%); International ag, farmers (12%); Foodies (10%); Right-wing Australian politics and others), and this reveals some unexpected alliances or crossovers – e.g. between animal rights campaigners and dairy farmers. That suggests opportunities to bridge communities, to raise challenges, etc.

We have linked, in the files for this session, to various papers. One of these, Burgess and Matamoros-Fernandez (2016), looks at Gamergate, and I’m going to show a visualisation of the YouTube video network (Rieder 2015; Gephi), which shows videos mentioned in tweets around that controversy, highlighting those that were closely related to each other.

Ariadna: My PhD is looking at another controversy, this one concerned with Adam Goodes, an Australian Rules footballer who was a high-profile player until he retired last year. He has been a high-profile campaigner against racism, and has called out racism on the field. He has been criticised for that by one part of society. And in 2014 he performed an indigenous war dance on the pitch, which again received booing from the crowd and backlash. So, I start with Twitter, follow the links, and then move to those linked platforms and onwards…

Now I’m focusing on visual material, because the controversy was visual – it was about a gesture. So visual content (images, videos, GIFs) acts as a mediator of race and racism on social media. I have identified key media objects through qualitative analysis – important gestures, different image genres. And the next step has been to reflect on the differences between platform traces – YouTube relates videos, Facebook has its like network, Twitter filters, notice-and-take-down automatic messages. That gives a sense of the community, the discourse, the context, exploring their specificities and how they contribute to the cultural dynamics of race and racism online.

Jean: And if you want to learn more, there’s a paper later this week!

So, we usually do training on this at DMRC #CCISS16 workshops, where we ask participants to think about YouTube and related videos – as a way to encourage people to think about networks other than social networks, and also to get to grips with Gephi.

Ariadna: Usually we split people into small groups, and actually it is difficult to identify a current controversy that is visible and active in digital media – we look at YouTube and Tumblr (Twitter really requires prior collection of data). So, we go to YouTube to look for a key term, and we can then filter and watch the results change… Usually you don’t reflect on that much. So, if you look at “Black Lives Matter”, you get a range of content… And we ask participants to pick out relevant results – and what is relevant will depend on the research question you are asking. That first choice of what to select is important. Once this is done we get participants to use the YouTube Data Tools: https://tools.digitalmethods.net/netvizz/youtube/. This tool enables you to explore the network… You can use a video as a “seed”, or you can use a crawler that finds related videos… And that can be interesting… So if you see an anti-Islamic video, does YouTube recommend more, or other videos related in other ways?
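[To make the seed-and-crawl logic concrete, here is a rough Python sketch. The `get_related` lookup is a hypothetical stand-in for whatever the tool or API actually provides – this illustrates the crawl idea, not the YouTube Data Tools implementation itself:]

```python
from collections import deque

def crawl_related(seed, get_related, depth=2):
    """Breadth-first crawl of a 'related videos' network from one seed.

    get_related: video id -> list of related video ids (a stand-in for
    the platform's relatedness lookup). Returns undirected edges as
    sorted id pairs, to the requested crawl depth.
    """
    edges = set()
    seen = {seed}
    queue = deque([(seed, 0)])
    while queue:
        video, d = queue.popleft()
        if d >= depth:
            continue  # don't expand beyond the chosen depth
        for rel in get_related(video):
            edges.add(tuple(sorted((video, rel))))
            if rel not in seen:
                seen.add(rel)
                queue.append((rel, d + 1))
    return edges

# Toy relatedness lookup standing in for the real API
toy = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": []}
edges = crawl_related("A", lambda v: toy.get(v, []), depth=2)
```

[Depth is the key knob here: depth 1 gives only the seed’s neighbours, depth 2 pulls in videos related to the related videos, which is where the interesting clusters start to appear.]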

That seed leads you to related videos, and, depending on the depth you are interested in, videos related to the related videos… You can make selections of what to crawl, what the relevance should be. The crawler runs and outputs a Gephi file. So, this is an undirected network. Here nodes are videos, edges are relationships between videos. We generally use the layout: Force Atlas 2. And we run the Modularity Report to colour code the relationships on thematic or similar basis. Gephi can be confusing at first, but you can configure and use options to explore and better understand your network. You can look at the Data Table – and begin to understand the reasons for connection…
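[The crawler “outputs a Gephi file” – in practice a GEXF file. As a rough stdlib-only sketch of what that format holds (real exports carry many more per-node attributes, so this is an illustration of the structure, not the tool’s output):]

```python
import xml.etree.ElementTree as ET

def write_gexf(nodes, edges, path):
    """Write a minimal undirected network as GEXF, the XML graph
    format Gephi opens. Nodes are plain string ids used as labels."""
    gexf = ET.Element("gexf", xmlns="http://www.gexf.net/1.2draft", version="1.2")
    graph = ET.SubElement(gexf, "graph", defaultedgetype="undirected")
    nodes_el = ET.SubElement(graph, "nodes")
    for n in nodes:
        ET.SubElement(nodes_el, "node", id=n, label=n)
    edges_el = ET.SubElement(graph, "edges")
    for i, (a, b) in enumerate(edges):
        ET.SubElement(edges_el, "edge", id=str(i), source=a, target=b)
    ET.ElementTree(gexf).write(path, xml_declaration=True, encoding="UTF-8")

# Three videos, two relatedness edges – open the result in Gephi
write_gexf(["v1", "v2", "v3"], [("v1", "v2"), ("v1", "v3")], "videos.gexf")
```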

So, I have done this for Adam Goodes videos, to understand the clusters and connections.

So, we have looked at YouTube. Normally we move to Tumblr. But sometimes a controversy does not resonate on a different social media platform… So maybe a controversy on Twitter, doesn’t translate on Facebook; or one on YouTube doesn’t resonate on Tumblr… Or keywords will vary greatly. It can be a good way to start to understand the cultures of the platforms. And the role of main actors etc. on response in a given platform.

With Tumblr we start with the interface – e.g. looking at BlackLivesMatter. We look at the interface, functionality, etc. And then, again, we have a tool that can be used: https://tools.digitalmethods.net/netvizz/tumblr/. We usually encourage use of the same timeline across Tumblr and YouTube so that they can be compared.

So we can again go to Gephi, visualise the network. And in this case the nodes and edges can look different. So in this example we see 20 posts that connect 141 nodes, reflecting the particular reposting nature of that space.

Jean: The very specific cultural nature of the different online spaces can make for very interesting stuff when looking at controversies. And those are really useful starting points into further exploration.

And finally, a reminder, we run our summer schools in DMRC in February. When it is summer! And sunny! Apply now at: http://dmrcss.org/!

Analysing and visualising geospatial data (Peta Mitchell)

Normally when I would do this as a workshop I’d give some theoretical and historical background on the emergence of geospatial data, and then move on to the practical workshop on Carto (formerly CartoDB). Today though I’m going to talk about a case study, around the G20 meeting in Brisbane, and then talk about using Carto to create a social media map.

My own background is a field increasingly known as the geo humanities or the spatial humanities. And I did a close reading project of novels and films to create a Cultural Atlas of Australia. And how locations relate to narrative. For instance almost all films are made in South Australia, regardless of where they are set, mapping patterns of representation. We also created a CultureMap – an app that went with a map to alert you to literary or filmic places nearby that related back to that atlas.

I’ll talk about that G20 work. I now work on rapid spatial analytics; participatory geovisualisation and crowdsourced data; VGI – Volunteered Geographic Information; placemaking, etc. But today I’ll be talking about emerging forms of spatial information/geodata, neogeographical tools, etc.

So Gordon and de Souza e Silva (2011) talk about us witnessing the increasing proliferation of geospatial data. And this is sitting alongside a geospatial revolution – GPS-enabled devices, geospatial data permeating social media, etc. GPS-enabled consumer devices emerged in the late ’90s/early ’00s with a slight social friend-finder function. But the geospatial web really begins around 2000, the beginning of the end of the idea of the web as a “placeless space”. To an extent this came from a legal case brought by a French individual against Yahoo!, who were allowing Nazi memorabilia to be sold. That was illegal in France, and Yahoo!, claiming that the internet is global, argued that filtering by location wasn’t possible. A French judge found in favour of the individual, Yahoo! were told it was both doable and easy, and Yahoo! went on to financially benefit from IP-based location information. As Richard Rogers puts it, that case was the “revenge of geography against the idea of cyberspace”.

Then in 2005 Google Maps was described by Jon Udell as a platform with the potential to be a “service factory for the geospatial web”. So in 2005 the “geospatial web” really is there as a term. By 2006 the concept of “Neogeography” was defined by Andrew (?) to describe the kind of non-professional, user-orientated, web 2.0-enabled mapping. There are critiques in cultural geography, and in the geospatial literature, about this term, and the use of the “neo” part of it. But there are multiple applications here, from humanities to humanitarianism; from cultural mapping to crisis mapping. An example here is Ushahidi maps, where individuals can send in data and contribute to mapping of crises. Now Ushahidi is more of a platform for crisis mapping, and other tools have emerged.

So there are lots of visualisation tools and platforms. There are traditional desktop GIS – ArcGIS, QGIS. There is basic web-mapping (e.g. Google Maps); Online services (E.g. CARTO, Mapbox); Custom map design applications (e.g. MapMill); and there are many more…

Spatial data is not new, but there is a growth in ambient and algorithmic spatial data. So for instance ABC (TV channel in Australia) did some investigation, inviting audiences to find out as much as they could based on their reporter Will Ockenden’s metadata. So, his phone records, for instance, revealed locations, a sensitive data point. And geospatial data is growing too.

We now have a geospatial substratum underpinning all social media networks. So this includes check-in/recommendation platforms: Foursquare, Swarm, Gowalla (now defunct), Yelp; meetup/hookup apps: Tinder, Grindr, Meetup; YikYak; Facebook; Twitter; Instagram; and geospatial gaming: Ingress; Pokemon Go (from which Google has been harvesting improvements for its pedestrian routes).

Geospatial media data is generated from sources ranging from VGI (Volunteered geographic information) to AGI (ambient geographic information), where users are not always aware that they are sharing data. That type of data doesn’t feel like crowd sourced data or VGI, hence the potential challenges, potential and ethical complexity of AGI.

So, the promises of geosocial analysis include a focus on real-time dynamics – people working with geospatial data aren’t used to this… And we also see social media as a “sensor network” for crisis events. There is also potential to provide new insights into spatio-temporal spread of ideas and actions; human mobilities and human behaviours.

People do often start with Twitter – because it is easier to gather data from it – but only between 1% and 3% of tweets are located. But when we work at festivals we see around 10% carrying location data – partly the nature of the event, partly because tweets are often coming through Instagram… On Instagram we see between 20% and 30% of images georeferenced, but based on upload location, not where the image was taken.

There is also the challenge of geospatial granularity. For a tweet with a lat/long, that’s fairly clear. When we have a post tagged with a place we essentially have a polygon. And then when you geoparse, what is the granularity – street, city? Then there are issues of privacy and the extent to which people are happy to share that data.
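[The point-versus-polygon distinction is worth making concrete: deciding whether a geotagged point falls inside a place polygon is a standard even-odd ray-casting test. A rough stdlib-only sketch (a real pipeline would normally use a geometry library such as Shapely; the coordinates below are illustrative, not the actual declared-area boundary):]

```python
def point_in_polygon(lon, lat, polygon):
    """Even-odd ray casting: is (lon, lat) inside the polygon?

    polygon is a list of (lon, lat) vertices in order.
    """
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Does a ray cast east from the point cross this edge?
        if (yi > lat) != (yj > lat) and lon < (xj - xi) * (lat - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# A toy rectangular "declared area" in Brisbane-ish coordinates
declared = [(153.01, -27.48), (153.04, -27.48), (153.04, -27.45), (153.01, -27.45)]
```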

So, in 2014 Brisbane hosted the G20, at a cost of around $140m (AUD) for one highly disruptive weekend. In preceding G20 meetings there had been large-scale protests. At the time the premier of Queensland was former military, and he put the whole central business district in lockdown, designated a “declared area” under new laws made for this event. And hotels for G20 world leaders were inside the zone. Now, Twitter mapping is usually done during crisis events – where you don’t know in advance where things will happen, where to track, etc. In this case we knew in advance where to look. So, a Safety and Security Act (2013) was put in place for this event, requiring prior approval for protests; allowing arrests for the duration of the event and on-the-spot strip searches; and banning eggs in the central business district – no manure, no kayaks or flotation devices, no remote-control cars or reptiles!

So we had these fears of violent protests, given all of these draconian measures. We had elevated terror levels. And we had war threatened after Abbott said he would “shirtfront” Vladimir Putin over MH17. But all that concern made city leaders worried that the city might be a ghost town, when they wanted it marketed as a new world city. They were offering free parking etc. to incentivise people to come in. And tweets reinforced the ghost town trope. So, what geosocial mapping enabled was a close-to-realtime sensor network of what might be happening during the G20.

So, the map we did was the first close-to-real-time social media map that was public facing, using CARTODB, and it was never more than an hour behind reality. We had few false matches. But we had clear locations and clear keywords – e.g. G20 – to focus on. A very few “the meeting will now be held in G20” but otherwise no false matches. We tracked the data through the meeting… which ran over a weekend and a bank holiday. This map parses around 17,000(?) tweets, most of which were not geotagged but geoparsed. Only 10% represent where someone was when they tweeted; the remaining 90% are locations mentioned in posts, derived by geoparsing the tweets.

Now, even though that declared area isn’t huge, there are over 300 streets there. I had to build a manually constructed gazetteer, using Open Street Map (OSM) data, and then new data. Picking a bounding box that included that area generated a whole range of features – but I wasn’t that excited about fountains, benches etc. I wanted to know about features people might actually mention in their tweets. So, I had a bounding box, and the declared area boundary… It would have been ideal if the G20 organisers had given me their bounding polygon, but we didn’t especially want to draw attention to what we were doing.

So, at the end we had lat, long, amenity (using OSM terms), name (e.g. Obama was at the Marriott, so tweets about that), associated search terms – including local/vernacular versions of names of amenities; status (declared or restricted); and confidence (of location/coordinates – a score of 1 for geospatially tagged tweets, 0.8 for buildings, etc.). We could also create category maps of different datasets. On our map we showed geotagged and parsed tweets inside the area, but we only used geotweets outside the declared area. One of my colleagues created a Python script to “read” and geoparse tweets, and that generated a CSV. That CSV could then be fed into CARTODB, which has a time dimension, could update directly every half hour, and could use a Dropbox source to do that.
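[A minimal sketch of that gazetteer-to-CSV step – matching tweet text against search terms and emitting rows with a confidence score. The gazetteer entries, coordinates and scores here are illustrative stand-ins, not the project’s actual data or script:]

```python
import csv, io

# Hypothetical mini-gazetteer in roughly the shape described above
GAZETTEER = [
    {"name": "Marriott", "lat": -27.465, "lon": 153.031,
     "terms": ["marriott", "the marriott"], "confidence": 0.8},
    {"name": "South Bank", "lat": -27.475, "lon": 153.020,
     "terms": ["south bank", "southbank"], "confidence": 0.6},
]

def geoparse(tweet_text):
    """Return the first gazetteer entry whose search terms appear in the tweet."""
    text = tweet_text.lower()
    for entry in GAZETTEER:
        if any(term in text for term in entry["terms"]):
            return entry
    return None

def to_csv(tweets):
    """Emit matched tweets as CSV rows ready for upload to a mapping tool."""
    out = io.StringIO()
    w = csv.writer(out)
    w.writerow(["text", "name", "lat", "lon", "confidence"])
    for t in tweets:
        hit = geoparse(t)
        if hit:  # geotagged tweets would instead pass through with confidence 1.0
            w.writerow([t, hit["name"], hit["lat"], hit["lon"], hit["confidence"]])
    return out.getvalue()

csv_text = to_csv(["Obama spotted at the Marriott!", "Quiet day in town"])
```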

So, did we see much disruption? Well, no… It was more about celebrity spotting – the two most tweeted images were Obama with a koala and Putin with a koala. It was very hot and very heavily secured, so little disruption happened. We did see selfies with Angela Merkel, and images of the phallic motorcade. And after the G20 a complaint was filed with the corruption watchdog about the chilling effect of the security on participation, particularly in environmental protests. There was still engagement on social media, but not in person. Disruption, protest, criticism were replaced by spectacle and distant viewing of the event.

And, with that, we turn to an 11-person panel session to wrap up and answer questions.

Panel Session

Q1) Each of you presented different tools and approaches… Can you comment on how they are connected and how we can take advantage of that.

A1 – Jean) Implicitly or explicitly we’ve talked about possibilities of combining tools together in bigger projects. And tools that Peta and I have been working on are based on DMI tools for instance… It’s sharing tools, shared fundamental techniques for analytics for e.g. a Twitter dataset…

A1 – Richard) We’ve never done this sort of thing together… The fact that so much has been shared has been remarkable. We share quite similar outlooks on digital methods, and also on “to what end” – largely for the study of social issues and mapping social issues. But also other social research opportunities available when looking at a variety of online data, including geodata. It’s online web data analysis using digital methods for issue mapping and also other forms of social research.

A1 – Carolyn) All of these projects are using data that hasn’t been generated by research, but which has been created for other purposes… And that pushes the analysis in its own way… And the tools that we combine bring in their own layers and encodings… Digital methods use these, but there is also a need to step back and reflect – present in all of the presentations.

Q2) A question especially for Carolyn and Anne: what do you think about the study of proprietary algorithms. You talked a bit about the limitations of proprietary algorithms – for mobile applications etc? I’m having trouble doing that…

A2 – Anne) I think in the case of the tracker tool, it doesn’t try to engage with the algorithm, it looks at the presence of trackers. But here we have encountered proprietary issues… So for Ghostery, if you download the Firefox plugin you can access the content. We took the library of trackers from that to use as a database, we took that apart. We did talk to Ghostery, to make them aware… The question of algorithms… of how you get to the blackboxed things… We are developing methods to do this… One way in is to see the outputs, and compare those. Also Christian Sandvig is doing the auditing algorithms work.

A2 – Carolyn) Was just a discussion on Twitter about currency of algorithms and research on them… We’ve tried to ride on them, to implement that… Otherwise difficult. One element was on studying mobile applications. We are giving a presentation on this on Friday. Similar approach here, using infrastructures of app distribution and description etc. to look into this… Using existing infrastructures in which apps are built or encountered…

A2 – Anne) We can’t screenscrape and we are moving to this more closed world.

A2 – Richard) One of the best ways to understand algorithms is to save the outputs – e.g. we’ve been saving Google search outputs for years. Trying to save newsfeeds on Facebook, or other sorts of web apps can be quite difficult… You can use the API but you don’t necessarily get what the user has seen. The interface outputs are very different from developer outputs. So people think about recording rather than saving data – an older method in a way… But then you have the problem of only capturing a small sample of data – like analysing TV News. The new digital methods can mean resorting to older media methods… Data outputs aren’t as friendly or obtainable…

A2 – Carolyn) This one strand is accessing algorithms via transparency; you can also think of them as situated and in context, seeing them in operation and in action in relation to the data, associated with outputs. I’d recommend Salam Marocca on the impact of big data, which sits in legal studies.

A2 – Jean) One of the ways we approach this is the “App Walkthrough”, a method Ben Light and I have worked on which will shortly be published in New Media & Society – thinking about those older media approaches, with user studies part of that…

Q3) What is your position as researchers on opening up data, and doing ethically acceptable data on the other side? Do you take a stance, even a public stance on these issues.

A3 – Anne) For many of these tools, like the YouTube tool and the Facebook tools, our developer took the conscious decision to anonymise that data.

A3 – Jean) I do have public positions. I’ve published on the political economy of Twitter… One interesting thing is that privacy discourses were used by Twitter to shut down TwapperKeeper at a time it was seeking to monetise… But you can’t just publish an archive of tweets with usernames; I don’t think anyone would find that acceptable…

A3 – Richard) I think it is important to respect or understand contextual privacy. People posting, on Twitter say, don’t have an expectation of its use in commercial or research contexts. Awareness of that is important for a researcher, no matter what terms of service the user has signed/consented to, or even if you have paid for that data. You should be aware and concerned about contextual privacy… which leads to a number of different steps. And that’s why, for instance, in Netvizz – the Facebook tool – usernames are not available for comments made, even though FacePager does show that. Tools vary in that understanding. Those issues need to be thought about, but are not necessarily uniformly thought about by our field.

A3 – Carolyn) But that becomes more difficult in spaces that require you to take part in order to research them – WhatsApp, for instance – researchers start pretending to be regular users… to generate insights.

Comment (me): on native vs web apps and approaches and potential for applying Ghostery/Tracker Tracker methods to web apps which are essentially pointing to URLs.

Q4) Given that we are beholden to commercial companies, changes to algorithms, APIs etc, and you’ve all spoken about that to an extent, how do you feel about commercial limitations?

A4 – Richard) Part of my idea of digital methods is to deal with ephemerality… And my ideal to follow the medium… Rather than to follow good data prescripts… If you follow that methodology, then you won’t be able to use web data or social media data… Unless you either work with the corporation or corporate data scientist – many issues there of course. We did work with Yahoo! on political insights… categorising search queries around a US election, which was hard to do from outside. But the point is that even on the inside, you don’t have all the insight or the full access to all the data… The question arises of what can we still do… What web data work can we still do… We constantly ask ourselves, I think digital methods is in part an answer to that, otherwise we wouldn’t be able to do any of that.

A4 – Jean) All research has limitations, and describing those is part of the role here… But also, when Axel and I started doing this work we got criticism for not having a “representative sample”… And we have people from across the humanities and social sciences who seem to be using the same approaches and techniques, but actually we are doing really different things…

Q5) Digital methods in social sciences looks different from anthropology where this is a classical “informant” problem… This is where digital ethnography is there and understood in a way that it isn’t in the social sciences…

Resources from this workshop:

Aug 092016
 
Notes from the Unleashing Data session at Repository Fringe 2016

After 6 years of being Repository Fringe‘s resident live blogger this was the first year that I haven’t been part of the organisation or amplification in any official capacity. From what I’ve seen though my colleagues from EDINA, University of Edinburgh Library, and the DCC did an awesome job of putting together a really interesting programme for the 2016 edition of RepoFringe, attracting a big and diverse audience.

Whilst I was mainly participating through reading the tweets to #rfringe16, I couldn’t quite keep away!

Pauline Ward at Repository Fringe 2016

Pauline Ward at Repository Fringe 2016

This year’s chair, Pauline Ward, asked me to be part of the Unleashing Data session on Tuesday 2nd August. The session was a “World Cafe” format and I was asked to help facilitate discussion around the question: “How can the repository community use crowd-sourcing (e.g. Citizen Science) to engage the public in reuse of data?” – so I was along wearing my COBWEB: Citizen Observatory Web and social media hats. My session also benefited from what I gather was an excellent talk on “The Social Life of Data” earlier in the event from Erinma Ochu (who, although I missed her this time, is always involved in really interesting projects including several fab citizen science initiatives).

I won’t attempt to reflect on all of the discussions during the Unleashing Data Session here – I know that Pauline will be reporting back from the session to Repository Fringe 2016 participants shortly – but I thought I would share a few pictures of our notes, capturing some of the ideas and discussions that came out of the various groups visiting this question throughout the session. Click the image to view a larger version. Questions or clarifications are welcome – just leave me a comment here on the blog.

Notes from the Unleashing Data session at Repository Fringe 2016

Notes from the Unleashing Data session at Repository Fringe 2016

Notes from the Unleashing Data session at Repository Fringe 2016

If you are interested in finding out more about crowd sourcing and citizen science in general then there are a couple of resources that may be helpful (plus many more resources and articles if you leave a comment/drop me an email with your particular interests).

This June I chaired the “Crowd-Sourcing Data and Citizen Science” breakout session for the Flooding and Coastal Erosion Risk Management Network (FCERM.NET) Annual Assembly in Newcastle. The short slide set created for that workshop gives a brief overview of some of the challenges and considerations in setting up and running citizen science projects:

Last October the CSCS Network interviewed me on developing and running Citizen Science projects for their website – the interview brings together some general thoughts as well as specific comment on the COBWEB experience:

After the Unleashing Data session I was also able to stick around for Stuart Lewis’ closing keynote. Stuart has been working at Edinburgh University since 2012 but is moving on soon to the National Library of Scotland so this was a lovely chance to get some of his reflections and predictions as he prepares to make that move. And to include quite a lot of fun references to The Secret Diary of Adrian Mole aged 13 ¾. (Before his talk Stuart had also snuck some boxes of sweets under some of the tables around the room – a popularity tactic I’m noting for future talks!)

So, my liveblog notes from Stuart’s talk (slightly tidied up but corrections are, of course, welcomed) follow. Because old Repofringe live blogging habits are hard to kick!

The Secret Diary of a Repository aged 13 ¾ – Stuart Lewis

I’m going to talk about our bread and butter – the institutional repository… Now my inspiration is Adrian Mole… Why? Well we have a bunch of teenage repositories… EPrints is 15 ½; Fedora is 13 ½; DSpace is 13 ¾.

Now Adrian Mole is a teenager – you can read about him on Wikipedia [note to fellow Wikipedia contributors: this, and most of the other Adrian Mole-related pages could use some major work!]. You see him quoted in two conferences to my amazement! And there are also some Scotland and Edinburgh entries in there too… Brought a haggis… Goes to Glasgow at 11am… and says he encounters 27 drunks in one hour…

Stuart Lewis at Repository Fringe 2016

Stuart Lewis illustrates the teenage birth dates of three of the major repository softwares as captured in (perhaps less well-aged) pop hits of the day.

So, I have four points to make about how repositories are like/unlike teenagers…

The thing about teenagers… People complain about them… They can be expensive, they can be awkward, they aren’t always self-aware… Eventually though they usually become useful members of society. So, is that true of repositories? Well ERA, one of our repositories, has gotten bigger and bigger – over 18k items… and over 10k paper theses currently being digitized…

Now teenagers also start to look around… Pandora!

I’m going to call Pandora the CRIS… And we’ve all kind of overlooked their commercial background because we are in love with them…!

Stuart Lewis at Repository Fringe 2016

Stuart Lewis captures the eternal optimism – both around Mole’s love of Pandora, and our love of the (commercial) CRIS.

Now, we have PURE at Edinburgh which also powers Edinburgh Research Explorer. When you looked at repositories a few years ago, it was a bit like Freshers Week… The three questions were: where are you from; what repository platform do you use; how many items do you have? But that’s moved on. We now have around 80% of our outputs in the repository within the REF compliance window (3 months of acceptance)… And that’s a huge change – volumes of materials are open access very promptly.

So,

1. We need to celebrate our success

But are our successes as positive as they could be?

Repositories continue to develop. We’ve heard good things about new developments. But how do repositories demonstrate value – and how do we compare to other areas of librarianship?

Other library domains use different numbers. We can use these to give comparative figures. How do we compare to publishers for cost? What’s our CPU (Cost Per Use)? And what is a good CPU? £10, £5, £0.46… But how easy is it to calculate – are repositories expensive? That’s a “to do” – to take the cost to run and divide it by IRUS usage figures. I would expect it to be lower than publishers, but I’d like to do that calculation.
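[The arithmetic Stuart proposes is simple enough to sketch – running cost divided by usage counts (e.g. IRUS-UK downloads). The figures below are purely illustrative, not real Edinburgh numbers:]

```python
def cost_per_use(annual_running_cost, annual_uses):
    """Cost Per Use: what each download of a repository item costs,
    mirroring the CPU figure libraries compute for subscriptions."""
    return annual_running_cost / annual_uses

# Illustrative figures only: £50k to run, 250k downloads a year
cpu = cost_per_use(annual_running_cost=50_000, annual_uses=250_000)  # £0.20
```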

The other side of this is to become more self-aware… Can we gather new numbers? We only tend to look at deposit and use from our own repositories… What about our own local consumption of OA (the reverse)?

Working within new e-resource infrastructure – http://doai.io/ – lets us see where open versions are available. And we can integrate with OpenURL resolvers to see how much of our usage can be fulfilled.

2. Our repositories must continue to grow up

Do we have double standards?

Hopefully you are all aware of the UK Text and Data Mining Copyright Exception that came into force on 1st June 2014. We have massive access to electronic resources as universities, and can text and data mine those.

Some do a good job here – Gale Cengage Historic British Newspapers: additional payment to buy all the data (images + XML text) on hard drives for local use. Working with local informatics LTG staff to (geo)parse the data.

Some are not so good – basic APIs allow only simple searches… but not complex queries (e.g. you could use a search term, but not e.g. sentiment).

And many publishers do nothing at all….

So we are working with publishers to encourage and highlight the potential.

But what about our content? Our repositories are open, with extracted full-text, and data can be harvested… Sufficient, but is it ideal? Why not enable bulk download in one click… You can – for example – download all of Wikipedia (if you want to). We should be able to do that with our repositories.
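[The “data can be harvested” point is concrete: most repository platforms (EPrints, DSpace, Fedora front-ends) already expose OAI-PMH. A rough Python sketch of building a harvest request and reading identifiers back – the endpoint URL is a placeholder, and a real bulk harvest would loop on resumption tokens:]

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def listrecords_url(base_url, metadata_prefix="oai_dc", resumption_token=None):
    """Build an OAI-PMH ListRecords request URL."""
    params = {"verb": "ListRecords"}
    if resumption_token:          # paging through a large result set
        params["resumptionToken"] = resumption_token
    else:
        params["metadataPrefix"] = metadata_prefix
    return base_url + "?" + urlencode(params)

def record_identifiers(oai_xml):
    """Pull the record identifiers out of a ListRecords response."""
    root = ET.fromstring(oai_xml)
    return [h.findtext(OAI_NS + "identifier") for h in root.iter(OAI_NS + "header")]
```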

3. We need to get our house in order for Text and Data Mining

When will we be finished though? It depends on what we do with open access. What should we be doing with OA? Where do we want to get to? Right now we have mandates so it’s easy – green and gold. With gold there is pure gold or hybrid… with mixed views on hybrid. You can also publish locally for free. Then for green there are local or disciplinary repositories… For gold – pure, hybrid, local – we pay APCs (though some local options are free)… In hybrid we can do offsetting, discounted subscriptions, voucher schemes too. And for green we have the UK Scholarly Communications Licence (Harvard-style)…

But which of these forms of OA are best?! Is choice always a great thing?

We still have outstanding OA issues. Is a mixed-modal approach OK, or should we choose a single route? Which one? What role will repositories play? What is the ultimate aim of Open Access? Is it “just” access?

How and where do we have these conversations? We need academics, repository managers, librarians, publishers to all come together to do this.

4. Do we know what a grown-up repository looks like? What part does it play?

Please remember to celebrate your repositories – we are in a fantastic place, making a real difference. But they need to continue to grow up. There is work to do with text and data mining… And we have more to do… To be a grown up, to be in the right sort of environment, etc.

Q&A

Q1) I can remember giving my first talk on repositories in 2010… When it comes to OA I think we need to think about what is cost effective, what is sustainable, why are we doing it and what’s the cost?

A1) I think in some ways that’s about what repositories are versus publishers… Right now we are essentially replicating them… And maybe that isn’t the way to approach this.

And with that Repository Fringe 2016 drew to a close. I am sure others will have already blogged their experiences and comments on the event. Do have a look at the Repository Fringe website and at #rfringe16 for more comments, shared blog posts, and resources from the sessions. 

Jul 122016
 

This week I am at the European Conference on Social Media 2016. I’m presenting later today, and have a poster tomorrow, but will also be liveblogging here. As usual the blog is live so there may be small errors or typos – all corrections and additions are very much welcomed!

We are starting with an introduction to EM Normandie, which has 4 campuses and 3000 students.

Introduction from Sue Nugus, ACPI, welcoming us to the event and noting the various indexing arrangements etc.

Christine Bernadas, ECSM co-chair from EM Normandie, is introducing our opening keynote, Ali Ouni, Co-founder and CEO of Spectrum Groupe. [http://www.spectrumgroupe.fr/]

Keynote Address: Ali Ouni, Spectrum Groupe, France – Researchers in Social Media, Businesses Need You!!!

My talk today is about why businesses need social media. And that, although we have been using social media for the last 10-15 years, we still need some approaches and frameworks to make better use of it.

My own personal background is in Knowledge Management, with a PhD from the Ecole Centrale Paris and Renault. I then moved to KAP IT as Head of Enterprise 2.0, helping companies to integrate new technologies, including social media, into their businesses. I believe this is a hard question – how we integrate social media into our businesses. Then in 2011 I co-founded Spectrum Groupe, a consulting firm of 25 people who work closely with researchers to define new approaches to content management and knowledge management. Our approach is to design end-to-end engagements, from diagnostic, to strategy development, through to technologies, knowledge management, etc.

When Christine asked me to speak today I said “OK, but I am no longer a researcher” – I did that 12-15 years ago, I am now a practitioner. So I have insights, but we need you to define good research questions based on them.

I looked back at what has been said about social media in the last 10-15 years: “Organisations cannot afford not to be listening to what is being said about them or interacting with their customers in the space where they are spending their time and, increasingly, their money too” (Malcolm Alder, KPMG, 2011).

And I agree with that. This space has high potential for enterprises… So, let’s start with two slides of statistics, from We Are Social’s work on digital trends. They find internet activity increasing by 10% every year; 10% growth in social media users; growth of 4% in social media users accessing via mobile; and that takes us to 17% of the total population actively engaging in social media on mobile.

So, in terms of organisations going to social media, it is clearly important. But it is also a confusing question. We can see that in 2010 70%+ of big international organisations were actively using social media, but of these 80% had not achieved their intended business benefits. So, businesses are expending time and energy on social media but they are not accruing all of the benefits that they have targeted.

So, for me social media are new ways of working, new business models, new opportunities, but also bringing new risks and challenges. And there are questions to be answered that we face every day in an organisational context.

The social media landscape today is very, very diverse; there is a high density… There are many platforms, sites, media… Organisations are confused by this landscape and they require help to navigate this space. The choice they make is usually to go to the biggest social media platforms in terms of total users – but is that a good strategy? They need to choose sites with good business value. There are challenges when considering external sites versus internal sites – should they replicate functionality themselves? And what are the values and risks of integrating social media platforms with enterprise IT systems? For instance, listening to social media and connecting that back to CRMs (Customer Relationship Management systems).

What about using social media for communications? You can experiment, and learn from that… But that makes more sense when these tools are new, and they are not any more. Is experimenting always the best approach? How can we move faster? Clients often ask if they can copy/adopt the digital strategies of their competitors, but I think generally not – these approaches have to be specific to the context and audience.

Social media evolves fast, so agility is required… Organisations can struggle with that in terms of their own speed of organisational change. A lot of agility is required to address new technologies, new use cases, new skills. And there are decisions over skills, and whether to own the digital transformation process or to delegate it to others.

The issue of Return on Investment (ROI) is long standing but still important. Existing models do not work well with social media – we are in a new space, new technology, a new domain. There is a need to justify the value of these kinds of projects, but I think a good approach is to work on new social constructs, such as engagement, sentiment, retention, “ROR” – Return on Relationship, collective intelligence… But how does one measure these?

And organisations face challenges of governance… Understanding rules and policies of engagement on social media, on understanding issues of privacy and data protection. And thought around who can engage on social media.

So, I have presented some key challenges… Just a few. There are many more on culture, change, etc. that need to be addressed. I think that it is important that businesses and researchers work together on social media.

Q&A

Q1) Could you tell me something on Return on Relationships… ?

A1) This is a new approach. Sometimes the measure of Return on Investment is to measure every conversation and all time spent… ROR is about long-term relationships with customers, partners, suppliers… and it is about having benefits after a longer period of time, rather than immediate Return on Investment. Some examples include turning some customers into advocates – so they become your best salespeople. That isn’t easy, but organisations are really very aware of these social constructs.

Q1) And how would you calculate that?

Comment) That is surely ROI still?

Comment) So, if I have a LinkedIn contact, and they buy my software, then that is a return on investment, and value from social capital… There is a time and quality gain too – you identify key contact and context here. Qualitative but eventually quantitative.

A1) There absolutely is a relationship between ROR and ROI.

Q2) It was interesting to hear your take on research. What you said reminded me of 20 years ago, when we talked about “Quality Management” and there was a tension between whether that should be its own role, or part of everyone’s role.

A2) Yes, so we have clients that do want “community management” and ask us to do that for them – but they are the experts in their own work and relationships. The quality of content is key, and they have that expertise. Our expertise is around how to use social media as part of that. The good approach is to think about new ways to work with customers, and to define with our consulting customers what they need to do that. We have a coaching role, helping them to design a good approach.

Q3) Thank you for your presentation. I would like to ask you if you could think of a competency framework for good community management, and how you would implement that.

A3) I couldn’t define that framework, but from what I see the key skills in community management are about expertise – people from the business who understand their own structure, needs and knowledge. Communication skills need to be good – writing skills, identifying good questions, an ability to spot and transform key questions. From our experience, knowing the enterprise, communication skills and coordination skills are all key.

Q3) What about emotional engagement?

A3) I think emotional engagement is both good and dangerous. It is good to be invested in the role, but if they are too invested there is a clear line to draw between professional engagement and personal engagement. And that can make it dangerous.

Stream B – Mini Track on Empowering Women Through Social Media (Chair – Danilo Piaggesi)

Danilo: I proposed this mini track as I saw that the issues facing women in social media were different, but that women were self-organising and addressing these issues, so that is the genesis of this strand. My own background is in ICT in development and developing countries – which is why I am interested in this area of social media… The UN Sustainable Development Goals (SDG), which include ICT, have been defined as needing to apply to developing and developed countries. And there is a specific goal dedicated to Women and ICT, which has a deadline of 2030 to achieve this SDG.

Sexting & Intimate Relations Online: Identifying How People Construct Emotional Relationships Online & Intimacies Offline
Esme Spurling, Coventry University, West Midlands, UK

Sexting and intimate relations online have accelerated with the use of phones and smartphones, particularly platforms such as SnapChat and WhatsApp… Sexting, for the purposes of this paper, is the sharing of intimate texts and images through digital media. But this raises complexity for real-life relationships: how the online experience relates to them, and how heterosexual relationships are mediated. My work is based on interviews.

I will be talking about “sex selfies”, which are distributed to a global audience online. These selfies (Esme is showing examples from the “sexselfie” hashtags) purport to be intimate, despite their global sharing and nature. The hashtags here (established around 2014) show heterosexual couples… There is (by comparison to non-heterosexual selfies) a real focus on women’s bodies, which is somewhat at odds with expectations of girls and women showing an interest in sex. Are we losing our memory of what is intimate? Are sex selfies a way to share and retain that memory?

I spoke to women in the UK and US for my research. All men approached refused to be interviewed. We have adapted the way we communicate face to face through the way we connect online. My participants reflect social media trends already reported in the media, of the blurring of the different spheres of public and private. And that is feeding into our intimate lives too. Prensky (2001) refers to this generation as “Digital Natives” (I insert my usual disclaimer that this is the speaker not me!), and it seems that this group are unable to engage in that intimacy without sharing the experience. My work focuses on sharing online, and how intimacy is formed offline. I took an ethnographic approach, and my participants are very much of a similar age to me, which helped me to connect as I spoke to them about their intimate relationships.

There is a growing dependency on mobile technologies, of demand and expectation… And that is leading to a “leisure for pleasure” mentality (Cruise?)… You need that reward and return for sharing, and that applies to sexting. Amy Hasinoff notes that sexting can be considered a broadcast medium. Mainstream media has also been scrutinising sexting and technology, giving coverage to issues such as “revenge porn” – which was made a criminal offence in 2014. This made sexting more taboo and changed public perceptions – with judgement online of images of bodies shared on Twitter. When men participate they sidestep a label, being treated with the highly gendered “boys will be boys” casualness. By contrast, women showing their own agency may be subject to “slut shaming” (2014 onwards), but sexting continues. I was curious to find out why it continues, and how the women in my studies relate to comments that may be made about them – although there is a feeling of safety (and facelessness) about posting online, versus real-world practices.

An expert interview with Amy Hasinoff raised the issue of expectations of privacy – most of those sexting expect their image to remain private to the recipient. Intimate information shared through technology becomes tangled with the surveillance culture bound up with mobile technologies – smartphones have cameras, microphones… This contributes to a way of imagining the self that is formed only by how we present ourselves online.

Sexting online continues; Butler notes the freedom of expression online, but also the way in which others’ comments make a real impact on the lives of those sharing.

In conclusion it is not clear the extent to which digital natives are sharing deliberately – perceptions seemed to change as a result of the experience encountered. One of my participants felt less in control after reflective interviews about her practice, than she had before. We demand communication instantly… But this form of sharing enables emotional reliving of the experience.

Q&A

Q1) Really interesting research. Do you have any insights in why no men wanted to take part?

A1) The first thing is that I didn’t want to interview anyone that I knew. When I did the research I was a student, I managed to find fellow student participants but the male participants cancelled… But I have learned a lot about research since I undertook my evidence gathering. Women were happy to talk about – perhaps because they felt judged online. There is a lot I’d do differently in terms of the methodology now.

Q2) What is the psychological rationale for sharing details like the sex selfies… Or even what they are eating. Why is that relevant for these people?

A2) I think that the reason for posting such explicit sexual images was to reinforce their heterosexual relationships, and that they are part of the norm, as part of their identity online. They want others to know what they are doing… as their identity online. But we don’t know if they have that identity offline. When I interviewed Amy Hasinoff she suggested it’s a “faceless identity” – that we adopt a mask online, and feel able to say something really explicit…

A Social Network Game for Encouraging Girls to Engage in ICT and Entrepreneurship: Findings of the Project MIT-MUT
– Natalie Denk, Alexander Pfeiffer and Thomas Wernbacher, Donau Universität Krems, Ulli Rohsner, MAKAM Research GmbH, Wien, Austria, and Bernhard Ertl, Universität der Bundeswehr, Munich, Germany

This work is based on a mixture of literature review, qualitative analysis of interviews with students and teachers, and the development of the MIT-MUT game, with input and reflection from students and teachers. We are testing the game, and will be sharing it with schools in Austria later this year.

Our intent was to broaden the career perspectives of girls aged 12-14 – younger than is usually targeted, but it is the age at which they have to start making decisions and taking steps in their academic life that will impact on their career. Their decisions are influenced by family, school and peer groups. But the issue is that a lot of girls don’t even see a career in ICT as an option. We want to show them that it is a possibility, to show them the skills they already have, and that this offers a wide range of opportunities and possible career pathways. We also want to provide a route to mentors who are role models, as this is still a male-dominated field, especially when it comes to entrepreneurship.

Children and young people today grow up as “digital natives” (Prensky 2001) (again, my usual critical caveat), they have a strong affinity towards digital media, they frequently use internet, they use social media networks – primarily WhatsApp, but also Facebook and Instagram. Girls also play games – it’s not just boys that enjoy online gaming – and they do that on their phones. So we wanted to bring this all together.

The MIT-MUT game takes the form of a 7-week-long live challenge. We piloted this in Oct/Nov 2015 with 6 schools and 65 active players in 17 teams. The main tasks in the game essentially involve role-playing ICT entrepreneurship… founding small start-up companies, creating a company logo, and finding an idea for an app for a target group of young people. They then needed to turn their idea into a paper prototype – drawing screens and ideas on paper to demonstrate basic functionality. The girls had to make a video of this paper prototype, and also present their company on video. We deliberately put few technological barriers in place, but the focus was on technology, and on the creative aspects of ICT. We wanted the girls to use their skills, to try different roles, and to have the opportunity to experiment and be creative.

To bring the schools and the project team together we needed a central connecting point… We set up an ESN (Enterprise Social Network), and we did that with Gemma – a Microsoft social networking tool for use within companies, closed to outside organisations. This was very important for us, given the young age of our target user group and the need for safety. They had many of the risks and opportunities of any social network, but in this safe, bounded space. And, to make this more interesting for the girls, we created a fictional mentor character, “Rachel Lovelace” (named for Ada Lovelace), a Silicon Valley entrepreneur coming to Austria to invest. The students see a video introduction – we had an actress record about 15 video messages. So everything from the team came through the character of Rachel, whether on video or in her network.

A social network like Gemma is perfect for gamification aspects – we did have winners and prizes – but we also had achievements throughout the challenge for finishing a phase, making a key contribution, etc. And of course there is a “like” button, the ability to share or praise someone in the space, etc. We also created some mini-games, based on favourite genres of the girls – the main goal of these was to act as a starting point for discussing competencies in ICT and entrepreneurship contexts. The idea is that if you can play this game, you have these competencies, so why not consider doing more with that?

So, within Gemma, the interface looks a lot like Facebook… And I’ll show you one of these paper prototypes in action (it’s very nicely done!); you can see all of the winning videos at: http://www.mitmut.at/?page_id=940.

To evaluate this work we had a quantitative approach – presented as part of the game by Rachel – as well as a qualitative approach based on feedback from teachers and some parents. We had 65 girls in 17 teams; 78% completed the challenge at least to phase 4 (the video presentation – all the main tasks completed), and 26% participated in the voting phase (phase 5). Of our participants, 30 girls would recommend the game to others, 10 were uncertain, and 4 would not recommend it. They enjoyed the creativity, the design, and the paper prototyping. They didn’t like the information/the way the game was structured. The communication within the game was rated in quite a mixed way – some didn’t like it, some liked it. The girls interested in ICT rated the structure and communication more highly than others. The girls stayed motivated, but didn’t like the long timeline of the game. And we saw a significant increase in knowledge of ICT professions; they reported an increased feeling of being talented, and a higher estimation of their own presentation skills.

In the qualitative approach students commented on the teamwork, the independence, the organisational skills, the presentation capabilities. They liked having a steady contact person (the Rachel Lovelace character), the chance of winning, and the feeling of being part of a specialist project.

So now we have a beta version; we have added a scoring system for contributions, with points and stars. We had a voting process but didn’t punish girls for not delivering on time – we wanted to be very open… But the girls thought that we should have done this, and given more objective, stricter feedback. They also wanted more honest and less enthusiastic feedback from “Rachel” – they felt she was too enthusiastic. We also restructured the information a bit…

For future development we’d like to make a parallel programme for boys – the girls appreciated the single-sex nature of the network. And I would personally really like to develop a custom-made social media network, for better gamification integration, etc…

Q&A

Q1) I was interested that you didn’t bring in direct technical skills – coding, e.g. on Raspberry Pis etc. Why was that?

A1) We intentionally skipped the programming part… They already have lessons and work on programming as part of their informatics teaching. What is lacking is the idea of creative ways to use ICT, and the logical and strategic skills you would need.

Q2) You set this up because girls and women are less attracted to ICT careers… But what is the reason?

A2) I think they can’t imagine having a career in ICT… I think that is mainly about gender stereotypes. They don’t really know women in ICT… They can’t imagine what that is as a career, what it means, what that career looks like, or how to act out their interests there…

And with that I’ve switched to the Education track for the final part of this session… 

Social Media and Theatre Pedagogy for the 21C: Arts-Based Inquiry in Drama Education – Amy Roberts and Wendy Barber, University of Ontario, Canada

Amy is starting her presentation with a video on social media and performance pedagogy, the blurring of boundaries and direct connection that it affords. The video notes that “We have become a Dramaturgical Community” and that we decide how we present ourselves.

Theatre does not exist without the audience, and theatre pedagogy exists at the intersection between performance and audience. Cue another video – this time more of a co-presentation video – on the experience of the audience being watched… Blau in The Audience (1990) talks about the audience “not so much as a mere congregation of people as a body of thought and desire”.  Being an audience member is now a standard part of everyday life – through YouTube, Twitter, Facebook, Vine… We see ourselves every day. The song “Digital Witness” by Saint Vincent sums this up pretty well.


Richard Allen in 2013 asked whether audiences actually want conclusive endings in their theatre, showing instead a preference for more open-ended, videogame-type experiences. When considering what modern audiences want… liveness is prioritised in all areas of life, and that does speak to the immediacy of theatre. Originally “live” was about co-presence, but digital spaces are changing that. The feeling of liveness comes from our engagement with technology – if we engage with machines as we do with humans, and there is a response, then that feels live and immediate. Real-time experience gives a feeling of liveness… One way to integrate that with theatre is through direct digital engagement across the audience, and with the performance. Both Baker and Auslander agree that liveness is about immediate human contact.

The audience is demanding live work that engages them in its creation and consumption through the social media spaces they use all the time. And that means educators have to be part of connecting the need for art and tech… So I want to share some of my experiences in attempting “drama tech” research. I’m calling this: “Publicly funded school board presents… Much Ado About Nothing”. I had been teaching dramatic arts for many years, looking at new technologies and the potential for new tools to enable students to produce “web theatre” around the “theatre of the oppressed” for their peers, with collaboration with the audience as creator and viewer. I was curious to see how students would use the 6-second restriction of Vine, and whether, using familiar tools, students could create work in forms familiar to them.

The project had ethics approval… All was set, but a board member blocked the project as Twitter and Vine “are not approved learning tools”… I was told I’d have to use Moodle… Now, I’ve used Moodle before… and it’s great, but NOT for theatre (see Nicholls and Phillip 2012). Eisner (2009) says “Education can learn from the arts that form and content cannot be separated. How something is said or done shapes the content of experience.” The reason for this blocking was the potential that students might encounter risks and content that they shouldn’t access… But surely that is true of television, of life, of everything. We have to teach students to manage risks… Instead we have a culture of blocking content, e.g. anything with “games” in the name – even educational tools. How can you teach media literacy if you don’t have the support to do that, to open things up? And this seems to be the case across publicly funded Ontario schools. I am still hoping to do this research in the future though…

Q&A

Q1) How do you plan to overcome those concerns?

A1) I’m trying to work with those in power… We had loads of safeguards in place… I was going to upload the content myself… It was really silly. The social media policy is just so strict.

Q1) They’ll have reasons, you have to engage with those to make that case…

Q2) Can I just ask what age this work was to take place with?

A2) I work with Grade 9-12… But this work specifically was going to focus on 17 and 18 year olds.

Q3) I think that many arts teachers are quite scared by technology – and you made that case well. You focus on technology as a key tool at the end there… And that has to be part of that argument.

A3) It’s both… You don’t teach the hammer, you teach how to use the hammer… My presentation is part of a much bigger paper which addresses both the traditional approaches and the affordances of technology.

Having had a lovely chat with Amy over lunch, I have now joined Stream B – Monitoring and Privacy on Social media – Chair – Andree Roy

Monitoring Public Opinion by Measuring the Sentiment of Re-tweets on Twitter – Intzar Ali Lashari and Uffe Kock Wiil, University of Southern Denmark, Denmark

I have just completed my PhD at the University of Southern Denmark, and I’ll be talking about some work I’ve been doing on measuring public opinion using social media. I have used Twitter to collect data – partly because Twitter is the most readily accessible and is structured in a way that suits this type of analysis: it operates in real time, people use hashtags, and there are frequent actors and influencers in this space. And there are lots of tools available for analysis, such as TweetReach, Google Analytics and Cytoscape. My project, CBTA, combines monitoring and analysis of tweets…

I have been looking at detection of geographically located tweets, using a trend-based data analyser, with data collection for a specific date and network detection on negative comments. I also limited my analysis to tweets which have been retweeted – to show they have some impact. In terms of related studies supporting this approach: Stieglitz (2012) found that retweeting is a simple, powerful mechanism for information diffusion; Shen (2015) found that re-tweeting is an influencing behaviour triggered by the post of an influential user. The sentiment analysis – a really useful quick assessment of content – labels content as “positive”, “negative” or “neutral”. I then used topic-based monitoring to get an overview of wider public opinion. The intent was to move towards real-time monitoring and analysis capabilities.

So, the CBTA tool display shows you trending topics, which you can pick from, and then you can view tweets and filter by positive, negative, or neutral posts. The tool is working and the code will be shared shortly. In this system a keyword search collects tweets, which are then filtered. Once filtered (for spam etc.), tweets are classified using NLTK, which categorises them into “Endorse RT”, “Oppose RT” and “Report RT”; the weighted retweets are then put through a process to compute net influence.
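
The talk doesn’t give the classifier’s internals, so here is a deliberately simplified, lexicon-based sketch of the idea – sentiment deciding whether a retweet endorses, opposes, or merely reports (the word lists and the mapping are illustrative assumptions, not the CBTA implementation):

```python
# Illustrative mapping of retweet sentiment to the paper's three categories.
POSITIVE = {"great", "support", "proud", "success", "win"}
NEGATIVE = {"bad", "fail", "against", "condemn", "wrong"}

def classify_retweet(text):
    """Score a retweet with a toy lexicon and map it to Endorse/Oppose/Report."""
    words = {w.strip("#@.,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "Endorse RT"
    if score < 0:
        return "Oppose RT"
    return "Report RT"  # neutral: simply passing information on
```

A trained classifier (NLTK’s, in the talk) replaces the toy lexicon, but the three-way output is the same shape.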

So my work has looked at data from Pakistan around the terms Zarb-e-Azb, #OpZarbeAzb, #Zerb-e-asb etc. I gathered tweets and retweets, and deduplicated those tweets carrying more than one of the hashtags. Once collected, the algorithm for measuring re-tweet influence used follower counts, onward retweets etc. And looking at the influence here, most of the influential tweets were those with a positive/endorsing tone.
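
The exact influence formula isn’t spelled out in the talk; a plausible toy version, weighting each retweet by the (log-scaled) follower count of the retweeter and aggregating per category, might look like this (the weighting scheme is my assumption):

```python
import math

def retweet_influence(retweets):
    """Aggregate a weighted influence score per retweet category.

    Each retweet is a dict like {"category": "Endorse RT", "followers": 999}.
    Log-scaling the follower count stops a handful of very large accounts
    from completely drowning out everyone else.
    """
    totals = {"Endorse RT": 0.0, "Oppose RT": 0.0, "Report RT": 0.0}
    for rt in retweets:
        weight = 1 + math.log10(1 + rt["followers"])
        totals[rt["category"]] += weight
    return totals
```

On this weighting a retweet from an account with 999 followers scores 4.0 while one from a zero-follower account scores 1.0, so an endorsing majority among influential accounts shows up directly in the totals – matching the observation above that the influential tweets were mostly endorsing.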

We now have case studies for Twitter, but also for other social media sites, and we will be making these available online. We are also looking at other factors; for instance we are interested in the location of tweets as a marker of accuracy/authenticity, and to understand how different areas influence and are influenced by global events.

Q&A

Q1) I have a question about the small amount of negative sentiment… What about sarcasm?

A1) When you look at data you will see I found many things… There was some sarcasm there… I have used NLTK but I added my own analysis to help deal with that.

Q2) So it registers all tweets right across Twitter? So can you store that data and re-parse it again if you change the sentiment analysis?

A2) Yes, I can reprocess it. In Twitter there is limited availability of Tweets for 7 days only so my work captures a bigger pool of tweets that can then be analysed.

Q3) Do you look at confidence scores here? Sentiment is one thing…

A3) Yes, this processing needs some human input to train it… But in this approach it is trained by data that is collected each week.

Social Media and the European Fundamental Rights to Privacy and Data Protection – Eva Beyvers, University of Passau, and Tilman Herbrich, University of Leipzig, Germany

Tilman: Today we will be talking about data protection, particularly the potential use of data in commercial contexts such as marketing. This is a real area of conflict in social media. We are going to talk about the fundamental rights to privacy and data protection in the EU, their interaction with other fundamental rights, and things like profiling. The Treaties and the Charter of Fundamental Rights (CFR) are primary EU law. There is also secondary law, including Directives (which require transposition into national law, and are not binding until then) and Regulations (binding in their entirety on all member states – they are automatically law in every member state).

In 2018 the GDPR will become directly applicable across the piece; private entities and public bodies will all be impacted. But how does one enforce these rights? An individual could institute a proceeding before a national court; the national court must then refer questions of EU law to the European Court of Justice, which will answer and provide clarifications, enabling the national court to take a judgement on the specific case at hand.

When we look across the stakeholders, we see that they all have different rights under the law, and that means there is a requirement to balance those rights. The European Court of Justice (ECJ) has always held that the concerned rights and interests must be considered, evaluated and weighed in order to find an adequate balance between colliding fundamental rights – for example the Google Spain data protection case, where Google’s commercial rights were deemed secondary to the individual’s right to privacy.

Eva: Most social media sites are free to use, but this is made possible by highly profiled advertising. Profiling is articulated in Article 4 of the GDPR as including aspects of behaviour, personality, etc. Profiling is already seen as a threat to data protection. We would argue that on social media it poses an even greater threat: users are frequently comfortable giving their real name in order to find others, which means they are easily identifiable; users’ private lives are explicitly part of the individual’s profile and may include sensitive data; and this broad and comprehensive data set has very wide scope.

So, on the one hand the user's individual privacy is threatened, but so is the freedom to conduct a business (Art 16 CFR). The right to data protection (Article 8 CFR) rests on the idea of consent – and the way that consent is articulated in the law – that consent must be freely given, informed and specific – is incompatible with social networking services and the heavy level of data processing associated with them. These spaces adopt excessive processing, there is dynamic evolution of these platforms, and their concept is networking. Providers' changes to the platform, affordances, advertising, etc. create continual changes in the use and collection of data – at odds with the specific requirements for consent. The concept of networking means that individuals manage information that is not just about themselves but also about others – their image, their location, etc. European Data Protection law does nothing to accommodate the privacy of others in this way. There has been no specific ruling on the interaction of business and personal rights here, but given previous trends it seems likely that business will win.

These data collections by social networking sites also have commercialisation potential to exploit users' data. It is not clear how this will evolve – perhaps through greater national law or the changing of terms and conditions?

This is a real tension, with rights of businesses on one side, the individual on the other. The European legislator has upheld fundamental data protection law, but there is still much to examine here. We wanted to give you an overview of relevant concepts and rights in social media contexts and we hope that we’ve done this.

Q&A

Q1) How do these things change when most social media companies sit outwith Europe's legislative jurisdiction – they are global?
A1) The General Data Protection Regulation, from 2018, will target companies that profile people in the EU, wherever the company is based. It was unclear until now… Previously you had to have a company here in Europe (usually Ireland), but in 2018 it will be very clear and very strict.

Q2) How has the European Court of Human Rights fared so far in judgements?

A2) In the Google Spain case, and in another case on digital rights, the ECJ has upheld personal rights. And we see this also on the storage and retention of data… But the regulation is quite open; right now there are ways to circumvent it.

Q3) What are the consequences of non-compliance? Maybe the profit I make is greater than that risk?

A3) That has been an issue until now. Fines have been small. From 2018 it will be up to 5% of worldwide revenue – that’s a serious fine!

Q4) Is the law stronger than private agreement? Many agree without reading, or without understanding. Are they protected if they agree to something illegal?

A4) Of course you are able to contract and agree to data use. But you have to be informed… So if you don’t understand, and don’t care… The legislator cannot change this. This is a problem we don’t have an approach for. You have to be informed, have to understand purpose, and understand means and methods, so without that information the consent is invalid.

Q5) There has been this Safe Harbour agreement breakdown. What impact is that having on regulations and practices?

A5) The regulations, probably not? But the effect is that data processing activities cannot be based on Safe Harbour agreement… So companies have to work around or work illegally etc. So now you can choose a Data Protection agreement – standardised contracts to cover this… But that is insecure too.

Digital Friendship on Facebook and Analog Friendship Skills – Panagiotis Kordoutis, Panteion University of Social and Political Sciences, Athens and Evangelia Kourti, University of Athens, Greece

Panagiotis: My colleague and I were keen to look at friendship on Facebook. There is a lot of work on this topic of course, but very little work connecting Facebook and real life friendship from a psychological perspective. But let's start by seeing how Facebook describes itself and friendship… Facebook talk about “building, strengthening and enriching friendships”. Operationally they define friendship through digital “Facebook acts” such as “like”, “comment”, “chat” etc. But this creates a paradox… You can have friends you have never met and will never meet – we call them “unknown friends” and they can have real consequences for life.

People perceive friendship in Facebook in different ways. In Greece (Savrami 2009; Kourti, Kordoutis, Madoglou 2016) young people see Facebook friendship as a “phony” space, due to “unknown friends” and the possibility of manipulating self-presentation; as a tool for popularity, public relations and useful acquaintances; as a doubtful and risky mode of dating; as the resort of people with a limited number of friends and a lack of “real” social life; and as the resort of people who lack friendship skills (Buote, Wood and Pratt 2009). BUT it is widely used and most are happy with their usage…

So, how about psychological definitions of analog friendship? Baron-Cohen and Wheelwright (2003) talk about friendship as survival-supporting social interdependence based on attachment and instrumentality skills.

Attachment involves high interdependence, commitment, systematic support, responsiveness, communication, investment in joint outcomes, high potential for developing the friendship – it is not static but dynamic. It is being satisfied by the interaction with each other, with the company of each other. They are happy to just be with someone else.

Instrumentality is also part of friendship though and it involves low interdependence, low commitment, non-systematic support, low responsiveness, superficial communication, expectations for specific benefits and personal outcomes, little potential for developing the relationship – a more static arrangement. And they are satisfied by interacting with others for a specific goal or activity.

Now the way that I have presented this can perhaps look like the good and the bad side… But we need both sides of that equation, we need both sets of skills. What we perceive as friendship in analog life usually has a prevalence of attachment over instrumentality…

So, why are we looking at this? We wanted to look into whether those common negative attitudes about Facebook and friendship were accurate. Will FB users with low friendship skills have more Fb friends? Will they engage in more Fb “friendship acts”; will they use Fb more intensely; will they have more “unknown” friends than users with stronger friendship skills? And when I say stronger friendship skills, I mean those with more attachment skills versus those with more instrumental skills.

In our method here we had 201 participants, most of whom were women (139), from universities and technological institutes in metropolitan areas of Greece. All had profiles on Fb. Median age was 20, all had used Facebook for 2 hours the day before, and many reported being online at least 8 hours a day, some on a permanent ongoing basis. We asked them how many friends they have… Then we asked them for an estimate of how many they know in person. Then we asked them how many of these friends they have never met or will never meet – they provided an estimation. There were other questions about interactions in Facebook. We used a scale called the Facebook Intensity Scale (Ellison, Steinfield and Lampe 2007) which looks at the importance of Facebook in the person's life (this is a 12-point Likert scale). We also used an Active Digital Sociability Scale which we came up with – this was a 12-point Likert scale on Fb friendship acts etc. And we used a Friendship Questionnaire (Baron-Cohen and Wheelwright 2003). This was a paper exercise, taking less than 30 minutes.
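
The grouping step described here – scoring each respondent's friendship skills and splitting the sample into "stronger" and "weaker" groups – can be sketched roughly as below. This is a hypothetical illustration, not the authors' actual analysis: the field names and the rule (attachment score prevailing over instrumentality score counts as "stronger", per the talk's framing of analog friendship) are my assumptions.

```python
# Hypothetical respondent records: each has a mean attachment score and a
# mean instrumentality score derived from the Friendship Questionnaire items.
respondents = [
    {"id": 1, "attachment": 4.2, "instrumentality": 2.1},
    {"id": 2, "attachment": 2.0, "instrumentality": 3.9},
    {"id": 3, "attachment": 3.8, "instrumentality": 3.0},
]

def skill_group(r):
    # "Stronger" = attachment prevails over instrumentality (assumed rule).
    return "stronger" if r["attachment"] > r["instrumentality"] else "weaker"

groups = {"stronger": [], "weaker": []}
for r in respondents:
    groups[skill_group(r)].append(r["id"])

print(groups)  # {'stronger': [1, 3], 'weaker': [2]}
```

The group sizes reported later (44.3% stronger, 52% weaker) would then just be the lengths of these two lists over the full sample.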

When we looked at stronger and weaker friendship skills groups – we had 44.3% of participants in the stronger friendship skills group, 52% in the weaker friendship skills group. More women had stronger friendship skills – consistent with the general population across countries.

So, firstly, do people with weaker friendship skills have more friends? No, there was no difference. But we found a gender result – men had more friends on Facebook, and also had weaker friendship skills.

Do people with weaker friendship skills engage more frequently in Fb friendship operations or friendship acts? No, no difference. Chatting was most popular; browsing and liking were the most frequent acts regardless of skills. Less frequent were participating in groups, checking in and gaming. BUT a very telling difference: men were more likely to comment than women, and that's significant for me.

Do people with weaker friendship skills use Fb more intensively? Yes and no. There was a difference… but those with stronger friendship skills showed higher Fb intensity, compared to those with weaker friendship skills. Men with stronger skills were more intensive in their use than women with strong skills.

Do people with weaker friendship skills have more friends on Facebook? No. Do they have more unknown friends? No. But there was a gender effect: 16% of men have unknown friends, only 9% of women do. Do those with weaker friendship skills interact more with unknown friends? No, the opposite. Those with stronger skills interact more with unknown friends. And so on.

And do those with weaker friendship skills actually meet unknown friends from Fb in real life? Yes, but opposite to expected. If they have stronger skills I’m more likely to meet you in real life… If I am a man… The percentages are small (3% of men, 1% of women).

So, what do I make of all this? Facebook is not the resort of people with weak friendship skills. Our data suggests it may be an advantageous space for those with higher friendship skills; it is a social space regulated by lots of social norms – it is an extension of what happens in real life. And what is the norm at play? It is the famous idea that men are encouraged to be bold, women to be cautious and apprehensive. Women have stronger social skills, but Facebook and its dynamics suppress them, and enhance men with weaker skills… So, that's my conclusion here!

Q&A

Q1) Very interesting. When men start to see someone they haven’t met before… Wouldn’t it be women? To hit on them?

A1) Actually yes, often it is dating. But men are eager to go on about it… to interact and go on to meet. Women are very cautious. We have complemented this work with qualitative work that shows women need much longer interaction – they need to interact for maybe 3 years before meeting. Men are not so concerned.

Q2) You haven’t talked about quality etc. of your quantitative data?

A2) I haven’t mentioned it here, but it’s in the paper (in the Proceedings). The Friendship questionnaire is based on established work, saw similar distribution ratios as seen elsewhere. We haven’t tried it (but are about to) with those with clinical status, Aspergers, etc. The Facebook Intensity questionnaire had a high reliability alpha.
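
The "high reliability alpha" mentioned for the Facebook Intensity questionnaire is Cronbach's alpha, the standard internal-consistency statistic for Likert-type scales. A minimal sketch of how it is computed (the response matrix below is made up for illustration, not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items in the scale
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up responses: three respondents answering a two-item scale in a
# perfectly consistent way, so alpha comes out at its maximum of 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

Values above roughly 0.7 are conventionally read as acceptable reliability, which is presumably what "high reliability alpha" refers to here.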

Q3) Did you do any comparison of this data with any questions on trolling, cyber bullying, etc. as the consequences for sharing opinion or engaging with strangers for women is usually harsher than for men.

A3) Yes, some came up in the qualitative study where individuals were able to explain their reasons.

Q4) Did your work look at perceptions by employers etc. And how that made a difference to selecting friends?

A4) We didn’t look at this, but others have. Some are keen not to make friends in specific groups – they use Facebook to sell a specific identity to a specific audience.

Q5) The statistics you produced are particularly interesting… What is your theoretical conjecture as a result of this work?

A5) My feeling is that we have to see looking at Facebook as an alternative mode of socialising. It has been normalised so the same social rules functioning in the rest of society do function in Facebook. This was an example. It sounds commonplace but it is important.

The Net Generation’s Perceptions of Digital Activism – Louise Stoch and Sumarie Roodt, University of Cape Town, South Africa

Sumarie: I will be talking about how the Net Generation view digital activism. And the reason this is of interest to me is because of the many examples of digital activism we see around us. I’ll talk a bit about activism in South Africa, and particularly a recent campaign called “Fees Must Fall”.

There are various synonyms for Digital Activism but that's the term I'll use. So what is this? Its origins start with the internet, with connection and mobilisation. We saw the rise of social media and the huge increase in people using it. We saw economies and societies coming online and using these spaces over the last 10 years. What does this mean for us? Well, it enables quick and far-reaching information sharing. And there is a video that goes with this too.

Joyce 2013 defines Digital Activism as being about “the use of digital media in collective efforts to bring about social or political change, using methods outside of routine decision-making processes”. It is non-violent and civil but can involve hacking (Edwards et al. 2013). We see digital activism across a range of approaches: from slacktivism (things that are easy to participate in); online activism; internet activism; cyber activism; to hacktivism. That's a broad range; there are subtleties that divide these and other terms, and different characteristics to these types of activism.

Some examples…

In 2011 we saw revolutions in Egypt, Tunisia, Occupy Wall Street;

2012-14 we saw BringBackOurGirls, and numerous others;

2015 onwards we have:

  • RhodesMustFall – on how Cecil John Rhodes took resources from the indigenous communities, and recent removals of statues etc. and naming of buildings, highly sensitive.
  • FeesMustFall – about providing free education to everybody, particularly at university. Less than 10% of South Africans go to university and they tend to be those from more privileged backgrounds. As a result of the campaign we weren't allowed to raise our fees for now, and we are encouraged to find other funders to subsidise education; we cannot exclude anyone because of lack of economic access. The government will help but… there is a lot of conflict there, particularly around corruption. The government also classifies universities as advantaged or non-advantaged and distributes funds much more to non-advantaged universities.
  • ZumaMustFall – our president is also famous for causing havoc politically and economically for what many see as very poor decisions, particularly under public scrutiny in the last 12 months.

In the room we are having a discussion about other activist activities, including an Israeli campaign against internet censorship law targeted at pornography etc. but including political and cultural aspects. Others mention 38 degrees etc. and successful campaigns to get issues debated. 

Now, digital activism can be on any platform – not necessarily Facebook or Twitter.

When we look at who our students are today – the “Net Generation”, “Millennials”, “Digital Natives” – the characteristics associated with this group (Oblinger and Oblinger) include: confidence with technology, always connected, immediate, social and team orientated, diverse, visual, education driven, emotionally open. But this group isn't homogeneous; not all students will have these qualities.

So, what did we do with our students to assess their views? We looked at 230 students, and targeted those described in the literature: those born in any year from 1983 to 2003, and they needed to have some form of online identit(ies). We had an online questionnaire that ran over 5 days. We analysed with Qualtrics, and thematic analysis. There are limitations here – all students were registered in the Comms department – business etc.

In terms of the demographics: male participants were 38%, female 62%; average age was 22, minimum 17, maximum 33. We asked about the various characteristics using Likert scale questions… showing that all qualify sufficiently to be this “Net Generation”. We asked if they paid attention to digital activism… Most did, but it's not definitive. Now, this is the beginning of a much bigger project…

We asked if the participants had ever signed an online petition – 145 had; and 144 believed online petitions made a difference. We also asked if the internet and social media have a positive effect on an activism campaign – 92% do, and that has huge interest to companies and advertisers. And 89% of participants felt the use of social media in these causes has contributed to creating a society that is more aware of important issues.

What did we learn? Well, we did see that this generation are inclined to participate in slacktivism. They believe digital activism makes a difference. They pay attention to online campaigns and are aware of which ones have been successful – at least in terms of having some form of impact or engagement.

Now, if you’d like access to the surveys, etc. do get in touch.

Q&A

Q1) How does UCT engage with the student body around local activism?

A1) Mostly that has been digitally, with the UCT Facebook page. There were also official statements from the University… But individual staff were discouraged from reacting. But freedom of speech for the students. It increased conflict in some way, but it also made students feel heard. Hard to call which side it fell on. Policy change is being made as a result of this work… They had a chance to be heard. We wanted free speech (unless totally inappropriate).

Q2) I see that you use a lot of “yes” and “no” questions… I like that but did you then also get other data?

A2) Yes. I present that work here. This paper doesn’t show the thematic analysis – we are still working on submitting that somewhere. We have that data, so once the full piece is in a journal we can let you know.

Q3) Do you know any successful campaigns in your context?

A3) Yes, FeesMustFall started in individual universities, and turned then to the government. It actually got quite serious, quite violent, but that definitely has changed their approach. And that campaign continues and will continue for now.

At this point of the day my laptop lost juice, the internet connection dropped, and there was a momentary power outage just as my presentation was about to go ahead! All notes from my strand are therefore from those taken on my mobile – apologies for more typos than usual!

Stream C – Teaching and Supporting Students – Chair – Ted Clark

Students’ Digital Footprints: Curation of Online Presences, Privacy and Peer Support – Nicola Osborne and Louise Connelly,University of Edinburgh, UK

That was me!

My slides are available on Prezi here: https://prezi.com/hpphwg6u-f6b/students-digital-footprints-curation-of-online-presences-privacy-and-peer-support/

The paper can be found in the ECSM 2016 Proceedings, and will also be shared on the University of Edinburgh Research Explorer along with others on the Managing Your Digital Footprint (research strand) research: http://www.research.ed.ac.uk/portal/en/publications/students-digital-footprints(5f3dffda-f1b4-470f-abd4-24fd6081ab98).html

Please note that the remaining notes are very partial as taken on my smartphone and, unfortunately, somewhat eaten by the phone in the process… 

How do you Choose a Friend? Greek Students’ Friendships in Facebook – Evangelia Kourti, University of Athens, and Panagiotis Kordoutis and Anna Madoglou, Panteion University of Social and Political Sciences, Greece

This work, relating to Panagiotis’ paper earlier (see above) looked at how individuals make friends on Facebook. You can find out more about the methodology in this paper and Panagiotis’ paper on Analog and Facebook friends.

We asked our cohort of students to tell us specifically about their criteria for making new friends, whether they were making the approach for friendship or responding to others’ requests. We also wanted to find out how they interacted with people who were not (yet) their friends in Facebook, and what factors played a part. The data was collected in a paper questionnaire with the same cohort as reported in Panagiotis’ paper earlier today.

Criteria for interacting with a friend never met before within Facebook: the most frequent answer was “I never do”, but the next most popular responses were common interests and interest in getting to know others better. Physical appearance seems to play a factor, more so than previous interactions but less so than positive personality traits.

Criteria for deciding to meet a previously unknown friend. Most popular response here was “I never do so”, followed by sufficient previous FB interaction, common acquaintances, positive personality etc. less so.

Correspondence Analysis – I won’t go into here, very interesting in terms of gender. Have a look at the Proceedings. 

Conclusion is that Facebook operates as a social identity tool, and supports offline relationships. Self-involvement with the medium seems to define selection criteria compatible with different social goals, reinforcing one's real-life social network.

Q&A

Q1) I'm very interested in how FB suggests new friends. Did students comment on that?

A1) We didn’t ask about that.

Q2) Isn't your data gender-biased in some way – most of your participants are female?

A2) Yes. But we continue this… With qualitative data it’s a problem, but means and standard deviation cover that. 

Q2) Reasons for sending a request to who you don’t know. First work by Ellison etc. showed people connecting with already known people… I wonder if it is still true? 

A2) Interesting questions. We must say that students answer to their professor in a uni context, that means maybe this is an explanation… 

Comment) Facebook gives you status for numbers and types of friends etc. 

A2) it’s about social identity and identity construction. Many have different presences with different goals. 

Comment) there is a bit of showing off in social. For status. 

Professional Development of Academic Staff in the use of Social Media for Teaching and Learning – Julie Willems, Deakin University, Burwood, Australia

This work has roots in 2012. from then to 2015 I ran classes for staff on using social media. This follows conversations I’ve heard around the place about expecting staff to use social media without training. 

Now I use a very broad definition of social media – from mainstream sites to mobile apps to gaming etc. Media that accesses digital means for communication in various forms. 

Why do we need staff development for social media? To deal with concerns of staff, students move there, also super enthusiasm.. 

My own experience is of colleagues who have run with it, which has raised all sorts of concerns. Some would say that an academic should be doing teaching, research, service and development can end up being the missing leg on the chair there. And staff development is not just about development on social media but also within social media. 

We ran some webinars within Zoom, showing Twitter use with support online, offline and on Twitter – particularly important for a distributed campus like ours.

When we train staff we have to think about the pedagogy, we have to think about learning outcomes. We need to align the course structure with LOs, and also to consider staff workload in how we do that training. What will our modes of delivery be? What types of technology will they meet and use – and what prep/overhead is involved in that? We also need to consider privacy issues. And then how do you fill that time. 

So the handout I’ve shared here was work for one days course, to be delivered in a flipped classroom – prep first, in person, then online follow up. Could be completed quickly but many spent more time on these.

This PPT is from a module I developed for staff at Monash University, with social media at the intersection of formal and informal learning, and the interaction of teacher-directed and student-centred learning. That quadrant model is useful to be aware of: Willem Blakemore(?): 4QF.

Q&A

Q1) What was the uptake among staff at your university?

A1) First three years were optional. This last year Monash require staff to do 3 one day courses per year. One can be a conference with a full report. Social Media is one of 8 options. Wanted to give an encouragement for folk to attend. 

Q2) How many classes use your social media as a result?

A2) I've just moved institution. One of our architecture lecturers was using FB in preference to the LMS: students love it, faculty are concerned. Complex. At my current university social media isn't encouraged but it is used. Regardless of attitude social media is in use… and we at least have to be aware of that.

Q3) I was starting to think that you were encouraging faculty staff to use social media alone, rather than with the LMS.

A3) At Monash reality was using social alongside LMS. That connection discouraged in my new faculty. 

Q4) I loved that you brought up that pressure from teaching staff – so many academics are on social media now, they are much more active, and there's real pressure to integrate.

A4) I think that gap is growing too… between resisters and those keen to use it. Students are aware of what they share – a semi-formal space… Have to be aware.

Q5) Do you have a range of social media tools or just Facebook?

A5) mainly Facebook, sometimes Twitter and Linked In. I’m in engineering and architecture. 

Q5) Are they approved for use by faculty?

A5) Yes, the structure you have there had been. 

Q6) Do you also encourage academic staff to use academic networking sites?

A6) Depends on context. Depends… ResearchGate is good for publications; Academia.edu is like a business card.

Q7) Reward and recognition

A7) Stuff on sheet was for GCAP… Came out of that… 

Q8) Will we still have these requirements to train in, say, 5 years time? Surely they’ll be like pen and pencil now?

A8) Maybe. Universities are keen for good profiles though, which means this stuff matters in this competitive academic marketplace. 

And with that Day One has drawn to a close. I’m off to charge a lot of devices and replace my memory sticks! More tomorrow in a new liveblog post. 

 July 12, 2016  Posted by at 9:22 am  Events Attended, LiveBlogs
Jul 07 2016
 

On 27th June I attended a lunchtime seminar, hosted by the University of Edinburgh Centre for Research in Digital Education, with Professor Catherine Hasse of Aarhus University.

Catherine is opening with a still from Ex Machina (2015, dir. Alex Garland). The title of my talk is the difference between human and posthuman learning. I'll talk for a while but I've moved a bit from my title… My studies in posthuman learning have moved me towards a more posthumanistic learning… Today human beings are capable of many things – we can transform ourselves, and ourselves in our environment. We have to think about that and discuss that, to take account of that in learning.

I come from the Centre for Future Technology, Culture and Learning, Aarhus University, Denmark. We are a hugely interdisciplinary team. We discuss and research what learning is under these new conditions, and consider the implications for education. I'll talk less about education today, more about the type of learning taking place and the ways we can address that.

My own background is in anthropology of education in Denmark, specifically looking at physicists. In 2015 we got a big grant to work on “The Technucation Project” and we looked at the anthropology of education in Denmark in nurses and teachers – and the types of technological literacy they require for their work. My work (in English) has been about “Mattering” – the learning changes that matter to you. The learning theories I am interested in acknowledge cultural differences in learning, something we have to take account of. What it is to be human is already transformed. Posthumanistic learning involves new conceptualisations and material conditions that change what it was to be human. It was, and is, ultra-human to be learners.

So… I have become interested in robots. They are coming into our lives. They are not just tools. Human beings encounter tools that they haven't asked for. You will be aware of predictions that over a third of jobs in the US may be taken over by automated processes and robots in the next 20 years. That comes at the same time as there is pressure on the human body to become different, at the point at which our material conditions are changing very rapidly. A lot of theorists are picking up on this moment of change, and engaging with the idea of what it is to be human – including those in Science and Technology Studies, and feminist critique. Some anthropologists suggest that it is not geography but humans that should shape our conceptions of the world (Anthropos – Anthropocene); others differ and conceive of the capitalocene. When we talk about the posthuman, a lot of the theories acknowledge that we can't think of the human in the same way anymore. Kirksey & Helmreich (2010) talk of “natural-cultural hybrids”, and we see everything from heart valves to sensors, to iris scanning… We are seeing robots, cyborgs, amalgamations, including how our thinking feeds into systems – like the stock markets (especially today!). The human is de-centred in this amalgamation but is still there. And we may yet get to this creature from Ex Machina, the complex sentient robot/cyborg.

We see posthuman learning in the uncanny valley… gradually we will move from robots that feel far away, to those with human tissues, with something more human and blended. The new materialism and robotics together challenge the conception of the human. When we talk of learning we talk about how humans learn, not what follows when bodies are transformed by other (machine) bodies. And here we have to be aware that in feminism people like Rosa Predosi(?) have been happy with the discarding of the human: for them it was always a narrative, it was never really there. The feminist critique is that the “human” was really Vitruvian Man. But they also critique the idea that the posthuman is a continuation of the individual, goal-directed, rational, self-enhancing (white male) human. And that questions the posthuman…

There are actually two ways to think of the posthuman. One way is posthuman learning as something that does away with useless, biological bodies (Kurzweil 2005), and we see transhumanists – Vernor Vinge, Hans Moravec, Natasha Vita-More – in this space that sees us heading towards the singularity. But the alternative is a posthumanistic approach, which is about cultural transformations of boundaries in human-material assemblages, referencing that we have never been isolated human beings; we've always been part of our surroundings. That is another way to see the posthuman. This is a case that I make in an article, drawing on Hayles (1999): that we have always been posthuman. We also have, on the other hand, a Spinozist approach, which is about how we, if we understand ourselves as de-centred, are able to see ourselves as agents. In other words we are not separate from the culture; we are all nature-cultural… not of nature, not of culture, but naturecultural (Hayles; Haraway).

But at the same time, if it is true that human beings can literally shape the crust of the earth, we are now witnessing anthropomorphism on steroids (Latour, 2011 – Waiting for Gaia [PDF]). The Anthropocene perspective is that, if human impact on Earth can be translated into human responsibility for the earth, the concept may help stimulate appropriate societal responses and/or invoke appropriate planetary stewardship (Head 2014); the capitalocene (see Jason Moore) is about moving away from Cartesian dualism in global environmental change; the alternative implies a shift from humanity and nature to humanity in nature – we have to counter capitalism in nature.

So from the human to the posthuman: I have argued that this is a way we can go with our theories… There are two ways to understand that, singularist posthumanism or Spinozist posthumanism. And I think we need to take a posthumanistic stance on learning, taking account of learning in technological naturecultures.

My own take here… We talk about intra-species differentiations. This nature is not nature as resource but rather nature as matrices – a nature that operates not only outside and inside our bodies (from global climate to the microbiome) but also through our bodies, including embodied minds. We do create intra-species differentiation, where learning changes what matters to you and others, and what matters changes learning. To create an ecologically responsible ultra-sociality we need to see ourselves as a species of normative learners in cultural organisations.

From my own experience: after studying physicists as an anthropologist I no longer saw the night sky the same way. Before, there were stars and star constellations. After that work I saw thousands of potential suns – and perhaps planets – and that wasn’t a wider discussion at that time.

I see it as a human thing to be learners. And we are ultra-social learners; that is a characteristic of being human. Collective learning is essentially what has made us culturally diverse. We have learning theories that are relevant for cultural diversity. We have to think of learning in a cultural way, with mediational approaches in collective activity. Vygotsky takes the idea that we are social learners before we become personal learners, and that is about mediation – not natureculture but cultureculture (Moll 2000). That’s my take on it. So, we can re-centre human beings… Humans are not the centre of the universe, or of the environment. But we can be at the centre and think about what we want to be, what we want to become.

I was thinking of coming in with a critique of MOOCs, particularly as a Capitalocene position. But I think we need to think of social learning before we look at individual learning (Vygotsky 1981). And we are always materially based. So, how do we learn to be engaged collectively? What does it matter – for MOOCs for instance – if we each take part from very different environments and contexts, when that environment has a significant impact? We can talk about those environments and what impact they have.

You can buy robots now that can be programmed – essentially sex robots like “Roxxxy” – and that are programmed by reactions to our actions, emotions etc. If we learn from those actions and emotions, we may relearn and be changed in our own actions and emotions. We are seeing a separation of tool-creation from user-demand in the Capitalocene. The introduction of robots in workplaces often does not address the work that workers actually want support with. The seal robots used to calm dementia patients cover a role that many carers actually enjoyed in their work, the human contact and support. But those introducing them spoke of efficiency, the idea being to make employees superfluous, though described as “simply an attempt to remove some of the most demeaning hard tasks from the work with old people so the work time can be used for care and attention” (Hasse 2013).

These alternative relations with machines are things we always react to; humans always stretch themselves to meet the challenge or engagement at hand. An inferentialist approach (Derry 2013) acknowledges many roads to knowledge, but the materiality of thinking reflects that we live in a world not just of causes but of reasons. We don’t live in just a representationalist (Bakker and Derry 2011) paradigm; it is much more complex. The material world will teach us new things… But maybe these machines will encourage us to think we should learn more in a representational than an inferentialist way. We have to challenge the robotic space of reasons. I would recommend Jan Derry’s work on Vygotsky in this area.

For me robot representationalism has the capacity to make convincing representations… You can give and take answers, but you can’t give and take reasons in a space of reasons… Robots cannot reason from a representation. Representational content is not articulated by determinate negation and complex concept formation. Algorithmic learning has potential and limitations, and is based on representationalism, not concept formation. I think we have to take a position on posthumanistic learning: with collectivity as a normative space of reasons; acknowledging mattering matter in concept formation; acknowledging human inferentialism; acknowledging transformation in environment…

Discussion/Q&A

Q1) Can I ask about causes and reasons… My background is psychology and I could argue that we are more automated than we think we are, that reasons come later…

A1) Inferentialism is challenging the idea of giving and taking reasons as part of a normative space. It’s not “anything goes”… It’s narrowing it down: humans come into being, in terms of learning and thinking, in a normative space that is already there. Wilfrid Sellars says there is no “bare given” – we are in a normative space, it’s not nature doing this… I have some problems with the term dialectical… But it is a kind of dialectical process. If you give and take reasons, it’s not anything goes. I think Jan Derry has a better phrasing for this. But that is the basic sense. And it comes for me from analytical philosophy – which I’m not a huge fan of – but they are asking important questions about what it is to be human, and what it is to learn.

Q2) Interesting to hear you talk about Jan Derry. She talks about technology perhaps obscuring some of the reasoning process and I was wondering how representational things fitted in?

A2) Not in the book I mentioned, but she has been working on this type of area at the University of London. It is part of the idea of not needing to learn representational knowledge, which is built into technological systems; for inferentialism we need really good teachers. She has examples about learning about the bible: she followed a school class… who look at the bible, understand the ten commandments, and are then asked to write their own ten commandments on whatever topic… That’s very narrow reasoning… It is engaging but it is limited.

Q3) An ethics issue… If we could devise robots or machines, AI, that could think inferentially, should we?

A3) A challenge for me – we don’t have enough technical people. My understanding is that it’s virtually impossible to do that. You have claims, but the capacities of AI systems so far are so limited in terms of function. I think that “theory of mind” is so problematic. Such claims deteriorate what it means to be human, and narrow what it means to be our species. I think algorithmic learning is representational… I may be wrong though… If we can… there are political issues. Why make machines that are one-to-one with human beings? Maybe to be slaves, to do dirty work. If they can think inferentially, should they not have ethical rights? In a Spinozist approach we have a responsibility to think about those ethical issues.

Q4) You use the word robot, and that term is being used for something very embodied and physical… But algorithmic agency is much less embodied and much less visible – you mentioned the stock market – so how does that fit in?

A4) In a way robots are a novelty, a way to demonstrate that. A chatbot is also a robot; robot covers a lot of automated processes. One of the things that came out of AI at one point was that AI couldn’t learn without bodies… that for deep learning there needs to be some sort of bodily engagement, to make bodily mistakes. But in encounters like Roxxxy and others they become very much better… As humans we stretch to engage with these robots… We take an answer for an answer, not just an algorithm, and that might change how we learn.

Q4) So the robot is a point of engagement for machine learning… A provocation.

A4) I think roboticists see this as an easy way to make this happen. But everything happens so quickly… Chips in bodies etc. But we can also have robots moving in space, engaging with chips.

Q5) Is there something here about artificial life, rather than artificial intelligence – that the robot provokes that…

A5) That is what a lot of roboticists work at, trying to create artificial life… There is a lot of work we haven’t seen yet. They are working on learning algorithms in computer programming now that evolve with the process, a form of artificial life. They hope to create robots that, if they malfunction, can self-repair so that the next generation is better. What we asked at a conference in Prague recently, with roboticists, was “what do you mean by better?” and they simply couldn’t answer that, which was really interesting… I do think they are working on artificial life as well. And maybe there is too little connection between those of us in education and those that create these things.

Q6) I was approached by robotics folks about teaching robots to learn drawing with charcoal, largely because the robotic hand had enough sensitivity to do something quite complex – to teach charcoal drawing and representation… The teacher gesticulates, uses metaphor, describes things… I teach drawing and representational drawing… There is no right answer there, which is tough for robotics… What is the equivalent cyborg/dual space in learning? Drawing tools are cyborg-esque in terms of digital and drawing tools… But also that idea of culture… You can manipulate tools, with awareness of function and then the hack, and the complexity of that hack… I suppose lots of things were ringing true but I couldn’t quite stick them into what I’m trying to get at…

A6) Some of this is maybe tied to human enhancement theory(?) – the idea of a perfect cyborg drawing?

Q6) No, they were interested in improving computer learning, and language, but for me… the idea of human creativity and hacking… You could pack a robot with the history of art, and representation, so much information… It could do a lot… But is that better art? Or better design? A conversation we have to have!

A6) I tend to look at the dark side of the coin in a way… Not because I am a techno-determinist… I do love gadgets, technology enhances our life, we can be playful… But in the Capitalocene… there is much more focus on this. The creative side of technology is what many people are working on… Fantastic things are coming up, crossovers in art… New things can be created… What I see in nursing and teaching-learning contexts is how people avoid engaging… So lifting robots are here, but nursing staff aren’t trained properly and they avoid them… Creativity goes many ways… I’m seeing this from quite a particular position, and that is partly a position of warning. These technologies may be creative and they may then make us less and less creative… That’s a question we have to ask. Physicists, who have to be creative, are always so tied to the materiality, the machines and technologies in their working environments. I’ve also seen some of these drawing programmes… It is amazing what you can draw with these tools… But you need purpose, awareness of what those changes mean… Tools are never innocent. We have to analyse what tools are doing to us.