Oct 6, 2016

Today I am again at the Association of Internet Researchers (AoIR) 2016 Conference in Berlin. Yesterday we had workshops; today the conference kicks off properly. Follow the tweets at #aoir2016.

As usual this is a liveblog so all comments and corrections are very much welcomed. 

PA-02 Platform Studies: The Rules of Engagement (Chair: Jean Burgess, QUT)

How affordances arise through relations between platforms, their different types of users, and what they do to the technology – Taina Bucher (University of Copenhagen) and Anne Helmond (University of Amsterdam)

Taina: Hearts on Twitter: In 2015 Twitter moved from stars to hearts, changing the affordances of the platform. They stated that they wanted to make the platform more accessible to new users, but that impacted on existing users.

Today we are going to talk about conceptualising affordances. In its original meaning, an affordance is conceived of as a relational property (Gibson). For Norman, perceived affordances were more the concern – thinking about how objects can exhibit or constrain particular actions. Affordances are not just visual clues or possibilities, but can be felt. Gaver talks about these technology affordances. There are also social affordances – talked about by many – mainly about how poor technological affordances impact societies. It is mainly about the impact of technology and how it can contain and constrain sociality. And finally we have communicative affordances (Hutchby): how technological affordances impact on communities and communicative practices.

So, what about platform changes? If we think about design affordances, we can see that there are different ways to understand this. The official reason given for the design change was about the audience: affording sociality of community and practices.

Affordances continue to play an important role in media and social media research. They tend to be conceptualised as either high-level or low-level affordances, with ontological and epistemological differences:

  • High: affordance in the relation – actions enabled or constrained
  • Low: affordance in the technical features of the user interface – reference to Gibson but they vary in where and when affordances are seen, and what features are supposed to enable or constrain.

Anne: We now want to turn to a platform-sensitive approach, expanding the notion of the user –> different types of platform users: end-users, developers, researchers and advertisers – there is a real diversity of users and user needs and experiences here (see Gillespie on platforms). So, in the case of Twitter there are many users and many agendas – and multiple interfaces. Platforms are dynamic environments – and that differentiates social media platforms from the environments Gibson described. Computational systems driving media platforms are different: social media platforms adjust interfaces to their users through personalisation, A/B testing, and algorithmic organisation (e.g. Twitter recommending people to follow based on interests and actions).

In order to take a relational view of affordances, and do that justice, we also need to understand what users afford to the platforms – as they contribute, create content, and provide data that enables use, development and income (through advertisers) for the platform. Returning to Twitter… The platform affords different things for different people.

Taking medium-specificity of platforms into account we can revisit earlier conceptions of affordance and critically analyse how they may be employed or translated to platform environments. Platform users are diverse and multiple, and relationships are multidirectional, with users contributing back to the platform. And those different users have different agendas around affordances – and in our Twitter case study, for instance, that includes developers and advertisers, users who are interested in affordances to measure user engagement.

How the social media APIs that scholars so often use for research are – for commercial reasons – skewed positively toward ‘connection’ and thus make it difficult to understand practices of ‘disconnection’ – Nicholas John (Hebrew University of Jerusalem) and Asaf Nissenbaum (Hebrew University of Jerusalem)

Consider this… On Facebook, if you add someone as a friend they are notified; if you unfriend them, they are not. If you post something you see it in your feed; if you delete it, the deletion is not broadcast. Facebook has a page called World of Friends – it doesn’t have one called World of Enemies. And Facebook does not take kindly to app creators who seek to surface unfriending and removal of content. Facebook is, like other social media platforms, therefore significantly biased towards positive friending and sharing actions. And that has implications for norms and for our research in these spaces.

One of our key questions here is: what can’t we know about these spaces?

Agnotology is defined as the study of ignorance. Robert Proctor talks about this in three terms: native state – childhood, for instance; strategic ploy – e.g. the tobacco industry on health for years; lost realm – the knowledge that we cease to hold, that we lose.

I won’t go into detail on critiques of APIs for social science research, but as an overview the main critiques are:

  1. APIs are restrictive – they can cost money, we are limited to a percentage of the whole – Burgess and Bruns 2015; Bucher 2013; Bruns 2013; Driscoll and Walker
  2. APIs are opaque
  3. APIs can change with little notice (and do)
  4. Omitted data – Baym 2013 – now our point is that these platforms collect this data but do not share it.
  5. Bias to present – boyd and Crawford 2012

Asaf: Our methodology was to look at some of the most popular social media spaces and their APIs. We were looking at connectivity in these spaces – liking, sharing, etc. And we also looked for the opposite traits – unliking, deletion, etc. We found that social media had very little data, if any, on “negative” traits – and we’ll look at this across three areas: other people and their content; me and my content; commercial users and their crowds.

Other people and their content – APIs tend to supply basic connectivity – friends/following, grouping, likes. Almost no historical content – except Facebook which shares when a user has liked a page. Current state only – disconnections are not accounted for. There is a reason to not know this data – privacy concerns perhaps – but that doesn’t explain my not being able to find this sort of information about my own profile.
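
A methodological aside: because the APIs return current state only, researchers who want disconnection data typically have to reconstruct it themselves by snapshotting connection lists over time and diffing them. A minimal sketch of that workaround (toy data; real snapshots would come from repeated calls to a followers endpoint):

```python
# Diff two snapshots of a follower list to infer the disconnections
# that the API itself never reports. Illustrative data only.
yesterday = {"alice", "bob", "carol"}
today = {"alice", "carol", "dave"}

unfollowed = yesterday - today      # inferred unfollows
new_followers = today - yesterday   # inferred new connections

print(sorted(unfollowed))     # ['bob']
print(sorted(new_followers))  # ['dave']
```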

Me and my content – negative traits and actions are hidden even from ourselves. Success is measured – likes and sharing, of you or by you. Decline is not – disconnections are lost connections… except on Twitter, where you can see analytics of followers – but no names there, and not in the API. So we are losing who we once were but are not anymore. Social network sites do not see fit to share information over time… Lacking disconnection data is an ideological and commercial issue.

Commercial users and their crowds – these users can see much more of their histories, and of the negative actions online. They have a different regime of access in many cases, with the ups and downs revealed – though you may need to pay for access. Negative feedback receives special attention. Facebook offers the most detailed information on usage – including blocking and unliking information. Customers know more than users do, as do Pages versus Groups.

Nicholas: So, implications. What Asaf has shared shows the risk for API-based research, where researchers’ work may be shaped by the affordances of the API being used. Any attempt to capture negative actions – unlikes, choices to leave or unfriend – is frustrated. If we can’t use APIs to measure social media phenomena, we have to use other means. So, unfriending is understood through surveys – time consuming and problematic. And that can put you off exploring these spaces – it limits research. The advertiser-friendly user experience distorts the space – it’s like the stock market only reporting the rises, except for a few super wealthy users who get the full picture.

A biography of Twitter (a story told through the intertwined stories of its key features and the social norms that give them meaning, drawing on archival material and oral history interviews with users) – Jean Burgess (Queensland University of Technology) and Nancy Baym (Microsoft Research)

I want to start by talking about what I mean by platforms, and what I mean by biographies. Here platforms are these social media platforms that afford particular possibilities; they enable and shape society – we heard about the platformisation of society last night – but their governance and affordances are shaped by their own economic existence. They are shaping and mediating socio-cultural experience, and we need to better understand the values and socio-cultural concerns of the platforms. By platform studies we mean treating social media platforms as spaces to study in their own right: as institutions, as mediating forces in the environment.

So, why “biography” here? First, we argue that whilst biographical forms tend to be reserved for individuals (occasionally companies and racehorses), they are about putting the subject in the context of relationships and place in time, and that context shapes the subject. Biographies are always partial though – based on unreliable interviews and information, they quickly go out of date; and just as we cannot get inside the heads of those who are the subjects of biographies, we cannot get inside many of the companies at the heart of social media platforms. But (after Richard Rogers) understanding changes helps us to understand the platform.

So, in our forthcoming book, Twitter: A Biography (NYU 2017), we will look at competing and converging desires around e.g. the @, RT, #. Twitter’s key features are key characters in its biography. Each has been a rich site of competing cultures and norms. We drew extensively on the Internet Archive, bloggers, and interviews with a range of users of the platform.

Nancy: When we interviewed people we downloaded their archive with them and talked through their behaviour and how it had changed – and many of those features and changes emerged from that. What came out strongly is that no one knows what Twitter is for – not just amongst users but also amongst the creators – you see that today with Jack Dorsey and Anne Richards. The heart of this issue is whether Twitter is about sociality and fun, or a very important site for sharing important news and events. Users try to negotiate why they need this space, what it is for… They start squabbling, saying “Twitter, you are doing it wrong!”… Changes come with backlash and response, and changed decisions from Twitter… But that is also accompanied by the media coverage of Twitter, and by the third party platforms built on Twitter.

So the “@” is at the heart of Twitter for sociality and Twitter for information distribution. It was imported from other spaces – IRC most obviously – as with other features. One of the earliest things Twitter incorporated was the @ and the links back… Originally you could see everyone’s @ replies, and that led to feed clutter – although some liked seeing unexpected messages like this. So Twitter made a change so you could choose. And then they changed again so that you automatically did not see replies to accounts you don’t follow. So people worked around that with “.@” – which created conflict between the needs of the users, the ways they make the platform usable, and the way the platform wants to make the space less confusing to new users.

The “RT” gave credit to people for their words, and preserved the integrity of those words. At first this wasn’t a feature, and so you had huge variance – the RT, the manually spelled out retweet, the hat tip (HT). Technical changes were made, and then you saw the number of retweets emerging as a measure of success, changing cultures and practices.

The “#” is hugely disputed – it emerged through hashtags.org: you couldn’t follow hashtags in Twitter at first, but Twitter incorporated them to fend off third party tools. They are beloved by techies, and hated by user experience designers. And they are useful, but they are also easily co-opted by trolls – as we’ve seen on our own hashtag.

Insights into the actual uses to which audience data analytics are put by content creators in the new screen ecology (and the limitations of these analytics) – Stuart Cunningham (QUT) and David Craig (USC Annenberg School for Communication and Journalism)

Algorithmic culture is well understood as a part of our culture. There are around 150 items on Tarleton Gillespie and Nick Seaver’s recent reading list, and the literature is growing rapidly. We want to bring back a bounded sense of agency in the context of online creatives.

What do I mean by “online creatives”? Well we are looking at social media entertainment – a “new screen ecology” (Cunningham and Silver 2013; 2015) shaped by new online creatives who are professionalising and monetising on platforms like YouTube, as opposed to professional spaces, e.g. Netflix. YouTube has more than 1 billion users, with revenue in 2015 estimated at $4 billion per year. And there are a large number of online creatives earning significant incomes from their content in these spaces.

Previously online creatives were bound up with ideas of democratic participative cultures, but we want to offer an immanent critique of the limits of data analytics/algorithmic culture in shaping SME from within the industry, on both the creator (bottom up) and platform (top down) sides. This is an approach to social criticism that exposes the way reality conflicts not with some “transcendent” concept of rationality but with its own avowed norms, drawing on Foucault’s work on power and domination.

We undertook a large number of interviews, and from those I’m going to throw some quotes at you… There is talk of information overload – of what one might do as an online creative presented with a wealth of data. Creatives talk about “non-scalable practices” – the importance of, and time required for, engaging with fans and subscribers. Creatives talk about at least half of a working week being spent on high touch work like responding to comments, managing trolls, and dealing with challenging responses (especially for creators whose kids are engaged in their content).

We also see cross-platform engagement – and an associated major scaling in workload. There is a volume issue on Facebook, and the use of Twitter to manage that. There is also a sense of unintended consequences – scale has destroyed value. Income might be $1 or $2 for 100,000s or millions of views. There are inherent limits to algorithmic culture… But people enjoy being part of it and reflect a real entrepreneurial culture.

In one or two sentences: the history of YouTube can be seen as a sort of clash of NorCal and SoCal cultures. Again, no-one knows what it is for. And that conflict has been there for ten years. And you also have the MCNs (Multi-Channel Networks) who are caught like the meat in the sandwich here.

Panel Q&A

Q1) I was wondering about user needs and how that factors in. You all drew upon it to an extent… And the dissatisfaction of users around whether needs are listened to or not was evident in some of the case studies here. I wanted to ask about that.

A1 – Nancy) There are lots of users, and users have different needs. When platforms change, some users are angry while others are happy. We have different users with very different needs… Both of those perspectives are user needs; they both call for responses to make their needs possible… The conflict and challenges, how platforms respond to those tensions, and how efforts to respond raise new tensions… that’s really at the heart here.

A1 – Jean) In our historical work we’ve also seen that some users voices can really overpower others – there are influential users and they sometimes drown out other voices, and I don’t want to stereotype here but often technical voices drown out those more concerned with relationships and intimacy.

Q2) You talked about platforms and how they developed (and I’m afraid I didn’t catch the rest of this question…)

A2 – David) There are multilateral conflicts about what features to include and exclude… And what is interesting is thinking about what ideas fail… With creators you see economic dependence on platforms and affordances – e.g. versus PGC (Professionally Generated Content).

A2 – Nicholas) I don’t know what user needs are in a broader sense, but everyone wants to know who unfriended them, who deleted them… And a dislike button, or an unlike button… The response was strong but “this post makes me sad” doesn’t answer that and there is no “you bastard for posting that!” button.

Q3) Would it be beneficial to expose unfriending/negative traits?

A3 – Nicholas) I can think of a use case for why unfriending data would be useful – for instance, wouldn’t it be useful to understand unfriending around the US elections? That data is captured – Facebook knows – but we cannot access it to research it.

A3 – Stuart) It might be good for researchers, but is it in the public good? In Europe, with the Right to be Forgotten, should we further limit data availability…

A3 – Nancy) I think the challenge is that mismatch of only sharing good things, not sharing and allowing exploration of negative contact and activity.

A3 – Jean) There are business reasons for positivity versus negativity, but it is also about how the platforms imagine their customers and audiences.

Q4) I was intrigued by the idea of the “medium specificity of platforms” – what would that be? I’ve been thinking about devices and interfaces and how they are accessed… We have what we think of as a range, but actually we are used to using really one or two platforms – e.g. the Apple iPhone – in terms of design, icons, etc. – and I wonder what the possibilities of the interface are, and what happens when something is made impossible by the interface.

A4 – Anne) When we talk about “medium specificity” we are talking about the platform itself as medium, moving beyond the end user and user experience. We wanted to take into account the role of the user – the platform also has interfaces for developers, for advertisers, etc. – and we wanted to think about those multiple interfaces, where they connect, how they connect, etc.

A4 – Taina) It’s a great point about medium specificity, but for me it’s more about platform specificity.

A4 – Jean) The integration of mobile web means the phone iOS has a major role here…

A4 – Nancy) We did some work with couples who brought in their phones, and when one had an Apple and one had an Android phone we actually found that they often weren’t aware of what was possible in the social media apps as the interfaces are so different between the different mobile operating systems and interfaces.

Q5) Can you talk about algorithmic content and content innovation?

A5 – David) In our work with YouTube we see forms of innovation that are very platform specific, around things like Vine and Instagram. And we also see counter-industrial forms and practices. So, in the US, we see vlogging and first person accounts of lives… beauty, unboxing, etc. But if you map content innovation you see (similarly) this taking the form of gaps in mainstream culture – in India that’s stand up comedy, for instance. Algorithms are then looking for qualities and connections based on what else is being accessed – creating a virtuous circle…

Q6) Can we think of platforms as unstable, as having not quite such a uniform sense of purpose and direction…

A6 – Stuart) Most platforms are very big in terms of their finance… If you compare that to 20 years ago the big companies knew what they were doing! Things are much more volatile…

A6 – Jean) That’s very common in the sector, except maybe on Facebook… Maybe.

PA-05: Identities (Chair: Tero Jukka Karppi)

The Bot Affair: Ashley Madison and Algorithmic Identities as Cultural Techniques – Tero Karppi, University at Buffalo, USA

As of 2012 Ashley Madison is the biggest online dating site targeted at those already in a committed relationship. Users are asked to share their gender, their sexuality, and to share images. Some aspects are free but message and image exchange are limited to paid accounts.

The site was hacked in 2015, with site user data stolen and then shared. Security experts who analysed the data assessed it as real, associated with real payment details etc. The hackers’ intention was to expose cheaters, but my paper is focused on a different aspect of the aftermath. Analysis showed 43 male bots and 70k female bots, and that is the focus of my paper. And I want to think about this space and connectivity by removing the human user from the equation.

The method for me was about thinking about the distinction between the human and non-human user, the individual and the bot. Drawing on German media theory, I wanted to use cultural techniques – with materials, symbolic values, rules and places. So I am seeking elements of difference across different materials in the context of the hack and the aftermath.

So, looking at a news item: “Ashley Madison, the dating website for cheaters, has admitted that some women on its site were virtual computer programmes instead of real women” (CNN Money), which goes on to say that users thought that they were cheating, but they weren’t after all! These bots interacted with users in a variety of ways, from “winking” to messaging. The role of the bot is to engage users in the platform and transform them into paying customers. A blogger talked about the space as all fake – the men are cheaters, the women are bots, and only the credit card payments are real!

The fact that the bots are so gender imbalanced tells us how differently the platform imagines male and female users. Another commentary notes the ways in which fake accounts drew men in – both by implying real women were on the site, and by using real images on fake accounts… The lines between what is real and what is fake have been blurred. Commentators noted the opaqueness of connectivity here, and of the role of the bots. Who knows how many of the 4 million users were real?

The bots are designed to engage users, to appear as human to the extent that we understand human appearance. Santine Olympo(?) talked about bots, whilst others look at algorithmic spaces and what can be imagined and created from our wants and needs. According to Ashley Madison employees the bots – or “angels” – were created to match the needs of users, recycling old images from real user accounts. This case brings together the “angel” and human users. A quote from a commentator imagines this as a science fiction fantasy where real women are replaced by perfectly interested bots. We want authenticity in social media sites, but bots are part of our mundane everyday existence and part of these spaces.

I want to finish by quoting from Ashley Madison’s terms and conditions, in which users agree that “some of the accounts and users you may encounter on the site may be fiction”.

Facebook algorithm ruins friendship – Taina Bucher, University of Copenhagen

“Rachel”, a Facebook user/informant, states this in a tweet. She has a Facebook account that she doesn’t use much. She posts something, and old school friends she has forgotten comment on it. She feels out of control… And what I want to focus on today are the ordinary affects of algorithmic life, taking that idea from ?’s work and Kathleen Stewart’s approach, and using it in the context of understanding the encounters between people and algorithmic processes. I want to think about the encounter, and how the encounter itself becomes generative.

I think that the fetish could be one place to start in knowing algorithms… and how people become attuned to them. We don’t want to treat algorithms as a fetish: the fetishist doesn’t care about the object, just about how the object makes them feel. And so the algorithm as fetish can be a mood maker, using the “power of engagement”. The power does not reside in the algorithm, but in the ways people imagine the algorithm to exist and impact upon them.

So, I have undertaken a study of people’s personal algorithm stories – looking at people’s stories about the Facebook algorithm, and monitoring and querying Twitter for comments and stories (through keywords) relating to Facebook algorithms. And a total of 25 interviews were undertaken via email, chat and Skype.
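
As a methods aside: this kind of keyword monitoring can be done with standard API wrappers. A minimal sketch, assuming a Twitter developer account and the tweepy library (tweepy 3.x-era interface; credentials and query terms are placeholders):

```python
# Collect candidate "algorithm stories" by keyword. Each hit is a lead
# for a follow-up interview, not a finding in itself.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

query = '"facebook algorithm" OR "facebook\'s algorithm"'
for status in tweepy.Cursor(api.search, q=query, lang="en").items(200):
    print(status.created_at, status.user.screen_name, status.text)
```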

So, when Rachel tweeted about Facebook and friendship, that gave me the starting point to understand stories, and the context for these positions, through interviews. And what repeatedly arose was the uncanny nature of Facebook algorithms. Take, for instance, Michael, a musician in LA. He shares a post and usually the likes come in rapidly, but this time nothing… He tweets that the algorithm is “super frustrating”, and he believes that Facebook only shows paid-for posts. Like others he has developed his own strategy to get posts shown more prominently. He says:

“If the status doesn’t build buzz (likes, comments, shares) within the first 10 minutes or so it immediately starts moving down the news feed and eventually gets lost.”

Adapting behaviour to social media platforms and their operation can be seen as a form of “optimisation”. Users aren’t just updating their profile or hoping to be seen; they are trying to change their behaviours to be better seen by the algorithm. And this takes us to the algorithmic imaginary: ways of thinking about what algorithms are, what they should be, how they function, and what these imaginations in turn make possible. Many of our participants talked about changing behaviours for the platform. When Rachel talks about “clicking every day to change what will show up on her feed”, she is not only using the platform, but thinking and behaving differently in the space. Adverts can also suggest algorithmic intervention, and no matter whether the user is actually profiled or not (e.g. for anti-wrinkle cream), users can feel profiled regardless.

So, people do things to algorithms – disrupting liking practices, commenting more frequently to increase visibility, emphasising positively charged words, etc. These are not just interpreted by the algorithm but also shape that algorithm. Critiquing the algorithm is not enough; people are also part of the algorithm and impact upon its function.

Algorithmic identity – Michael Stevenson, University of Groningen, Netherlands

Michael is starting with a poster of Blade Runner… Algorithmic identity brings to mind cyberpunk and science fiction. But day to day algorithmic identity is often about ads for houses, credit scores… And I’m interested in this connection between this clash of technological cool vs mundane instruments of capitalism.

For critics the “cool” is seen as an ideological cover for the underlying political economy. We can look at the rhetoric around technology – “rupture talk”, digital utopianism as that covering of business models etc. Evgeny Morozov writes entertainingly of this issue. I think this critique is useful but I also think that it can be too easy… We’ve seen Morozov tear into Jeff Jarvis and Tim O’Reilly, describing the latter as a spin doctor for Silicon Valley. I think that’s too easy…

My response is this… an image of Christopher Walken saying “needs more Bourdieu”. I think we need to take seriously the values and cultures, and the effort it takes to create those. Bourdieu maps the new media field with “web native”, open, participatory, transparent at one end of the spectrum – the “autonomous pole”; and the “heteronomous pole” of mass/traditional media: closed, controlled, opaque. The idea is that actors locate themselves between these poles… There is also competition to be seen as the most open, the most participatory – you may remember a post from a few years back on Google’s idea of open versus that of Facebook. Bourdieu talks of the autonomous pole as being about downplaying income and economic value, whereas the heteronomous pole is much more directly about that…

So, I am looking at “Everything” – a site designed in the 1990s. It was built by the guys behind Slashdot. It was intended as a compendium of knowledge to support that site and accompany it – items of common interest, background knowledge that wasn’t news. If we look at the site we see implicit and explicit forms of impact… voting forms on articles (e.g. “I like this write up”), and soft links at the bottom of the page – generated by these types of feedback and engagement. This was the first version in the 1990s. Then in 1999 Nate Oostendorp developed Everything2, built with the Everything Development Engine. This is still online. Here the techniques of algorithmic identity and datafication of users are very explicitly presented – very much unlike Facebook. Among the geeks here the technology is put on top, showing reputation on the site. And being open source, if you wanted to understand the recommendation engine you could just look it up.
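
To make the soft-link mechanism concrete: the Everything engine is open source (Perl), but a minimal Python sketch of the general idea – pages engaged with in the same session accumulate a weighted link, and the strongest links surface at the bottom of each page – might look like this (names and data are hypothetical, not Everything2’s actual code):

```python
# Soft links emerging from use: co-visited pages get a weighted link.
from collections import defaultdict
from itertools import combinations

soft_links = defaultdict(int)  # frozenset({page_a, page_b}) -> strength

def record_session(pages):
    """Strengthen the link between every pair of pages seen together."""
    for a, b in combinations(sorted(set(pages)), 2):
        soft_links[frozenset((a, b))] += 1

record_session(["Perl", "Slashdot", "Open Source"])
record_session(["Perl", "Open Source", "Regular Expressions"])

def related(page, top_n=5):
    """Pages most strongly soft-linked to `page`, strongest first."""
    scored = [(next(iter(pair - {page})), n)
              for pair, n in soft_links.items() if page in pair]
    return sorted(scored, key=lambda item: -item[1])[:top_n]

print(related("Perl"))  # e.g. [('Open Source', 2), ('Slashdot', 1), ...]
```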

If we think of algorithms as talk makers, and we look back at 1999’s Everything2, you see the tracking and datafication in place, but the statement around it talks about web 2.0/social media-type ideas of democracy and meritocracy – conflations of cultural values and social actions with technologies and techniques. Aspects of this are bottom up, and the documentation also talks about the role of cookies, and the addressing of privacy. And it directly says “the more you participate, the greater the opportunity for you to mold it your way”.

Thinking about Field Theory we can see some symbolic exclusion – of Microsoft, of large organisations – as a way to position Everything2 within the field. This continues throughout the documentation across the site. And within this field “making money is not a sin” – that developers want to do cool stuff, but that can sit alongside making money.

So, I don’t want to suggest this is a utopian space… Everything2 had a business model, but this was of its time for open source software. The idea was to demonstrate capabilities of the development framework, to get them to use it, and to then get them to pay for services… But this was 2001 and the bubble burst… So the developers turned to “real jobs”. But Everything2 is still out there… And you can play with the first version on an archived version if you are curious!

The Algorithmic Listener – Robert Prey, University of Groningen, Netherlands

This is a version of a paper I am working on – feedback appreciated. And this was sparked by re-reading Raymond Williams, who writes that “there are in fact no masses, but only ways of seeing people as masses” (1958/2011). I think that in the current environment Williams might now say “there are in fact no individuals, but only ways of seeing people as individuals”. And for me, I’m looking at this through the lens of music platforms.

In an increasingly crowded and competitive sector, platforms like Spotify, SoundCloud, Apple Music, Deezer, Pandora and Tidal are increasingly trying to differentiate themselves through recommendation engines. And I’ll go on to talk about recommendation as individualisation.

Pandora internet radio calls itself the “Music Genome Project” and sees music as genes. It seeks to provide recommendations that are outside the distorting impact of cultural information – e.g. you might like “The Colour of My Love” but be put off by the fact that Celine Dion is not cool. They market themselves against the crowd. They play on the individual as the part separated from the whole. However…

Many of you will be familiar with Spotify, and will therefore be familiar with Discover Weekly. The core of Spotify is the “taste profile”. Every interaction you have is captured and recorded in real time – selected artists, songs, behaviours, what you listen to and for how long, what you skip. Discover Weekly uses both the taste profile and aspects of collaborative filtering – selecting songs you haven’t discovered that fit your taste profile. So whilst it builds a unique identity for each user, it also relies heavily on other people’s taste. Pandora treats other people as distortion; Spotify sees them as more information. Discover Weekly also understands the user based on current and previous behaviours. Ajay Kalia (Spotify) says:

“We believe that it’s important to recognise that a single music listener is usually many listeners… [A] person’s preference will vary by the type of music, by their current activity, by the time of day, and so on. Our goal then is to come up with the right recommendation…”

This treats identity as being in context, as being the sum of our contexts. Previously fixed categories, like gender, are not assigned at the beginning but emerge from behaviours and data. Pagano talks about this, whilst Cheney-Lippold (2011) talks about a “cybernetic relationship to the individual” and the idea of individuation (Simondon). For Simondon we are not individuals; individuals are an effect of individuation, not the cause. A focus on individuation transforms our relationship to recommendation systems… We shouldn’t be asking if they understand who we are, but the extent to which the person is an effect of personalisation. Personalisation is framed as being about you and your needs, but from a Simondonian perspective there is no “you” or “want” outside of technology. In taking this perspective we have to acknowledge the political economy of music streaming systems…
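
(As an aside on the mechanics: Spotify’s actual pipeline is proprietary, but a minimal item-based collaborative filtering sketch, with entirely made-up data, illustrates the sense in which Discover Weekly treats other listeners as information rather than distortion:)

```python
# Item-based collaborative filtering in miniature: songs are similar
# if the same listeners play them; unheard songs similar to a user's
# history get recommended. Toy data, not Spotify's implementation.
import numpy as np

# Rows = listeners, columns = songs; 1 = listened, 0 = not.
plays = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
])

# Cosine similarity between songs, based on who co-listens to them.
norms = np.linalg.norm(plays, axis=0)
similarity = (plays.T @ plays) / np.outer(norms, norms)

def recommend(listener):
    """Index of the best unheard song for a given listener."""
    heard = plays[listener].astype(float)
    scores = similarity @ heard
    scores[heard == 1] = -np.inf  # only recommend unheard songs
    return int(np.argmax(scores))

print(recommend(0))  # best unheard song for listener 0 -> 2
```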

And the reality is that streaming services are increasingly important to industry and advertisers, particularly as many users use the free variants. A developer of Pandora talks about the importance of understanding profiles for advertisers. Pandora boasts that it has 700 audience segments to date: “Whether you want to reach fitness-driven moms in Atlanta or mobile Gen X-er… “. The Echo Nest, now owned by Spotify, had created highly detailed consumer profiling before it was bought. That idea isn’t new, but the detail is. The range of segments here is highly granular… And this brings us to the point that we need to take seriously what Nick Seaver (2015) says we need to think of: “contextualisation as a practice in its own right”.

This matters as the categories that emerge online have profound impacts on how we discover and encounter our world.

Panel Q&A

Q1) I think this is about music categories but also has wider relevance… I had an introduction to the NLP process of topic modelling – where you label categories after the fact… The machine sorts without those labels and takes them from the data. Do you have a sense of whether the categorisation is top down, or is it emerging from the data? And if there is similar top down or bottom up categorisation in the other presentations, that would be interesting.

A1 – Robert) I think that’s an interesting question. Many segments are impacted by advertisers, and by identifying groups they want to reach… But they may also…
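
Note: for readers unfamiliar with the technique raised in Q1, topic modelling discovers clusters of co-occurring words without any predefined labels; the analyst names the categories afterwards. A minimal sketch with a toy corpus (scikit-learn; everything here is illustrative):

```python
# Bottom-up categorisation: LDA finds word clusters from the data;
# the labels come from the researcher, after the fact.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "guitar drums tour album",
    "album streaming playlist guitar",
    "privacy data surveillance platform",
    "platform data advertising surveillance",
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-4:]]
    print("topic", i, top_terms)  # the analyst, not the machine, names these
```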

Michael) You talked about the Ashley Madison bots – did they have categorisation, A/B testing, etc. to find successful bots?

Tero) I don’t know, but I think it’s about looking at machine learning and machine learning history…

Michael) The idea of content filtering from the bottom to the top was part of the thinking behind Everything…

Q2) I wanted to ask about the feedback loop between the platforms and the users, who are implicated here, in formation of categories and shaping platforms.

A2 – Taina) Not so much in the work I showed, but I have had some in-depth Skype interviews with school children, and they all had awareness of some of these (Facebook algorithm) issues – press coverage and particularly the review-of-the-year type videos… People pick up on this, and the power of the algorithm. One of the participants has emailed me since the study noting how much she sees writing about the algorithm, and about algorithms in other spaces. Awareness of the algorithms shaping spaces is growing; it is more prominent than it was.

Q3) I wanted to ask Michael about that idea of positioning Everything2 in relation to other sites… And also the idea of the individual being transformed by platforms like Spotify…

A3 – Michael) I guess the Bourdieusian vision is that anyone who wants to position themselves on the spectrum can. With Everything you had this moment during the Internet bubble, a form of utopianism… You see it come together somewhat… And the gap between Wired – traditional mass media – and smaller players, but then also a coming together around shared interests and common enemies.

A3 – Robert) There were segments that did come from media, from radio and from advertisers, and that’s where the idea of genre came in… That has real effects… When I was at high school there were common groups around particular genres… But right now the move to streaming and online music means there is far more mixed listening, and people self-organise in different ways. There has been debunking of Bourdieu, but his work was from a really different time.

Q4) I wanted to ask about interactions between humans and non-human. Taina, did people feel positive impacts of understanding Facebook algorithms… Or did you see frustrations with the Twitter algorithms. And Tero, I was wondering how those bots had been shaped by humans.

A4 – Taina) The human and non-human, and whether people felt more or less frustrated by understanding the algorithm. Even if they felt they knew, it changes all the time; their strategies might help but then become obsolete… And practices of concealment and misinformation were tactics here. But just knowing what is taking place, and trying to figure it out, is something that I get a sense is helpful… But maybe that isn’t the right answer to it. And that notion of a human and a non-human is interesting, particularly for when we see something as human, and when we see things as non-human. In terms of some of the controversies… when is an algorithm blamed versus a human? Well, there is no necessary link/consistency there… So when do we assign humanness and non-humanness to the system, and does it make a difference?

A4 – Tero) I think that’s a really interesting question… Looking at social media now from this perspective helps us to understand that, and the idea of how we understand what is human and what is non-human agency… and what it is to be a human.

Q5) I’m afraid I couldn’t hear this question.

A5 – Robert) Spotify supports what Deleuze wrote about in terms of the individual, and how aspects of our personality are highlighted at the points that are convenient. And how does that affect how we regulate? Maybe the individual isn’t the most appropriate unit any more?

A5 – Taina) For users, the sense that they are being manipulated, or can be summed up by the algorithm, is what can upset or disconcert them… They don’t like to feel summed up by it…

Q6) I really like the idea of the imagined… and perceptions of non-human actors… In the Ashley Madison case we assume that men thought the bots were real… but maybe not everyone did. I think that moment of how and when people imagine and ascribe human or non-human status matters here. In one way we aren’t concerned by the imaginary… and in another way we might need to consider different imaginaries – the imaginary of the platform creators vs. users, for instance.

A6 – Tero) Right now I’m thinking about two imaginaries here… Ashley Madison’s imaginary around the bots, and the users encountering them and how they imagine those bots…

A6 – Taina) A good question… How many imaginaries do you think there are?! It is about understanding more who you encounter, who you engage with. Imaginaries are tied to how people conceive of their practice in their context, which varies widely, in terms of practices and what you might post…

And with that session finished – and much to think about in terms of algorithmic roles in identity – it’s off to lunch… 

PS-09: Privacy (Chair: Michael Zimmer)

Unconnected: How Privacy Concerns Impact Internet Adoption – Eszter Hargittai, Ashley Walker, University of Zurich

The literature in this area seems to target the usual suspects – age, socio-economic status… But the literature does not tend to talk about privacy. I think one of the reasons may be the idea that you can’t compare users and non-users of the internet on privacy. But we have located a data set that does address this issue.

The U.S. Federal Communications Commission issued a National Consumer Broadband Service Capability Survey in 2009 – when about 24% of Americans were still not yet online. This work is some years old now, but our interest is in the comparison rather than the numbers/percentages. And it questioned both internet users and non-users.

One of the questions was: “It is too easy for my personal information to be stolen online”, and participants were asked if they strongly agreed, somewhat agreed, somewhat disagreed, or disagreed. We treated that as binary – strongly agreed or not. And analysing that, we found that among internet users 63.3% strongly agreed, versus 81% of non-users. Now, we did analyse demographically… It is what you would expect generally – more older people are not online (though interestingly more female respondents are online). But even accounting for demographics, the internet non-users were more likely to strongly agree with that privacy/concern question.
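
Note: the comparison above is the kind of two-proportion test sketched below. The percentages are from the talk, but the sample sizes here are hypothetical placeholders, as the underlying Ns weren’t given.

```python
# Two-proportion z-test comparing "strongly agreed" rates among
# internet users vs. non-users. Sample sizes are made up for illustration.
from statsmodels.stats.proportion import proportions_ztest

n_users, n_nonusers = 4000, 1200            # hypothetical sample sizes
agree_users = round(0.633 * n_users)        # 63.3% strongly agreed
agree_nonusers = round(0.81 * n_nonusers)   # 81.0% strongly agreed

stat, pval = proportions_ztest(
    count=[agree_users, agree_nonusers],
    nobs=[n_users, n_nonusers],
)
print(f"z = {stat:.2f}, p = {pval:.4g}")
```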

So, what does that mean? Well getting people online should address people’s concerns about privacy issues. There is also a methodological takeaway – there is value to asking non-users about internet-related questions – as they may explain their reasons.

Q&A

Q1) Was it asked whether they had previously been online?

A1) There is data on drop outs, but I don’t know if that was captured here.

Q2) Is there a differentiation in how internet use is done – frequently or not?

A2) No, I think it was use or non-use. But we have a paper coming out on those with disabilities and detailed questions on internet skills and other factors – that is a strength of the dataset.

Q3) Are there security or privacy questions in the dataset?

A3) I don’t think there are, or we would have used them. It’s a big national dataset… There is a lot on type of internet connection and quality of access in there, if that is of interest.

Note, there is more on some of the issues around access, motivations and skills in the Royal Society of Edinburgh Spreading the Benefits of Digital Participation in Scotland Inquiry report (Fourman et al 2014). I was a member of this inquiry so if anyone at AoIR2016 is interested in finding out more, let me know. 

Enhancing online privacy at the user level: the role of internet skills and policy implications – Moritz Büchi, Natascha Just, Michael Latzer, U of Zurich, Switzerland

Natascha: This presentation is connected with a paper we just published, where you can read more if you are interested.

So, why do we care about privacy protection? Well there is increased interest in/availability of personal data. We see big data as a new asset class, we see new methods of value extraction, we see growth potential of data-driven management, and we see platformisation of internet-based markets. Users have to continually balance the benefits with the risks of disclosure. And we see issues of online privacy and digital inequality – those with fewer digital skills are more vulnerable to privacy risks.

We see governance becoming increasingly important, and there is an issue of understanding appropriate measures. Market solutions through industry self-regulation are problematic because of a lack of incentives, as industry benefits from the data. At the same time, states are not well placed to regulate because of limited knowledge and the dynamic nature of the tech sector. There is also a route through users’ self-help. Users’ self-help can be an effective method to protect privacy – whether opting out, or using privacy enhancing technology. But although we are increasingly concerned, we still share our data and engage in behaviour that could threaten our privacy online. Understanding that is crucial to understanding what can trigger users towards self-help behaviour. To do that we need evidence, and we have been collecting that through a world internet study.

Moritz: We can empirically address issues of attitudes, concerns and skills. The literature finds these all to be important, but usually at most two of the factors are covered in any one study. Our research design and contributions look at general population data, nationally representative so that it can feed into policy. The data was collected in the World Internet Project, though many questions were only asked in Switzerland. Participants were approached on landline and mobile phones. About 88% of our participants were internet users – which maps to the approximate proportion of the population using the internet in Switzerland.

We found a positive effect of privacy attitudes on behaviours – but a small one. There was a strong effect of privacy breaches on engaging in privacy protection behaviours. And general internet skills also had an effect on privacy protection. Privacy breaches – learning the hard way – do predict privacy self-protection. Caring is not enough: pro-privacy attitudes do not really predict privacy protection behaviours. But skills are central – and that can mean that digital inequalities may be exacerbated, because users with low general internet skills do not tend to engage in privacy protection behaviour.
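
(A sketch of the kind of model behind findings like these – variable names and data are hypothetical, and the paper’s actual specification may differ:)

```python
# Regress privacy protection behaviour on attitudes, breach experience
# and general internet skills. Made-up miniature dataset.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.DataFrame({
    "protection": [3, 5, 2, 4, 1, 5, 2, 4],  # self-protection behaviours
    "attitudes":  [4, 4, 2, 5, 1, 3, 3, 5],  # pro-privacy attitudes
    "breach":     [0, 1, 0, 1, 0, 1, 0, 0],  # experienced a privacy breach
    "skills":     [2, 5, 1, 4, 1, 5, 2, 3],  # general internet skills
})

model = smf.ols("protection ~ attitudes + breach + skills", data=survey).fit()
print(model.params)  # in the study: breach and skills strong, attitudes weak
```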

Q&A

Q1) What do you mean by internet skills?

A1 – Moritz) In this case there were questions that participants were asked, following a model developed by Alexander van Deursen(?) and colleagues, that asks for agreement or disagreement…

Navigating between privacy settings and visibility rules: online self-disclosure in the social web – Manuela Farinosi, University of Udine; Sakari Taipale, University of Jyväskylä

Our work is focused on self-disclosure online, and particularly whether young people are concerned about privacy in relation to other internet users, privacy to Facebook, or privacy to others.

Facebook offers complex privacy settings, allowing users to adopt a range of strategies in managing their information and sharing online. Waters and Ackerman (2011) talk about the practice of managing privacy settings and the factors that play a role, including culture, motivation, risk-taking ratio, etc. And other factors are at play here. Fuchs (2012) talks about Facebook as a commercial organisation and the concerns around that. But only some users are aware of the platform’s access to their data; others may believe their content is (relatively) private. And for many users privacy to other people is more crucial than privacy to Facebook.

And there are differences in privacy management… Women are less likely to share their phone number, sexual orientation or book preferences. Men are more likely to share corporate information and political views. Several scholars have found that women are more cautious about sharing their information online. Nosko et al (2010) found no significant difference in information disclosure except for political information (which men still disclose more of).

Sakari: Manuela conducted an online survey in 2012 in Italy with single and multiple choice questions. It was issued to university students, and 1125 responses were collected. We focused on 18-38 year old respondents, and only those using Facebook. We have slightly more female than male participants, mainly 18-25 years old, mostly single (but not all). And most use Facebook every day.

So, a quick reminder of Facebook’s privacy settings… (a screenshot reminder, you’ve seen these if you’ve edited yours).

To the results… We found that the data most often kept private and not shared are mobile phone number, postal address or residence, and usernames of instant messaging services. The only such data they do share is their email address. But disclosure is high for other types of data – birth date, for instance. And they were not using friends lists to manage data. Our research also confirmed that women are more cautious about sharing their data, and men are more likely to share political views. The only non-gender-related items were disclosure of email and date of birth.

Concerns were mainly about other users, rather than Facebook – Italy was not substantially different here. We found very consistent gender effects across our study. We also checked factors related to concerns, but age, marital status, education, and perceived level of expertise as a Facebook user did not have a significant impact. The more time you spend on Facebook, the less likely you are to care about privacy issues. Respondents’ privacy concerns were also related to disclosures by others on their wall.

So, conclusions: women are more aware of online privacy protection than men, and of the protection of the private sphere. They take more active self-protection there. And we can speculate on the reasons… There are differences in the sense of security/insecurity and risk perception between men and women, and the more sociological understanding of women as maintainers of social labour – used to taking more care of their material… Future research is needed though.

Q&A

Q1) When you asked users about privacy settings on Facebook how did you ask that?

A1) They could go and check, or they could remember.

Whose privacy? Lobbying for the free flow of European personal data – Jockum Philip Hildén, University of Helsinki, Finland

My focus is related to political science… And my topic is lobbying for the free flow of European personal data – how the General Data Protection Regulation came into being, and which lobbyists influenced the legislators. This is a new piece of regulation coming into force next year. It was the subject of a great deal of lobbying – that became visible when the regulation was in parliament, but the lobbying started much earlier than that.

So, a quick description of EU law making. The European Commission proposes legislation, and that goes to both the Council of the European Union and to the Parliament. Both draw up positions based on the proposal, and then that becomes the final regulation. In this particular case there was public consultation before the final regulation, so I looked at a wide range of publicly available position papers. Looking across these I could see 10 types of stakeholders offering replies – far more in 2011 than to the first version in 2009. Companies in the US participated to a very high degree – almost as much as those in the UK and France. That’s interesting… And that’s partly to do with the extended scope of the new regulation, which covers the EU but also service providers in the US and other locations. This idea is not exclusive to this regulation, and is known as “the Brussels effect”.

In terms of sector I have categorised the stakeholders – I have divided IP and Node communications(?), for instance – to understand their interests. But I am interested in what they are saying, so I draw on Klüver (2013) and the “preference attainment model” to compare policy preferences of interest groups with the Commission’s preliminary draft proposal, the Commission’s final proposal, and the final legislative act adopted by the Council. So, what interests did the Council take into account? Well, almost every article changed – which makes those changes hard to pin down. But…

There is an EU power struggle. The Commission draft contained 26 different cases where it was empowered to adopt delegated acts. All but one of these were removed from the Council’s draft. And there were 48 exceptions for member states, most of them “in the public interest”… But that could mean anything! And thus the role of nation states comes into question. The idea of European law is to have consistent policy – that amount of variance undermines it.

We also see a degree of user disempowerment. Here we see responses from DigitalEurope – a group of organisations doing all sorts of surveillance; but we also see the American Chamber of Commerce submitting responses. In these responses both lobby for “implicit consent” – the original draft required explicit consent. And the Commission sort of bought into this, using a concept of unambiguous consent… which is itself very ambiguous. I then compared the Council vs. Free Data Advocates, and the Council vs. Privacy Advocates. The Free Data Advocates are pro free movement of data, and pro privacy – as that’s useful to them too – but they are not keen on greater Commission powers. Privacy Advocates are pro privacy and more supportive of Commission powers.

In Search of Safe Harbors – Privacy and Surveillance of Refugees in Europe – Paula Kift, New York University, United States of America

Over 2015 a million refugees and migrants arrived at the borders of Europe. One of the ways in which the EU attempted to manage this influx was to gather information on these people – in particular satellite surveillance, and data collection on individuals on arrival.

The EU does acknowledge that biometric data raises privacy issues, but holds that satellite and drone data is not personally identifiable and so not an issue here. I will argue that the right to privacy does not require the presence of Personally Identifiable Information (PII).

As background, there are two pieces of legislation. Eurosur regulates the gathering and sharing of satellite and drone data across Member States. Although the EU justifies this on the basis of helping refugees in distress, that isn’t written into the regulation. Refugee and human rights organisations say that this surveillance is likely to enable the turning back of migrants before they enter EU waters.

If they do reach the EU, according to Eurodac (2000) refugees must give fingerprints (if over 14 years old) and can only apply for asylum status in one country. But in 2013 this regulation was updated so that fingerprints can be used in law enforcement – which goes against EU human rights law and Data Protection law. It is also demeaning, and suggests that migrants are more likely to be criminal, something not backed up by evidence. It has also been proposed that photography and fingerprinting be extended to everyone over 6 years old. There are legitimate reasons for this… Refugees arrive in Southern Europe, where opportunities are not as good, so some have burned off their fingerprints to avoid registration there; so some of these measures are attempts to register migrants, and to avoid losing children once in the EU.

The EU does not dispute that biometric data is private data. But with Eurodac and Eurosur the right to data protection does not apply – they monitor boats, not individuals. I argue that the Right to Private Life is jeopardised here nonetheless, through prejudice, reachability and classifiability… The bigger issue may actually be the lack of personal data being collected… The EU should approach boats and identify those with asylum claims, and manage others differently, but that is not what is done.

So, how is big data relevant? Well, big data can turn non-personally-identifiable information into PII through aggregation and combination. And classifying individuals also has implications for the design of Data Protection laws. Data protection is a procedural right, but privacy is a substantive right, less dependent on personally identifiable information. Ultimately the right to privacy protects the person, rather than the integrity of the data.
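
(A toy illustration of that aggregation point, with entirely made-up data: two datasets that are each arguably non-identifying can single out an individual once combined on quasi-identifiers.)

```python
# Linking "anonymous" records to a named registry via quasi-identifiers.
import pandas as pd

# "Anonymous" crossing log: no names, just attributes.
crossings = pd.DataFrame({
    "age_band": ["20-29", "20-29", "30-39"],
    "language": ["Arabic", "Tigrinya", "Arabic"],
    "arrival":  ["2015-10-01", "2015-10-01", "2015-10-02"],
})

# Separately held registration records with names.
registry = pd.DataFrame({
    "name":     ["A.", "B."],
    "age_band": ["20-29", "30-39"],
    "language": ["Tigrinya", "Arabic"],
})

linked = crossings.merge(registry, on=["age_band", "language"])
print(linked)  # the unique attribute combinations now carry names
```
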
Q&A
Q1) In your research have you encountered any examples of when policy makers have engaged with research here?
A1 – Paula) I have not conducted any on the ground interviews or ethnographic work with policy makers but I would suggest that the increasing focus on national security is driving this activity, whereas data protection is shrinking in priority.
A1 – Jockum) It’s fairly clear that the Council of Europe engaged with digital rights groups, and that the Commission did too. But then for every one of those groups, there are 10 lobby groups. So you have Privacy International and European Digital Rights who have some traction at European level, but little traction at national level. My understanding is that researchers weren’t significantly consulted, but there was a position paper submitted by a research group at Oxford, submitted by lawers, but their interest was more aligned with national rather than digital rights issues.
Q2) You talked about the ? being embedded in the new legislation… You talk about information and big data… But is there any hope? We’ve negotiated for 4 years, won’t be in force until 2018…
A2 – Paula) I totally agree… You spend years trying to come up with a framework, but it all rests on PII…. And so how do we create Data Protection Act that respects personal privacy without being dependent on PII? Maybe the question is not about privacy but about profiles and discrimination.
A2 – Jockum) I looked at all the different sectors through the lens of surveillance logic, to understand why surveillance is related to regulation. Data Protection regulation is inherently problematic as it has opposing goals – to protect individuals and to enable the sharing of data… So, in that sense, surveillance logic is informing things here.
Q3) Could you outline again the threats here beyond PII?
A3 – Paula) Refugees who are aware of these issues don’t take their phones – that reduces the chance of identification, but it also stops potential help calls and rescues. But the risk is also about profiling… High-ranking job offers are more likely to be shown to men than women… Google thinks I am between 60 and 80 years old and Jewish – I’m neither, but that is the profile they have built of me… And that’s where the risk is here… profiling… e.g. proposals under which transactions could be blocked.
Q4) An interesting mixture of papers here… Many people are concerned about the social side of privacy… but know little of institutional privacy concerns. Some become more cynical… How can we improve literacy? How can we influence people here about Data Protection laws, and privacy measures?
A4 – Esther) It varies by context. In the US the concern is with government surveillance; in the EU it’s more about corporate surveillance… You may need to target differently. A colleague and I wrote a paper on privacy apathy… There are issues of trust, but also work to do on skills. There are bigger conversations to be had, not just with users but with the population generally… Where do you infuse that? How do you reach adults? I don’t know.
A4 – Natascha) It is not enough to strengthen awareness and rights… Skills are important here too – you really need to ensure that skills are developed so people can adapt to policies and changes. Skills are key.
Q5) You talked about exclusion and registration… I was wondering about exclusion from registration, and about what registration itself excludes (e.g. the dead are not registered).
A5 – Paula) They collect how many are registered… But that can lead to threat inflation and very flawed data. In terms of data that is excluded there is a capacity issue… That may be the issue with deaths. The EU isn’t responsible for saving lives, but doesn’t want to be seen as responsible for those deaths either.
Q6) I wanted to come back to what you see as the problematic implications of the boat surveillance.
A6 – Paula) For many, data collection is fine until something happens to you… But if you know it takes place, it can have an impact on your behaviour… So there is work to be done to understand whether refugees are aware of that surveillance. The other issue here is that if drone surveillance is used to turn people back, that has a clear impact on private lives – particularly as EU states have bilateral agreements with nations that have not all ratified refugee law, meaning turned-back boats may result in significantly different rights and opportunities.
RT-07: IR (Chair: Victoria Nash)

The Politics of Internet Research: Reflecting on the challenges and responsibilities of policy engagement

Victoria Nash (University of Oxford, United Kingdom), Wolfgang Schulz (Hans-Bredow-Institut für Medienforschung, Germany), Juan-Carlos De Martin (Politecnico di Torino, Italy), Ivan Klimov, New Economic School, Russia (not attending), Bianca C. Reisdorf (representing Bill Dutton, Quello Center, Michigan State University), Kate Coyer, Central European University, Hungary (not attending)

Victoria: I am Vicky Nash and I have convened a round table of members of the international network of internet research centres.

Juan-Carlos: I am director of the Nexa Center for Internet and Society in Italy; we are mainly computer scientists like myself, and lawyers. We are ten years old.

Wolfgang: I am associated with two centres, primarily the one at Humboldt, where our interest is in governance and surveillance. We are celebrating our fifth birthday this year. I also work with the Hans-Bredow-Institut, a traditional, multidisciplinary media institute, and we increasingly focus on the internet and internet studies as part of our work.

Bianca: I am representing Bill Dutton. I am Assistant Director of the Quello Center at Michigan State University. We were more focused on traditional media but have moved towards internet policy in the last few years as Bill joined us. There are three of us right now, but we are currently recruiting for a policy post-doc.

Victoria: Thanks for that, I should talk about the department I am representing… We are in a very traditional institution but our focus has explicitly always been involvement in policy and real world impact.

Victoria: So, over the last five or so years, it does feel like there are particular challenges arising, especially in working with politicians. And I was wondering if other types of researchers are facing those same challenges – is it about politics, or is it specific to internet studies? So, can I kick off and ask you to give me an example of a policy issue your centre has engaged with, how you were involved, and the experience of that.

Juan-Carlos: There are several examples. One was with the regional government in our region of Italy. We were aware of data and participatory information issues in Europe, so we reached out and asked if they were aware too. We wanted to make them aware of opportunities to open up data and build on OECD work, and we were also doing some research ourselves. Everybody agreed on the technical infrastructure and at the political level… We assisted them in creating the first open data portal in Italy, and one of the first in Europe. And that was great, it was satisfying at the time. Nothing was controversial, we were following a path in Europe… But with a change of regional government that portal has been somewhat neglected, so that is frustrating…

Victoria: What motivated that approach you made?

JC: We had a chance to do something new and exciting. We had the know-how and a sense of how it could be done, at least in Italy, and that seemed like a great opportunity.

Wolfgang: For my centres – I’m kind of an outsider in political governance as I’m concerned with media. But in internet governance it feels like this is our space and we are invested in how it is governed – more so than in other areas. The example I have is from more traditional media work, from the Hans-Bredow-Institut. We were asked to produce a report on how changes in usage patterns and technology put strain on governance structures in Germany, and on where solutions are needed to make federal and state law more convergent and able to cope with those changes. But you have to be careful when providing options, because of course you can make some options more appealing than others… So you have to be clear about whether you will be neutral and present the options as such, or whether you prefer an option and present it differently. And that’s interesting and challenging as an academic, and for the role of an academic and institution.

Victoria: So did you consciously present options you did not support?

Wolfgang: Yes, we did. And there were two reasons for this… They were convinced we would come up with suggestions and a basis to start working with… And they accepted that we would not be specifically taking a side – for the federal or local governments. They were also confident we wouldn’t attempt to mess up the system… We didn’t present the ideal, but we understood the other dependencies and factors, and they trusted us to only put in suggestions that would enhance the system and work practically, not replace the whole thing…

Victoria: And did they use your options?

Wolfgang: They ignored some suggestions, but where they acted they did take our options.

Bianca: I’ll talk about a semi-successful project. We were looking at detailed postcode-level data on internet access and quality, and the reasons behind it. We submitted to the National Science Foundation and it was rejected; then two weeks later we were invited to an event on just that topic by the NPIA. So we are now collectively drafting suggestions, from the NPIA and from a wide range of research centres. It was nice to be invited by policy makers… and interesting to see that idea picked up through that process in some way…

Victoria: That’s maybe an unintended consequence there… And that suggestion to work with others was right for you?

Bianca: We were already keen to work with other research centres but actually we also now have policy makers and other stakeholders around the table and that’s really useful.

Victoria: Those were all very positive… Maybe you could reflect on more problematic examples…

JC: Ministers often want to show that they are consulting on policy, but often that is a gesture – a political move to listen, with policy then made in an entirely different way… After a while you get used to that. And then you have to calculate whether to participate or not – there is a time cost there.

Victoria: And for conflict of interest reasons you pay those costs of participating…

JC: Absolutely, the costs are on you.

Wolfgang: We have had contact from ministries in Germany, but then discovered they were interested in the process as a public relations tool rather than having a genuine interest in the outcome. So now we assess that interest and engage – or don’t – accordingly. We try to say at the beginning “no, please speak to someone else” when needed. Humboldt is reluctant to engage in policy making – that’s a historical thing – but people expect us to get involved. We are one of the few places that can deliver monitoring of the internet, and there is an expectation to do that… And when ministries design new programmes, we are often asked to be engaged, and we have learned to be cautious about when we engage. Experience helps, and you see different ways of approaching academia – it can be PR, sometimes they want support for a position politically, or they can actually be engaged in research to learn and gain expertise and information. If you can see which approach it is, you can handle it appropriately.

Victoria: I think that, as a general piece of advice – to always ask “why am I being approached?” and “what are their motivations?” – that is very useful.

Wolfgang: I think starting from the research questions and programmes that you are concerned with gives you a counterpoint in your own thinking when dealing with requests. Then when good opportunities come up you can take them and make use of them… But the academic value of some approaches can be limited, so you need a good reason to engage in those projects and they have to align with your own priorities.

Bianca: My bad example is related to that. The Net Neutrality debate is a big part of our work… There are a lot of partisan opinions on it, and not a lot of neutral research. We wanted to do a big project there, but when we tried to get funding for it we were steered away. We’ve been told that talking about policy with policy makers is taken very poorly. This debate has been bouncing around for 10 years; we want to see whether, where Net Neutrality is imposed, we see changes in investment… But we need funding to do that… And funders don’t want to do it and are usually very cosy with policy makers…

Victoria: This is absolutely an issue, these concerns are in the minds of policy makers as well and that’s important.

Wolfgang: When we talk about research in our field and policy makers, it’s not just about when policy makers approach you to do something… When you have a term like Net Neutrality at the centre, it requires you to be either neutral or not neutral, and that really shapes how you handle it as an academic… You can become, without wanting it, someone promoting one side. On a protection-of-minors issue we did some work on co-regulation with Australia that seemed to solve a problem… But then, when this was debated in Germany and the inter-state treaty on media regulation was being drafted, the policy makers were interested… And then we felt that we should support it… and I entered the stage, but it’s not my question anymore… So you do have opinions about how you want something done…

JC: As coordinator of a European project, there was a call that included the topic of “Net Neutrality” – we made a proposal, but what happened afterwards clearly proved that that whole area was political. It was in the call… but we should have framed it differently. Again, at European level you see the Commission fund research, you see the outcomes, and then they put out a call that entirely contradicts the work they funded, for political reasons. There is such a drive for evidence-based policy making that it is important for them to frame things that way… It is evidence-based when it fits their agenda, not when it doesn’t.

Victoria: I did some work with the Department for Culture, Media and Sport last year, again on protection of minors, and we were told at the outset to assume porn caused harm to minors. And the terms of reference were shaped to be technical – about access etc. They did bring in a range of academic expertise, but the terms of reference really constrained the contribution that was possible. So, there are real bear traps out there!

Wolfgang: A few years back the European Commission asked researchers to look at broadcasters, interruptions to broadcasts, and the role of advertising. Even though we need money, we did not do that – it wasn’t answering interesting research questions for us.

Victoria: I raised a question earlier about the specific stakes that academia has in the internet – it isn’t just what we study. Do you want to say more about that?

Wolfgang: Yes. At the pre-conference we had an STS stream… People said “of course we engage with policy” and I was wondering why that is the default position… The internet comes from academia, and there is a long-standing tradition of engagement in policy making there. Academics do engage with media policy, but they wouldn’t class it as “our domain” – they were not there as part of the beginning. Academia was part of the beginning of the internet.

Q&A

Q1) I wonder if you are mistaking that “of-ness” for the fact that the internet is still being formed, still in the making. Broadcast is established; the internet is in constant construction.

A1 – Wolfgang) I see that

Q1) I don’t know about Europe but in the US since the 1970s there have been deliberate efforts to reduce the power of decision makers and policy makers to work with researchers…

A1 – Bianca) The Federal Communications Commission is mainly made of economists…

Q1) Requirements and roles constrain activities. The assumption of evidence-based decisions is no longer there.

Q2) I think that there is also the issue of shifting governance. Internet governance is changing, and many academics research the governance of the internet; we reflect greatly on that. The internet, and also its governance structures, are still in the making.

Victoria: Do you feel like if you were sick of the process tomorrow, you’d still want to engage with policy making?

A2 – Phoebe) We are a publicly funded university and we are focused on digital inequalities… We feel a real responsibility to get involved, to offer advice and opinions based on our research. On other topics we’d feel less responsible, depending on the impact it would have. It is a public interest thing.

A2 – Wolfgang) When we look at our mission at the Hans-Bredow-Institut we have a vague and normative mission – we think a functioning public sphere is important for democracy… Our tradition is research into public spheres… We have a responsibility there. But there is also this: the evaluation of academic research becomes more and more important, yet there is no mechanism to ensure researchers answer the problems that society has… We have a completely divided set of research councils, and their yardstick is academic excellence. State broadcasters do research, but with no peer review at all… There are some calls from the Ministry of Science that are problem-orientated, but on the whole there isn’t that focus on social issues and relevance in the reward process, in the understanding of prestige.

Victoria: In the UK we have a bizarre dichotomy where research is measured against two yardsticks: impact – where policy impact has real value, and that applies in all fields – but also a regulation that you cannot use project funds to “lobby” government, which means you potentially cannot communicate research to politicians who disagree. This happened because a research organisation (not a university) opposed government policy with research funded by them… The implications for universities are currently unclear.

JC: Italy is implementing a similar system to the UK’s. Often there is no actual mandate on a topic, so individuals come up with ideas without numbers and plans… We think there is a gap – but it is the government’s and ministries’ work. We are funded to work in the national interest… but we need resources to help there. We are filling gaps in a way that is not sustainable in the long term, really – you are evaluated on other criteria.

Q3) I wanted to ask about policy research… I was wondering if there is policy research we do not want to engage in. In Europe, and elsewhere, there is increasing need to attract research funding… What are the guidelines or principles around what we do or do not go for, funding-wise?

A3 – Bianca) We are small so we go for what interests us… But we have an advisory board that guides us.

A3 – Wolfgang) I’m not sure that there are overarching guidelines – there may be for other types of special centres – but it’s an interesting thing to have a more formalised exchange like we have right now…

A3 – JC) No, no blockers for us.

A3 – Victoria) Academic freedom is vigorously held up at Oxford but that can mean we have radically different research agendas in the same centre.

Q4) With that lack of guidance, isn’t there a need for academics to show that they can be trusted, especially in the public sphere, and especially when getting funding from, say, Google or Microsoft? How can you embed that trust?

A4 – Wolfgang) I think peer review as a system functions to support that trust. But we have to think about other institutional settings, and whether there is enough oversight… Many associations, like Leibniz, require an institutional review board to look over the research agenda and ensure some outside scrutiny. I wouldn’t say every organisation or research centre needs that – it can be helpful but costly, in terms of time in particular. And you cannot rely on the general public to do that; you need it to be peers. It is an interesting question though, especially as Humboldt has funding from Google… In this network academics play a role, and organisations play a role, and you have to understand the networks and relationships of the partners you work with, and their interests.

A4 – Bianca) That’s a question that we’ve faced recently… The concern is that corporate funding may sway results, and the best way to face that is to publish methodology, questionnaires, process… to ensure the work is understood in a context that enables trust in it.
A4 – JC) We spent years trying to deal with the issue of independence, and it is very important, as academia has a responsibility to provide research that is independent and unbiased by funding etc. And it’s not just about the work itself, but also perceptions of the work… It is quite a local/contextual issue. So, getting money from Google is perceived differently in different countries, and at different times…
Victoria: This is something we have to have more conversations about. In medicine there is far more conversation about codes of conduct around funding. I am also concerned that PhD funding now requires something like a third of PhDs to be co-funded by industry, without any understanding from the UK Government about what that means, including for peer review… That’s something we need to think about far more stringently.
Q5) For companies there are requirements to review outputs before publication, to check for proprietary information and ensure it is not released. That makes industry the final arbiter here. In Canada our funding is also increasingly coming from industry, and there proprietary data gives them the final say…
A5 – Bianca) Sometimes it has to be about negotiating contracts and being clear what is and is not acceptable.
Victoria) That’s my concern with new PhD funding models, and also with use of industry data. It will be non-negotiable that the research is not compromised but how you make that process clear is important.
Q6) What are your models here – are you academic or outside academia?
A6 – JC) Academic and policy are part of the work we are funded to do.
A6 – Bianca) We are 99% endowment-funded, hence we have a lot of freedom, but also advisory board guidance.
A6 – Wolfgang) Our success is assessed by academic publication. The Humboldt institute is funded largely by private companies – a range of them – but also by grants. The Hans-Bredow-Institut is mainly directly funded by the Hamburg Ministry of Science, but we’d like to be funded by other funders across Germany.
A6 – Victoria) Our income is research income, teaching income from masters degrees… We are a department of the university. Our projects are usually policy related, but not always government related.
Q7) I was wondering if others in the room have been funded for policy work – my experience has been that policy makers had expectations and an idea of how much control they wanted… By contrast, money from Google comes with a “research something on the internet” type freedom. This is not what I would have expected, so I just wondered how others’ experiences compared.
Comment) I was asked to do work across Europe with public sector broadcasters… I don’t know how well my report was seen by policy makers but it was well received by the public sector broadcaster organisations.
Comment) I’ve had public sector funding, foundation funding… But I’ve never had corporate money… My cynical take is that corporations maybe are doing this as PR, hence not minding what you work on!
Comment) I receive money from funding agencies, I did a joint project that I proposed to a think tank… Which was orientated to government… But a real push for impact… Numbers needed to be in the title. I had to be an objective researcher but present it the right way… And that worked with impact… And then the government offered me a contract to continue the research – working for them not against them. The funding was coming from a position close to my own idea… I felt it was a bit instrumentalised in this way…
A7 – Wolfgang) I think it is hard to generalise… Companies as funders do sometimes make demands and expect control over the publishing of results – and over whether they are published at all. We don’t do that – our work is always in the public domain. It’s case by case… But there is one aspect we haven’t talked about, and that is the relationship between the individual researcher and their political engagement (or not), and how that impacts upon the neutrality of the organisation. As a lawyer I’m very aware of that… For instance, if giving expert evidence in court, the importance of appearing as an individual, not the organisation. Especially if past or future partners/funders are on the opposite side. I was an expert for Germany in a court case, with private broadcasters on the other side, and you have to be careful there…
A7 – JC) There is so little money for research in Italy… Regarding corporations… We got some money from Google to write an open source library; it’s out there, it’s public… There was no conflict there. But money from companies for policy work is really difficult. And there are lots of case-by-case issues in between.
Q8) But companies often fund social science work that isn’t about policy but has impact on policy.
A8 – JC) We don’t do social science research so we don’t face that issue.
A8 – Victoria) Finding ways to make that work while guaranteeing independence is often the best way forward – you cannot, and often do not want to, say no… But you work with codes of conduct, with advisory boards, with processes to ensure appropriate freedoms.
JC: A question to the audience… A controversial topic arises, one side owns the debate and a private company approaches to support your voice… Do you take their funding?
Comment) I was asked to do that and I kind of stalled so that I didn’t have to refuse or take part, but in that case I didn’t feel…
Comment) If having your voice in public triggers the conversation, you do make it visible and participate, to progress the issue…
Comment) Maybe this comes down to personal versus institutional points of view. I would need to talk to colleagues to help me make that decision, to decide if this would be important or not… Then I would say yes… Or the better solution is to say “no, I’m talking in a private capacity”.
JC) I think that the point of separating individuals and centres here is important. Generally centres like ours do not take a position… And there is an added element: if a corporation wants to be involved, a track record of past behaviour makes it less troublesome. Saying something for 10 years gives you credibility in a way that suddenly engaging does not.
Wolfgang) In Germany it is common practice that if your arguments are not being heard, you engage outside expertise – that is standard in German legal academia. It is OK, I think.
Comment) In the Bundestag they bring in experts… But of course the choice of expert reflects values and opinions expressed in articles. So you have a range of academics supporting political positions… If I am invited to talk to parliament, I say what I always say: “this is not a problem”.
Victoria: And I think that nicely reminds us why this is the politics of internet research! Thank you.
Plenary Panel: Who Rules the Internet? Kate Crawford (Microsoft Research NYC), Fieke Jansen (Tactical Tech), Carolin Gerlitz (University of Siegen) – Chair: Cornelius Puschmann
Jennifer Stromer-Galley, President of the Association of Internet Researchers: For those of you who are new to AoIR, this is our 17th conference and we are an international organisation that looks at issues around the internet – now including those things that have grown out of the internet, such as mobile apps. In our panel today we will be focusing on governance issues. Before that I would like to acknowledge this marvellous city of Berlin, to thank all of my colleagues in Germany who have taken such care, and to thank Humboldt University for hosting us in this beautiful venue. And now I’d like to hand over to Herr Matthias Graf von Kielmansegg, representing Professor Dr Elizabeth Wacker, Federal Minister of Labour and Social Affairs.
Matthias Graf von Kielmansegg: I am here representing Professor Wacker, who takes a great interest in internet and society, including the issues that you are looking at here this week. If you are not familiar with our digitisation policy: the German government published a digital agenda for the first time two years ago, covering all areas of government operation. In terms of activities it concentrates on the 2013–2017 term and will need to be extended, but strategically it reaches far into the next decade. Additionally we have a regular summit bringing together the private sector, unions, government and the academic world to look at key issues.
You all know that digital is a fundamental gamechanger – in the way goods and services are used, in the ways we communicate and collaborate – and digital loosens our ties to time and place… And we aren’t at the end but in the middle of this process. Wikipedia was founded 16 years ago, the iPhone launched 9 years ago, and now we talk about Blockchain… So we do not know where we will be in 10 or 20 years’ time. Good education and research are key to that, and we need to engage proactively. In Germany we are incorporating the Internet of Things into our industries. We used to have a technology-driven view of these things, but now we look at economic and cultural contexts, or ecosystems, to understand digital systems.
Research is one driver; the other is that science, education and research are users in their own right. Let me focus first on education… Here we must answer some major questions – what will drive change, technology or pedagogy? Who will be the change agents? And what of the role of teachers and schools? They must take the lead in change and secure the dominance of pedagogy, using digital tools to support our key education goals – and not vice versa. That means digital education must offer more opportunities, more flexibility, and better preparation for tomorrow’s world of work. With this in mind we plan to launch a digital education campaign to help young people find their place in an ever-changing digital world, and to be ready to adapt to the changes that arise – and to consider how education can support our economic model and higher education. We will also need to address issues of technical infrastructure and governance – and, for us, how this plays out across our 16 federal states. Closer to your world is the world of science. Digital tools create huge amounts of new data and big data. The challenge organisations face is not just infrastructure but how to access and use this data. We call our approach Securing the Life Cycle of Data, concerned with access, use, reuse and interoperability. How will we decide what we save and what we delete? And who will decide how third parties use this data? Big data also goes alongside other aspects such as high-powered computing. We plan to launch an initiative of action in this area next year. To oversee this we have a Scientific Oversight Body with stakeholders. We are also keen to embrace Open Data and the resources to support that. We have added a new condition to our own funding: any publication based on research funded by us must be published open access.
More needs to be known about internet and society, and there is research to be done. So the federal government has decided to establish a German Internet Institute. It will address a number of areas of importance: access and use of the digital world; work and value creation; and our democracy. We want an interdisciplinary team of social scientists, economists and information scientists. The competitive selection process is just underway, and we expect the winner to be announced next spring. There is readiness to spend up to €15M over the first five years. And this highlights the importance of the digital world in Germany.
Let me just make one comment. The overall title of this conference is Internet Rules! It is still up to us to be the fool or the wise… We need to understand what might happen if politics, economics and society do not find answers to the challenges we face. And so hopefully we will find that it’s not the internet that rules, but that democracy rules!
Kate Crawford
When Cornelius asked me to look at the idea of “Who rules the internet?” I looked up at my bookshelf and found lots of books written by people in this community – many of you in this room – looking at just this question. And we have moved from ’90s utopianism to the world of infrastructure, socio-technical aspects, the Internet of Things layer – and zombie webcams being co-opted by hackers. So many of you have enhanced my understanding of this issue.
Right now we see machine learning and AI being rapidly built into our world without the implications being fully understood… I am talking narrowly about AI here… Sometimes they have lovely feminine names: Siri, Alexa, etc… But these systems are embedded in our phones; we have AI analysing images on Facebook. It will never be separate from humans, but it is distinct and significant, and we see AI beyond the internet and into other systems – deciding who gets released from jail, hospital stays, etc. I am sure all of us were surprised by the fact that Facebook, last month, censored a Pulitzer Prize-winning image of a girl being napalmed in Vietnam… We don’t know the processes that triggered this, though an image of a nude girl likely triggers them… Once that had attention, the Government of Norway accused Facebook of erasing our shared history. The image was restored, but this is the tip of the iceberg – most images and actions are not so apparent to us…
This lack of visibility is important but it isn’t new… There are many organisational and procedural aspects that are opaque… I think we are having a moment around AI where we don’t know what is taking place… So what do we do?
We could make them transparent… But this doesn’t seem likely to work. A colleague and I have written about the history of transparency, and how access to code does not necessarily tell you exactly what is happening and how it is used. Y Combinator has installed a system – brilliantly called HAL 9000 – and has boasted that they don’t know how it filters applications; only the system could say. That’s fine until that system causes issues, denies you rights, gets in your way…
So we need to understand these algorithms from the outside… We have to poke them… And I think of Christian Sandvig’s work on algorithmic auditing. Christian couldn’t be here this evening and my thoughts are with him. But he is also part of a group who are pursuing legal rights to enable this type of research.
And there are people who say that AI can fix this system… This is something the finance sector talks about – they have an environment of predatory machine learning systems hunting each other; Terry Cary has written about this. It’s tempting to create a “police AI” to watch these… I’ve been going back to 1970s books on AI, and the work of Joseph Weizenbaum, who created ELIZA. He suggested that if we continue to ascribe human qualities to AI systems it might be a slow-acting poison. It is a reminder not to be seduced by these new forms of AI.
Carolin Gerlitz, University of Siegen
I think after the last few days, the answer to the question “who rules the internet?” is “platforms”!
Their rules about who users are and what they can do can seem very rigid. Before Facebook introduced the emotion reactions, the Like button was used in a range of ways. With the introduction of reactions they have rigidly defined responses, creating discrete data points that are advertiser-ready and available to be recombined.
There are also rules around programmability, which dictate what data can be extracted, how, by whom, and in what ways… Platforms also like to keep the interpretation of data under their control, and adjust the rules of their APIs. Some of you have been working to extract data from platforms where things are changing rapidly – Twitter API changes, Facebook API and research changes, Instagram API changes – all increasingly restricting access, all dictating who can participate, and all limiting the opportunity to hold platforms to account, as my colleague Anne Helmond argues.
Increasingly, platforms are accessed indirectly through intermediaries which create their own rules – a cascade of rules for users to engage with. And platforms don’t just extend to the web but also to apps… as many of you have been writing about in regard to platforms and apps… Christian, if he were here today, would talk about the increasing role of platforms in this way…
And platforms reach out not only to users but also to non-users. These spaces are also contextual – with place, temporality and the role of commercial content all important here.
These rules can be characterised in different ways… There is a dichotomy of openness and closedness: much of what takes place is hidden and dictated by cascading rule sets. Then there is the issue of evaluation – what counts, for whom, and in what way? Taylorism refers to the mass production of small tasks – and platforms work in this fine-grained, algorithmic way. But platforms don’t earn money just from users’ repetitive actions, or from third parties’ use of platform data. They “put life to work” (Lazzarato), using data points in ways that raise questions of who counts and what counts.
Fieke Jansen, Tactical Tech
I work at an NGO, on the ground in real-world scenarios. And we are concerned with the Big Five: Apple, Amazon, Google, Microsoft and Facebook. How did we get here? The people we work with are uncomfortable with this. When we ask activists to draw the internet, they mostly draw a cloud. We asked at a session “what happens if the government bans Facebook?” and they cannot imagine it – and if Facebook is beyond government, then where are we? And I work with an open source company who use Google Apps for Business – that seems like an odd situation to me…
But I’ll leave the Big Five for now and turn to Bitnik… They built the Random Darknet Shopper, which bought random stuff for $50… and then placed the items in a gallery… Then there is ICWatch… After Wikileaks, an activist in Berlin took the revelations about NSA surveillance and worked out, from LinkedIn profiles, who was working for the secret services… That triggered a real debate… There was real discussion of it being anti-patriotic, of putting people at risk through data… But the data he used, from LinkedIn, is sold every day… He just used it in a way that raised debate. We allow that selling use… but this coder’s work was not allowed… Isn’t that debate needed?
So, back to the Big Five. In 2014 Google (now Alphabet) was the second biggest company in the world – worth more than Austria’s GDP. We choose to use many of their services every day… but many of their services are less in our face. In the sensor world we have fewer choices about data… And with the big companies it is political too… In Brussels you have to register lobbyists – there are 9 for Google, 7 of whom used to work for the European Parliament… There is a revolving door here.
There is also an issue of skill… Google has wealth, power and knowledge that are very hard to counter. Facebook has around 400m active users a month and 300m likes a day, and is worth around $190bn… And here we miss the political influence. They have an enormous drive to conquer the global south… They want to roll out Facebook Zero as “the internet”…
So, who rules the internet? It’s the 1% of the 1%… It is the Big Five, but also the venture capitalists who back them… Sequoia and Kleiner Perkins Caufield & Byers, and you have Peter Thiel… It is very few people behind many of the biggest companies including some of the Big Five…
People use these services because they work well and easily… I only use open source… Yes, it is harder… Why are so few questioning and critiquing that? We feed the beast on an everyday basis… It is our universities – moving to the Big Five’s platforms in preference to their own – it is our governments… and if we are not critical, what happens?
Panel Discussion
Cornelius: Many here study internet governance… So I want to ask, Kate, does AI rule the internet?
Kate: I think it is really hard to think about who rules the internet. Automated decision-making networks have been with us for a while… It’s less about ruling, and who… and more about entanglements, fragmentation and governance. We talk about the Big Five… I would probably say there are seven companies here, deciding how we get into university, healthcare, housing – filtering far beyond the internet… And governments do have a role to play.
Cornelius: How do we govern what we don’t understand?
Kate: That’s a hard question… It keeps me up at night. Governments look to us academics, the technology sector, NGOs, trying to work out what to do. We need really strong research groups to look at this – we have tried to do this with AI Now. Interdisciplinarity is crucial – these issues cannot be solved by computer science alone or social science alone… This is the biggest challenge of the next 50 years.
Cornelius: What about how national governments can legislate for Facebook, say? (I’m simplifying a longer question that I didn’t catch in time here, correction welcome!)
Carolin: I’m not sure about Facebook, but in our digital methods workshop we talked about how on Twitter content can be deleted, yet then be exposed in other locations via the API. And these services are specific and localised… We expect national governments to provide some governance, when what you understand and how you access information varies by location… increasing that uncanny notion. I also wanted to comment on something you asked Kate – thinking about the actors here, they all require the engagement of users, something Fieke pointed to. Those actors involved in ruling are dependent on the actions of other actors.
Cornelius: So how else could we be running these things? The Chinese option, the Russian option – are there better options?
Carolin: I think I cannot answer that – I’d want to put it to these 570 smart people for the next two days. My answer would be to acknowledge the distributedness to which we have to respond and react… We cannot understand algorithms and AI without understanding context…
Carolin: Fieke, what you talked about… to be extreme… are we whining because, as Europeans, we are being colonised by other areas of the world, even as we use and are obsessed by our devices and tools – complaining and then checking our iPhones? I’m serious… If we did care that much, maybe actions would change… You said people have the power here; maybe it’s not a big enough issue…
Fieke: Is it Europeans concerned about Americans from a libertarian point of view? Yes. I work mainly in non-European parts of the world, and particularly in North America… For many the internet is seen as magical and neutral – but those of us who research it know it is not. But when you ask why people use tools, it’s their friends or community. If you ask them who owns it, that raises questions framed in a relevant way. The framing has to fit people’s reality. In South America, if you talk of Facebook Zero as the new colonialism, you will have a political conversation… But we also don’t always know why we are uncomfortable… It can feel abstract and distant, and the concern is momentary. Outside of this field, people don’t think about it.
Kate: Your provocation is that we could just step away, and move to open source. But the reality includes opportunity costs to employment, to friends and family… And even if you do none of those things, you walk down the street and you are tracked by sensors, by other devices…
Fieke: I absolutely agree. All the data collected beyond our control is the concern… But we can’t just roll over and die, we have to try and provoke and find mechanisms to play…
Kate: I think that idea of what the political levers may be… Those conversations about legal, ethical and technical parameters seem crucial – more than consumer choice. But I don’t think we have sufficient collective models of changing information ecologies… and they are changing so rapidly.
Q&A
Q1) Thank you for these wonderful talks and perspectives. You talked about the infrastructure layer… What about that question: you say this 1% of the 1% own the internet, but do they own the infrastructure? Facebook is trying to balloon in the internet so that they cannot be cut off… Also – second question – it used to be said that “you” own the internet, and that changed the dominance of big companies… This happens in history quite often… So what about that?
A1 – Fieke) I think Kate talked about the many levels of ownership… Facebook piggybacks on other infrastructures; Google does the balloons. It used to be that governments owned the infrastructure. There are new cables rolling out – EU funding, governments, private companies, rich people… The infrastructure is mainly owned by companies now.
A1 – Kate) I think infrastructure studies has been extraordinarily rich – the work of Nicole Starosielski for instance – but we also have art responses. Infrastructure is very of the moment… But what happens next… It is not just about infrastructures and their ownership, but also surveillance access to them. There are things like mesh networks… And there are people working here in Berlin to flag up fake police networks during protests, to help protestors protect themselves.
A1 – Carolin) I think that platforms would have argued differently ten years ago about who owned the internet – but “you” probably wouldn’t have been the answer…
Q2) I wonder if the real issue is that we are running on very vague ideas of government that have been established for a very different world. People are responding to elections and referenda in very irrational ways that suggest that model is not fit for purpose. Is there a better form of governance or democracy that we should move towards? Can AI help us there?
A2 – Kate) What a beautiful and impossible to answer question! Obviously I cannot answer that properly but part of the reason I do AI research is to try to inform and shape that… Hence my passion for building research in this space. We don’t have much data to go on but the imaginative space here has been dominated by those with narrow ideas. I want to think about how communities can develop and contribute to AI, and what potential there is.
Q3) Do we need to rethink what we mean by democratic control and regulation… Regulation is closely associated with nation states, but that’s not the context in which most of the internet operates. Do we need to re-engage with the question of globalisation again?
A3) As Carolin said, who is the “you” in web 2.0, and whose narrative is that? Globalisation is similar. I pay taxes to a nation state that has rule of law and governance… Denying that buys into the narrative of, mainly, internet companies and huge multinational organisations.
Cornelius: I have John Perry Barlow’s declaration of the independence of cyberspace here, which I was tempted to quote to you… But it is interesting to reflect on how we have moved from utopian positions to where we are today.
Q4 – participant from Google!) There is an interesting assumption here… as if this question were pointing to a deeper truth… A clear ruler, and a single internet, would allow the question of who rules to be answered. I would ask instead how we have agency over the proliferation of internet technologies and how we benefit from them…?
A4 – Kate) A great title, but long for the programme! Your phrasing is so interesting – if it is so diverse and complex, then how we engage is crucial. I think that is important and – the optimistic part – I think we can do this.
A4 – Carolin) One way to engage is through dissent… and negotiating on a level that ensures platforms work beyond economic values…
Q5) The last time I was forced to give away my data it was by the Australian state (where I live), in completing the census… I had to complete it or I would be fined over $1000 AUD – Facebook, Twitter, etc. never did that… I rule this kind of internet; I am still free in my choices. But on the other hand, why is it that the states that are best at governing platforms are the ones I want to live in the least? Maybe without the platforms no-one would use the internet, so we’d have one problem fewer… If we as academics think about platforms in these mythic ways, maybe we end up governing in a way that is more controlled and has undesirable effects.
A5 – Kate) Many questions there; I’ll address two of them. On the census I’d refer you to articles on the University of Cambridge study that showed huge accuracy in determining marital status, sexuality and whether someone is a drug or alcohol user, based on Facebook likes… You may feel free, but those data patterns are being built. And we have to move beyond thinking that only by active participation do you contribute to these platforms…
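(Liveblogger’s note for context: the pipeline described in that kind of study is, roughly, dimensionality reduction over a sparse user-by-like matrix followed by a simple classifier. The sketch below is my own illustration on synthetic data – the sizes, model choices and any resulting accuracy are assumptions, not the study’s actual code or results.)

```python
# Rough sketch of like-based trait prediction: reduce a sparse
# user-by-like matrix, then fit a classifier on the components.
# Synthetic data throughout - not the study's actual code or data.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 1000, 500

# Binary user x like matrix (1 = user liked that page), kept sparse.
likes = sparse_random(n_users, n_likes, density=0.05, random_state=0).tocsr()
likes.data[:] = 1.0

# A synthetic binary trait standing in for the attribute being predicted.
trait = rng.integers(0, 2, size=n_users)

# Reduce the like matrix to a small number of components.
components = TruncatedSVD(n_components=50, random_state=0).fit_transform(likes)

X_train, X_test, y_train, y_test = train_test_split(
    components, trait, test_size=0.2, random_state=0)

# With real likes and a real trait, this is where the reported accuracy
# comes from; on random data it stays near chance.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```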
A5 – Fieke) The census issue you brought up is interesting… In the UK, US and Australia the census is run by a contractor that is one of the world’s biggest arms manufacturers… You don’t give data to the Big Five… but still… So we do need to question the politics behind our actions. There is also a perception that having technical skills makes you superior to those without – and if we do that, we create a whole new class system, and that raises whole new questions.
Q6) The question of the internet raises issues of boundaries, and of how we do governance and rule-making. Ideally, when we do that governance and rule-making, there are values behind it… So what are the values that you think need to underlie those structures and systems?
A6 – Carolin) I think values that do not discriminate against people through algorithmic processing, AI, etc. Those tools should allow people not to be discriminated against on the basis of things they have done in the past… But that requires understanding of how that discrimination takes place now…
A6 – Kate) I love that question… All of these layers of control come with values baked in; we just don’t know what they are… I would be interested to see what values fall out of those systems that don’t fit the easy metricisation of our world. Some great things could fall out of feminist and race theory, and the values from those…
A6 – Fieke) I would add that values should not just be about the individual, and should ensure that the collective is also considered…
Cornelius: Thank you for offering a glimmer of hope! Thank you all!