Jun 30, 2017

Today I’m at ReCon 2017, giving a presentation (flying the flag for the unconference sessions!) later today, but also looking forward to a day full of interesting presentations on publishing for early career researchers.

I’ll be liveblogging (except for my session) and, as usual, comments, additions, corrections, etc. are welcomed. 

Jo Young, Director of the Scientific Editing Company, is introducing the day and thanking the various ReCon sponsors. She notes: ReCon started about five years ago (with a slightly different name). We’ve had really successful events – and you can explore them all online. We have had a really stellar list of speakers over the years! And on that note…

Graham Steel: We wanted to cover publishing at all stages, from preparing for publication, submission, journals, open journals, metrics, alt metrics, etc. So our first speakers are really from the mid point in that process.

SESSION ONE: Publishing’s future: Disruption and Evolution within the Industry

100% Open Access by 2020 or disrupting the present scholarly comms landscape: you can’t have both? A mid-way update – Pablo De Castro, Open Access Advocacy Librarian, University of Strathclyde

It is an honour to be at this well attended event today. Thank you for the invitation. It’s a long title but I will be talking about how things are progressing towards this goal of full open access by 2020, and to what extent institutions, funders, etc. are being able to introduce disruption into the industry…

So, a quick introduction to me. I am currently at the University of Strathclyde library, having joined in January. It’s quite an old university (founded 1796) and a medium size university. Prior to that I was working at The Hague on the EC FP7 Post-Grant Open Access Pilot (OpenAIRE), providing funding to cover OA publishing fees for publications arising from completed FP7 projects. Maybe not the most popular topic in the UK right now but… The main point of explaining my context is that this EU work was more of a funder’s perspective, and now I’m able to compare that to more of an institutional perspective. As a result of this pilot there was a report commissioned by a British consultant: “Towards a competitive and sustainable open access publishing market in Europe”.

One key element in this open access EU pilot was the OA policy guidelines which acted as key drivers, and made eligibility criteria very clear. Notable here: publications in hybrid journals would not be funded, only those in fully open access journals; and a cap of no more than €2000 for research articles and €6000 for monographs. That was an attempt to shape the costs and ensure accessibility of research publications.

So, now I’m back at the institutional open access coalface. Lots has changed in two years. And it’s great to be back in this space. It is allowing me to explore ways to better align institutional and funder positions on open access.

So, why open access? Well, in part this is about more exposure for your work, higher citation rates, and compliance with grant rules. But it’s also about use and reuse: researchers in developing countries, practitioners who can apply your work, policy makers, and the public and taxpayers can all access your work. In terms of the wider open access picture in Europe, there was a meeting in Brussels last May where European leaders called for immediate open access to all scientific papers by 2020. It’s not easy to achieve that but it does provide a major driver… However, across these countries we have EU member states with different levels of open access. The UK, Netherlands, Sweden and others prefer “gold” access, whilst Belgium, Cyprus, Denmark, Greece, etc. prefer “green” access, partly because the cost of gold open access is prohibitive.

Funders’ policies are a really significant driver towards open access. Funders include Arthritis Research UK, Bloodwise, Cancer Research UK, Breast Cancer Now, British Heart Foundation, Parkinson’s UK, Wellcome Trust, Research Councils UK, HEFCE, the European Commission, etc. Most support green and gold, and will pay APCs (Article Processing Charges), but it’s fair to say that early career researchers are not always at the front of the queue for getting those paid. HEFCE in particular have a green open access policy: research outputs from any part of the university must be made open access or they will not be eligible for the REF (Research Excellence Framework) and, as a result, compliance levels are high – probably top of Europe at the moment. The European Commission supports green and gold open access, but typically green as this is more affordable.

So, there is a need for quick progress at the same time as ongoing pressure on library budgets – we pay both for subscriptions and for APCs. Offsetting agreements, which discount subscriptions by APC charges, could be a good solution. There are pros and cons here. In principle they will allow quicker progress towards OA goals, but they will disproportionately benefit legacy publishers. They bring publishers into APC reporting – right now APCs are sometimes invisible to the library as they are paid by researchers, so this is a shift and a challenge. It’s supposed to be a temporary stage towards full open access. And it’s a very expensive intermediate stage: not every country can or will afford it.

So how can disruption happen? Well, one way to deal with this would be through policies – such as not funding hybrid journals (as done in OpenAIRE). And disruption is happening (legal or otherwise), as we can see in Sci-Hub usage, which comes from all around the world, not just developing countries. Legal routes are possible in licensing negotiations. In Germany there is Projekt DEAL being negotiated. And this follows similar negotiations by openaccess.nl. At the moment Elsevier is the only publisher not willing to include open access journals.

In terms of tools… The EU has just announced plans to launch its own platform for funded research to be published. And the Wellcome Trust already has a space like this.

So, some conclusions… Open access is unstoppable now, but it still needs to generate sustainable and competitive implementation mechanisms. It is getting more complex and difficult to disseminate research – that’s a serious risk. Open access will happen via a combination of strategies and routes – internal fights just aren’t useful (e.g. green vs gold). The temporary stage towards full open access needs to benefit library budgets sooner rather than later. And the power here really lies with researchers, whom OA advocates aren’t always able to keep informed. It is important that you know which journals are open and which are hybrid, and why that matters. And we need to ask whether informing authors on where it would make economic sense to publish is beyond the remit of institutional libraries.

To finish, some recommended reading:

  • “Early Career Researchers: the Harbingers of Change” – Final report from Ciber, August 2016
  • “My Top 9 Reasons to Publish Open Access” – a great set of slides.


Q1) It was interesting to hear about offsetting. Are those agreements one-off? continuous? renewed?

A1) At the moment they are one-off and intended to be a temporary measure. But they will probably mostly get renewed… National governments and consortia want to understand how useful they are, how they work.

Q2) Can you explain green open access and gold open access and the difference?

A2) In gold open access, the author pays to make their paper open on the journal website. If that’s a hybrid – i.e. subscription – journal you essentially pay twice, once to subscribe, once to make it open. Green open access means that your article goes into your repository (after any embargo), into the worldwide repository landscape (see: https://www.jisc.ac.uk/guides/an-introduction-to-open-access).

Q3) As much as I agree that choices of where to publish are for researchers, there are other factors. The REF pressures you to publish in particular ways. Where can you find more on the relationships between different types of open access and impact? I think that can help.

A3) There are quite a number of studies. For instance, on whether APCs are related to impact factor – several studies there. In terms of the REF, funders like Wellcome are desperate to move away from the impact factor. It is hard but evolving.

Inputs, Outputs and emergent properties: The new Scientometrics – Phill Jones, Director of Publishing Innovation, Digital Science

Scientometrics is essentially the study of science metrics and evaluation of these. As Graham mentioned in his introduction, there is a whole complicated lifecycle and process of publishing. And what I will talk about spans that whole process.

But, to start, a bit about me and Digital Science. We were founded in 2011 and are wholly owned by the Holtzbrinck Publishing Group, which owns the Nature group. Being privately funded we are able to invest in innovation by researchers, for researchers, trying to create change from the ground up. Things like Labguru – a lab notebook (like RSpace); Altmetric; Figshare; ReadCube; Peerwith; Transcriptic – an IoT company; etc.

So, I’m going to introduce a concept: the Evaluation Gap. This is the difference between the metrics and indicators currently or traditionally available, and the information that those evaluating your research might actually want to know. Funders. Tenure panels – hiring and promotion panels. Universities – your institution, your office of research management. Government, funders, policy organisations – all want to achieve something with your research…

So, how do we close the evaluation gap? Introducing altmetrics. It adds to academic impact with other types of societal impact – policy documents, grey literature, mentions in blogs, peer review mentions, social media, etc. What else can you look at? Well, you can look at grants being awarded… When you see a grant awarded for a new idea, it then publishes… someone else picks up and publishes… That can take a long time, so grants can tell us things before publications. You can also look at patents – a measure of commercialisation and potential economic impact further down the line.

So you see an idea germinate in one place, work with collaborators at the institution, spreading out to researchers at other institutions, and gradually out into the big wide world… As that idea travels outward it gathers more metadata, more impact, more associated materials, ideas, etc.

And at Digital Science we have innovators working across that landscape, along that scholarly lifecycle… But there is no point having that much data if you can’t understand and analyse it. You have to classify that data first. Historically that was done by subject area, but increasingly research is interdisciplinary; it crosses different fields. So single tags/subjects are not useful, you need a proper taxonomy to apply here. And there are various ways to do that. You need keywords and semantic modelling, and you can choose to:

  1. Use an existing one if available, e.g. MeSH (Medical Subject Headings).
  2. Consult with subject matter experts (the traditional way to do this, could be editors, researchers, faculty, librarians who you’d just ask “what are the keywords that describe computational social science”).
  3. Text mine abstracts or full text articles (using the content to create a list from your corpus with bag of words/frequency of words approaches, for instance, to help you cluster and find the ideas, with a taxonomy emerging).
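As a toy illustration of that third, bag-of-words route (the function names and sample abstracts below are my own, not Digital Science’s actual pipeline), you can count word frequencies per abstract and compare documents by cosine similarity, which is the basic signal clustering approaches build on:

```python
from collections import Counter
import math
import re

def bag_of_words(text):
    """Tokenise and count word frequencies, dropping a few common stopwords."""
    stopwords = {"the", "a", "of", "and", "in", "to", "for", "on", "is"}
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t not in stopwords)

def cosine(a, b):
    """Cosine similarity between two word-frequency vectors (0 = no overlap)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

abstracts = [
    "gravitational waves detected from binary black hole mergers",
    "black hole mergers and gravitational wave astronomy",
    "social media metrics for scholarly publishing impact",
]
vectors = [bag_of_words(a) for a in abstracts]

# Abstracts on the same topic score higher than unrelated ones – the basis
# for clustering documents so that subject groupings (a taxonomy) emerge.
print(cosine(vectors[0], vectors[1]) > cosine(vectors[0], vectors[2]))  # True
```

A real system would use a far larger corpus, proper stopword lists, and weighting such as TF-IDF, but the clustering intuition is the same.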

Now, we are starting to take that text mining approach. But to be of use that data needs to be cleaned and curated. So we hand curated a list of institutions to go into GRID: the Global Research Identifier Database, to understand organisations and their relationships. Once you have that all mapped you can look at ISNI, CrossRef databases, etc. And when you have that organisational information you can include georeferences to visualise where organisations are…

An example that we built for HEFCE was the Digital Science BrainScan. The UK has a dual funding model where there is both direct funding and block funding, the latter awarded by HEFCE and distributed according to the most impactful research as understood by the REF. So, with our BrainScan, we mapped research areas, connectors, etc. to visualise subject areas, their impact, and clusters of strong collaboration, to see where there are good opportunities for funding…

Similarly we visualised text mined impact statements across the whole corpus. Each impact is captured as a coloured dot. Clusters show similarity… Where things are far apart, there is less similarity. And that can highlight where there is a lot of work on, for instance, management of rivers and waterways… And these connections weren’t obvious, as they cut across disciplines…


Q1) Who do you think benefits the most from this kind of information?

A1) In the consultancy we have clients across the spectrum. In the past we have mainly worked for funders and policy makers to track effectiveness. Increasingly we are talking to institutions wanting to understand strengths, to predict trends… And by publishers wanting to understand if journals should be split, consolidated, are there opportunities we are missing… Each can benefit enormously. And it makes the whole system more efficient.

Against capital – Stuart Lawson, Birkbeck University of London

So, my talk will be a bit different. The arguments I will be making are not in opposition to any of the other speakers here, but are about critically addressing the ways we currently work, and how publishing works. I have chosen to speak on this topic today as I think it is important to make visible the political positions that underlie our assumptions and the systems we have in place today. There are calls to become more efficient but I disagree… Ownership and governance matter at least as much as the outcome.

I am an advocate for open access and I am currently undertaking a PhD looking at open access and how our discourse around this has been co-opted by neoliberal capitalism. And I believe these issues aren’t technical but social, and reflect inequalities in our society; any company claiming to benefit society while operating as a commercial company should raise questions for us.

Neoliberalism is a political project to reshape all social relations to conform to the logic of capital (this is the only slide; apparently a written and referenced copy will be posted on Stuart’s blog). This system turns us all into capital, entrepreneurs of ourselves – quantification and metricification, whether through tuition fees that put a price on education and turn students into consumers selecting based on rational indicators of future income, or through pitting universities against each other rather than encouraging collaboration. It isn’t just overtly commercial, but about applying ideas of the market to all elements of our work – high impact factor journals, metrics, etc. in the service of proving our worth. If we do need metrics, they should be open and nuanced; but if we only do metrics for people’s own careers, and perform for careers and promotion, then these play into neoliberal ideas of control. I fully understand the pressure – it is hard to live and do research without engaging and playing the game. It is easier to choose not to do this if you are in a position of privilege, and that reflects and maintains inequalities in our organisations.

Since power relations are often about labour and worth, this is inevitably part of work, and the value of labour. When we hear about disruption in the context of Uber, it is about disrupting the rights of workers and labour unions; it ignores the needs of the people who do the work. It is a neoliberal idea. I would recommend seeing Audrey Watters’ recent presentation for the University of Edinburgh on the “Uberisation of Education”.

The power of capital in scholarly publishing, and neoliberal values in our scholarly processes… When disruptors align with the political forces that need to be dismantled, I don’t see that as useful or properly disruptive. Open access is a good thing in terms of access. But there are two main strands of policy… Research Councils have spent over £80m to pay researchers’ APCs. Publishing open access does not require payment of fees – there are OA journals funded in other ways. But if you want the high end visible journals they are often hybrid journals, and 80% of that RCUK spend has been on hybrid journals. So work is being made open access, but right now this money flows from public funds to a small group of publishers – who take a 30-40% profit – and that system was set up to continue benefitting publishers. You can share or publish to repositories… Those are free to deposit and use. The concern with OA policy is the connection to the REF; it constrains where you can publish and what that means, and everything must always be measured within this restricted structure. It can be seen as compliance rather than a progressive movement toward social justice. But open access is having a really positive impact on the accessibility of research.

If you are angry at Elsevier, then you should also be angry at Oxford University and Cambridge University, and others for their relationships to the power elite. Harvard made a loud statement about journal pricing… It sounded good, and they have a progressive open access policy… But it is also bullshit – they have huge amounts of money… There are huge inequalities here in academia and in relationship to publishing.

And I would strongly recommend reading some history on the inequalities, and the racism and capitalism that were inherent to the founding of higher education, so that we can critically reflect on what type of system we really want for discovering and sharing scholarly work. Things have evolved over time – somewhat inevitably – but we need to be more deliberative so that universities are more accountable in their work.

To end on a more positive note, technology is enabling all sorts of new and inexpensive ways to publish and share. But we don’t need to depend on venture capital. Collective and cooperative running of organisations in these spaces – such as cooperative centres for research… There are small scale examples that show the principles, and that this can work. Writing, reviewing and editing is already being done by the academic community; let’s build governance and process models to continue that, to make it work, to ensure work is rewarded but that the driver isn’t commercial.


Comment) That was awesome. A lot of us are here to learn how to play the game. But the game sucks. I am a professor; I get to do a lot of fun things now, because I played the game… We need a way to have people able to do their work without that game. But we need something more specific than socialism… Libraries used to publish academic data… Lots of these metrics are there and useful… And I work with them… But I am conscious that we will be fucked by them. We need a way to react to that.

Redesigning Science for the Internet Generation – Gemma Milne, Co-Founder, Science Disrupt

Science Disrupt runs regular podcasts, events, and a Slack channel for scientists, start ups, VCs, etc. Check out our website. We talk about five focus areas of science. Today I wanted to talk about redesigning science for the internet age. My day job is in journalism and I think a lot about start ups, and about how we can influence academia, how success manifests itself in the internet age.

So, what am I talking about? Things like Pavegen – power generating paving stones. They are all over the news! The press love them! BUT the science does not work, the physics does not work…

I don’t know if you heard about Theranos, which promised all sorts of medical testing from one drop of blood, attracted millions in investment, and it all fell apart. But its founder too had tons of coverage…

I really like science start ups, I like talking about science in a different way… But how can I convince the press, the wider audience, what is good stuff, and what is just hype, not real… One of the problems we face is that if you are not engaged in research you either can’t access the science, or can’t read it even if you can access it… This problem is really big and it influences where money goes and what sort of stuff gets done!

So, how can we change this? There are amazing tools to help (Authorea, Overleaf, protocols.io, Figshare, Publons, LabWorm) and this is great and exciting. But I feel it is very short term… Trying to change something that doesn’t work anyway… Doing collaborative lab notes a bit better, publishing a bit faster… OK… But is it good for sharing science? Thinking about journalists and corporates – they don’t care about academic publishing, it’s not where they go for scientific information. How do we rethink that… What if we were to rethink how we share science?

AirBnB and Amazon are on my slide here to make the point of the difference between incremental change vs. real change. AirBnB addressed issues with hotels, issues of hotels being samey… They didn’t build a hotel; instead they thought about what people want when they travel, what mattered to them… Similarly Amazon didn’t try to incrementally improve supermarkets… They did something different. They dug to the bottom of why something exists and rethought it…

Imagine science was “invented” today (ignore all the realities of why that’s impossible). But imagine we think of this thing, we have to design it… How do we start? How will I ask questions, find others who ask questions…

So, a bit of a thought experiment here… Maybe I’d post a question on reddit, set up my own sub-reddit. I’d ask questions, ask why they are interested… Create a big thread. And if I have a lot of people, maybe I’ll have a Slack with various channels about all the facets around a question, invite people in… Use the group to project manage this project… OK, I have a team… Maybe I create a Meet Up Group for that same question… Get people to join… Maybe 200 people are now gathered and interested… You gather all these folk into one place. Now we want to analyse ideas. Maybe I share my question and initial code on GitHub, find collaborators… And share the code, make it open… Maybe it can be reused… It has been collaborative at every stage of the journey… Then maybe I want to build a microscope or something… I’d find the right people, I’d ask them to join my Autodesk 360 to collaboratively build engineering drawings for fabrication… So maybe we’ve answered our initial question… So maybe I blog that, and then I tweet that…

The point I’m trying to make is, there are so many tools out there for collaboration, for sharing… Why aren’t more researchers using these tools that are already there? Rather than designing new tools… These are all ways to engage and share what you do, rather than just publishing those articles in those journals…

So, maybe publishing isn’t the way at all? I get the “game”, but I am frustrated about how we properly engage and really get your work out there, getting industry to understand what is going on. There are lots of people innovating in new ways… You can use stuff in papers that isn’t being picked up… But see what else you can do!

So, what now? I know people are starved for time… But if you want to really make that impact that you think matters… I understand there is a concern around scooping… But there are ways to deal with that… And if you want to know about all these tools, do come talk to me!


Q1) I think you are spot on with vision. We want faster more collaborative production. But what is missing from those tools is that they are not designed for researchers, they are not designed for publishing. Those systems are ephemeral… They don’t have DOIs and they aren’t persistent. For me it’s a bench to web pipeline…

A1) Then why not create a persistent archived URI – a webpage where all of a project’s content is shared. 50% of all academic papers are only read by the person that published them… These stumbling blocks in the way of sharing… It is crazy… We shouldn’t just stop and not share.

Q2) Thank you, that has given me a lot of food for thought. The issue of work not being read, I’ve been told that by funders so very relevant to me. So, how do we influence the professors… As a PhD student I haven’t heard about many of those online things…

A2) My co-founder of Science Disrupt is a computational biologist and PhD student… My response would be about not asking, just doing… Find networks, find people doing what you want. Benefit from collaboration. Sign an NDA if needed. Find the opportunity, then come back…

Q3) I had a comment and a question. Code repositories like GitHub are persistent and you can find a great list of code repositories and meta-articles around those on the Journal of Open Research Software. My question was about AirBnB and Amazon… Those have made huge changes but I think the narrative they use now is different from where they started – and they started more as incremental change… And they stumbled on bigger things, which looks a lot like research… So… How do you make that case for the potential long term impact of your work in a really engaging way?

A3) It is the golden question. Need to find case studies, to find interesting examples… a way to showcase similar examples… and how that led to things… Forget big pictures, jump the hurdles… Show that bigger picture that’s there but reduce the friction of those hurdles. Sure those companies were somewhat incremental but I think there is genuinely a really different mindset there that matters.

And we now move to lunch. Coming up…

UNCONFERENCE SESSION 1: Best Footprint Forward – Nicola Osborne, EDINA

This will be me – talking about managing a digital footprint and how robust web links are part of that lasting digital legacy – so no post from me, but you can view my slides on Managing Your Digital Footprint and our Reference Rot in Theses: A HiberActive Pilot here.

SESSION TWO: The Early Career Researcher Perspective: Publishing & Research Communication

Getting recognition for all your research outputs – Michael Markie, F1000

I’m going to talk about things you do as researchers that you should get credit for, not just traditional publications. This week in fact there was a very interesting article on the history of science publishing: “Is the staggeringly profitable business of scientific publishing bad for science?”. Publishers came out of that poorly… And I think others are at fault here too, including the research community… But we do have to take some blame.

There’s no getting away from the fact that the journal is the coin of the realm: for career progression, institutional reporting, grant applications. For the REF, will there be impact factors? The REF says maybe not, but institutions will be tempted to use them to prioritise. Publishing is being looked at by impact factor…

And it’s not just where you publish. There are other things that you do in your work which you should get more credit for. Data; software/code – in bioinformatics there are new software tools that are part of the research, and are they getting the recognition they should?; all results – not just the successes but also the negative results… Publishers want cool and sexy stuff, but realistically we are funded for this, we should be able to publish and be recognised for it; peer review – there is no credit for it, yet peer reviews often improve articles and warrant credit; expertise – all the authors who added expertise, including non-research staff; everyone should know who contributed what…

So I see research as being more than a journal article. Right now we just package it all up into one tidy thing, but we should be fitting into that bigger picture. So, I’m suggesting that we need to disrupt it a bit more and publish in a different way… Publishing introduces delays – of up to a year. Journals don’t really care about data… That’s a real issue for reproducibility. And there is bias involved in publishing; there is a real lack of transparency in publishing decisions. All of the above means there is real research waste. At the same time there is demand for results, for quicker action, for wider access to work.

So, at F1000 we have been working on ways to address these issues. We launched Wellcome Open Research, and after launching that the Bill & Melinda Gates Foundation contacted us to build a similar platform. And we have also built an open research model for UCL Child Health (at Great Ormond Street).

The process involves sending a paper in, and checking there is no plagiarism and that ethics are appropriate. But no other filtering. That can take up to 7 days. Then we ask for your data – no data, no publication. Then once the publication and data deposition are made, the work is published and an open peer review and user commenting process begins; reviewers are named and credited, and they contribute to improving the article and to the article revision. Those reviewers have three options: approved, approved with reservations, or not approved as it stands. So to get into PMC and be indexed in PubMed you need two “approved” statuses, or two “approved with reservations” and one “approved”.
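As a rough sketch of that indexing gate (the function name and status strings are my own encoding, not F1000’s actual implementation – and note the rule as stated requires one “approved” alongside two “approved with reservations”):

```python
def eligible_for_indexing(statuses):
    """Hypothetical encoding of the F1000-style rule described above:
    a paper qualifies for PMC/PubMed indexing with two 'approved'
    reviews, or with one 'approved' plus two 'approved with reservations'."""
    approved = statuses.count("approved")
    reservations = statuses.count("approved with reservations")
    return approved >= 2 or (approved >= 1 and reservations >= 2)

print(eligible_for_indexing(["approved", "approved"]))                        # True
print(eligible_for_indexing(["approved", "approved with reservations",
                             "approved with reservations"]))                  # True
print(eligible_for_indexing(["approved with reservations", "not approved"]))  # False
```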

So this connects to lots of stuff… For data, that’s DataCite, Figshare, Plotly, the Resource Identification Initiative. For software/code we work with Code Ocean, Zenodo, GitHub. For all results we work with PubMed, and you can publish other formats… etc.

Why are funders doing this? The Wellcome Trust spent £7m on APCs last year… So this platform is partly a service to stakeholders, with complementary capacity for all research findings. We are testing a new approach to improve science and its impact: to accelerate access and sharing of findings and data; efficiency to reduce waste and support reproducibility; an alternative OA model; etc.

Make an impact, know your impact, show your impact – Anna Ritchie, Mendeley, Elsevier

A theme across the day is that there is increasing pressure and challenges for researchers. It’s never been easier to get your work out – new technology, media, platforms. And yet, it’s never been harder to get your work seen: more researchers, producing more outputs, dealing with competition. So how do you ensure you and your work make an impact? Options mean opportunities, but also choices. Traditional publishing is still important – but not enough. And there are both older and newer ways to help make your research stand out.

The Publishing Campus is a big thing here. These are free resources to support you in publishing. There are online lectures, interactive training courses, and expert advice. And things happen live – webinars, online lectures (e.g. Top 10 Tips for Writing a Really Terrible Journal Article!), interactive courses. There are suites of materials around publishing and around developing your profile.

At some point you will want to look at choosing a journal. Metrics may be part of what you use to choose a journal – but use both quantitative and qualitative (e.g. ask colleagues and experts). You can also use Elsevier Journal Finder – you can search for your title and abstract and subject areas to suggest journals to target. But always check the journal guidance before submitting.

There is also the opportunity for article enrichments which will be part of your research story – 2D radiological data viewer, R code Viewer, Virtual Microscope, Genome Viewer, Audioslides, etc.

There are also less traditional journals: Heliyon covers all disciplines, so you can report your original and technically sound results of primary research regardless of perceived impact. MethodsX is entirely about methods work. Data in Brief allows you to describe your data to facilitate reproducibility, make it easier to cite, etc. And an alternative to a data article is to add datasets on Mendeley.

And you can also use Mendeley to understand your impact through Mendeley Stats. There is a very detailed dashboard for each publication – this is powered by Scopus so works for all articles indexed in Scopus. Stats like users, Mendeley users with that article in their library, citations, related works… And you can see how your article is being shared. You can also show your impact on Mendeley, with a research profile that is as comprehensive as possible – not just your publications but with wider impacts, press mentions… And enabling you to connect to other researchers, to other articles and opportunities. This is what we are trying to do to make Mendeley help you build your online profile as a researcher. We intend to grow those profiles to give a more comprehensive picture of you as a researcher.

And we want to hear from you. Every journal, platform, and product is co-developed with ongoing community input. So do get in touch!

How to share science with hard to reach groups and why you should bother – Becky Douglas

My background is physics, high energy gravitational waves, etc… As I was doing my PhD I got very involved in science engagement. Hopefully most of you think about science communication and public outreach as being a good thing. It does seem to be something that arises in job interviews and performance reviews. I’m not convinced that everyone should do this – not everyone enjoys or is good at it – but there is huge potential if you are enthusiastic. And there is more expectation on scientists to do this: to gain recognition, to help bring trust back to scientists, and to right some misunderstandings. And by the way, talks and teaching don’t count here.

And not everyone goes to science festivals. It is up to us to provide alternative and interesting things for those people. There are a few people who won’t be interested in science… But there are many more people who don’t have time or don’t see the appeal to them. These people deserve access to new research… And there are many ways to communicate that research. New ideas are always worth doing, and can attract new people and get dialogue you’d never expect.

So, article writing is a great way to reach out… Not just in science magazines (or on personal blogs). Newspapers and magazines will often print science articles – reach out to them. And you can pitch other places too – Cosmo prints science. Mainstream publications are desperate for people who understand science to write about it in engaging ways – sometimes you’ll be paid for your work as well.

Schools are obvious, but they are great ways to access people from all backgrounds. You’ll do extra well if you can connect it to the current curriculum! Put the effort in to build a memorable activity or event. Send them home with something fun and you may well reach parents as well…

More unusual events would be things like theatre, for instance Lady Scientists Stitch and Bitch. Stitch and Bitch is an international thing where you get together and sew and craft and chat. So this show was a play which was about travelling back in time to gather all the key lady scientists, and they sit down to discuss science over some knitting and sewing. Because it was theatre it was an extremely diverse group, not people who usually go to science events. When you work with non scientists you get access to a whole new crowd.

Something a bit more unusual… Soapbox Science, which I brought to Glasgow in 2015. It’s science busking where you talk about your cutting edge research. Often attached to science festivals but out in public, to draw a crowd from those shopping, or visiting museums, etc. It’s highly interactive. Most attendees had not been to a science event before – they didn’t go out to see science, but they enjoyed it…

And finally, interact with local communities. The WI runs science events, as do Scouts and Guides, meet-up groups… You can just contact and reach out to those groups. They have questions of their own. It allows you to speak to really interesting groups. But it does require lots of time. I was based in Glasgow, now in Falkirk, and I’ve just done some of this with schools in the Gorbals, where we knew that the kids rarely go on to science subjects…

So, this is really worth doing. Your work, if it is tax-payer funded, should be accessible to the public. Some people don’t think they have an interest in science – some are right, but others just remember dusty chalkboards and bland textbooks. You have to show them it’s something more than that.

What helps or hinders science communication by early career researchers? – Lewis MacKenzie

I’m a postdoc at the University of Leeds. I’m a keen science communicator and I try to get out there as much as possible… I want to talk about what helps or hinders science communication by early career researchers.

So, who are early career researchers? Well, undergraduates are a huge pool of early career researchers and scientists which tends to be untapped; also PhD students; also postdocs. There are some shared barriers here: travel costs, time… That is especially the case in inaccessible parts of Scotland. There is a real issue that science communication is not recognised as work (or training). And not all supervisors have a positive attitude to science communication. As well as all the other barriers to careers in science, of course.

Let’s start with science communication training. I’ve been through the system as an undergraduate, PhD student and postdoc. A lot of training is (rightly) targeted at PhD students, often around writing, conferences, elevator pitches, etc. But there are issues/barriers for ECRs: proactive sci comm is often not formally recognised as training/CPD/workload – especially at evenings and weekends. Undergraduate sci comm modules are minimal or non-existent. You get dedicated sci comm masters courses now, so there is lots to explore, but there are relatively poor sci comm training opportunities for postdocs. And across the board, media skills training is pretty much limited – how do you make YouTube videos, podcasts, web comics, or write in a magazine? – and that’s where a lot of science communication takes place!

Sci comm in schools includes some great stuff. STEMNET is an excellent route for ECRs, industry, retirees, etc. to volunteer, with some basic training, background checks, and a contact hub for schools and volunteers. However, it is a confusing school system (especially in England), with confusing curricula – how do you do age-appropriate communication? And just getting to the schools can be tricky – most PhDs and sci comm people won’t have a car. It’s basic but important as a barrier.

Science Communication Competitions are quite widespread. They tend to be aimed at PhD students, incentives being experience, training and prizes. But there are issues/barriers for ECRs – often conventional “stand and talk” format; not usually collaborative – even though team work can be brilliant, the big famous science communicators work with a team to put their shows and work together; intense pressure of competitions can be off putting… Some alternative formats would help with that.

Conferences… Now there was a tweet earlier this week from @LizyLowe suggesting that every conference should have a public engagement strand – how good would that be?!

Research Grant “Impact Plans”: major funders now require “impact plans” revolving around science communication. That makes time and money for science communication, which is great. But there are issues: the grant writer often designates activities before ECRs are recruited; these prescriptive impact plans aren’t very inspiring for ECRs; and money may be inefficiently spent on things like expensive web design. I think we need a more agile approach that includes input from ECRs once they are recruited.

Finally, I wanted to finish with science communication fellowships. These are run by the likes of the Wellcome Trust (Engagement Fellowships) and the STFC. These are for the Olympic gold medallists of sci comm. But they are not great for ECRs. The dates are annual and inflexible, and the process takes over six months – it is a slow decision-making process. And they are intensely competitive, so not very ECR friendly, which is a shame as many sci comm people are ECRs. So perhaps more institutions or agencies should offer sci comm fellowships? And a continuous application process with shorter spells?

To sum up… ECRs at different career stages require different training and organisational support to enable science communication. And science communication needs to be recognised as formal work/training/education – not an out of hours hobby! There are good initiatives out there but there could be many more.

PANEL DISCUSSION – Michael Markie, F1000 (MM); Anna Ritchie, Mendeley, Elsevier (AR); Becky Douglas (BD); Lewis MacKenzie (LM) – chaired by Joanna Young (JY)

Q1 (JY): Picking up on what you said about Pathways to Impact statements… What advice would you give to ECRs if they are completing one of these? What should they do?

A1 (LM): It’s quite a weird thing to do… There are two strands: this research will make loads of money and we’ll commercialise it; and the science communication strand. It’s easier to say you’ll do a science festival event, harder to say you’ll do a press release… You can say you will blog your work once a month, or tweet a day in the lab… You can do that. In my fellowship application I proposed a podcast on biophysics that I’d like to do. You can be creative with your science communication… But there is a danger that people aren’t imaginative and make it a box-ticking thing. Just doing a science festival event and a webpage isn’t that exciting. And those plans are written once… But projects run for three years maybe… Things change, skills change, people on the team change…

A1 (BD): As an ECR you can ask for help – ask supervisors, peers, ask online, ask colleagues… You can always ask for advice!

A1 (MM): I would echo that you should ask experienced people for help. And think tactically as different funders have their own priorities and areas of interest here too.

Q2: I totally agree with the importance of communicating your science… But showing impact of that is hard. And not all research is of interest to the public – playing devil’s advocate – so what do you do? Do you broaden it? Do you find another way in?

A2 (LM): Taking a step back and talking about broader areas is good… I talk a fair bit about undergraduates as science communicators… They have really good broad knowledge and interest. They can be excellent. And this is where things like Soapbox Science can be so effective. There are other formats too… Things like Bright Club, which communicates research through comedy… That’s really different.

A2 (BD): I would agree with all of that. I would add that if you want to measure impact then you have to think about it from the outset – will you count people, use some sort of voting, or questionnaires? You have to plan this stuff in. The other thing is that you have to pitch things carefully to your audience. If I run events on gravitational waves I will talk about space and black holes… Whereas with a 5 year old I ask about gravity and we jump up and down, so they understand what is relevant to them in their lives.

A2 (LM): In terms of metrics for science communication… I was at the British Science Association conference a few years back and this was a major theme… Becky mentioned getting kids to post notes in boxes at sessions… Professional science communicators think a great deal about this… Maybe not so much us “Sunday fun run” type people, but we should engage more.

Comment (AR): When you prepare an impact statement are you asked for metrics?

A2 (LM): Not usually… They want impact but don’t ask about that…

A2 (BD): Whether or not you are asked for details of how something went you do want to know how you did… And even if you just ask “Did you learn something new today?” that can be really helpful for understanding how it went.

Q3: I think there are too many metrics… As a microbiologist… which ones should I worry about? Should there be a module at the beginning of my PhD to tell me?

A3 (AR): There is no one metric… We don’t want a single number to sum us up. There are so many metrics because one number isn’t enough… There is experimentation going on with what works, and what works for you… So be part of the conversation, and be part of the change.

A3 (MM): I think there are too many metrics too… We are experimenting. Altmetrics are indicators, there are citations, that’s tangible… We just have to live with a lot of them all at once at the moment!

UNCONFERENCE SESSION 2: Preprints: A journey through time – Graham Steel

This will be a quick talk plus plenty of discussion space… From the onset of thinking about this conference I was very keen to talk about preprints…

So, who knows what a preprint is? There are plenty of different definitions out there – see Neylon et al 2017. But we’ll take the Wikipedia definition for now. I thought preprints dated to the 1990s. But I found a paper that referenced a preprint from 1922!

Let’s start there… Preprints were ticking along fine… But then a fightback began: in 1966 preprints were effectively outlawed, when Nature wanted to take “lethal steps” to end them. In 1969 we got the “Ingelfinger rule” – we’ll come back to that later… Technology-wise, things ticked along… In 1989 Tim Berners-Lee came along; in 1991 the web went live at CERN, and arXiv was set up and grew swiftly… About 8k preprints were being uploaded to arXiv per month as of 2016. Then, in 2007-12, we had Nature Precedings…

But in 2007, the fightback began again… In 2012 the Ingelfinger rule was creating stress… There are almost 35k journals; only 37 still use the Ingelfinger rule, but they include key journals like Cell.

But we also saw the launch of bioRxiv in 2013. And we’ve had an explosion of preprints since then… Also in 2013, a £5m Centre for Open Science was set up – a central place for preprints, with over 2m preprints so far. There are now a LOT of new …Xiv preprint sites. In 2015 we saw the launch of the ASAPbio movement.

Earlier this year Mark Zuckerberg invested billions in bioRxiv… But everything comes at a price…

Scotland spends on average £11m per year to access research through journals. The best average figure for APCs I could find is $906. Per preprint it’s $10. If you want to post a preprint you have to check the terms of your journal – these are usually extremely clear. Best to check in SHERPA/RoMEO.

If you want to find out more about preprints there is a great Twitter list, also some recommended preprints reading. Find these slides: slideshare.net/steelgraham and osf.io/zjps6/.


Q1: I found SHERPA/RoMEO by accident… But it’s really useful. Who runs it?

A1: It’s funded by Jisc

Q2: How about findability…

A2: ArXiv usually points to where this work has been submitted. And you can go back and add the DOI once published.

Q2: It’s acting as a static archive then? To hold the green copy

A2: And there is collaborative activity across that… And there is work to make those findable, to share them, they are shared on PubMed…

Q2: One of the problems I see is purely discoverability… Getting it easy to find on Google. And integration into knowledgebases, can be found in libraries, in portals… Hard for a researcher looking for a piece of research… They look for a subject, a topic, to search an aggregated platform and link out to it… To find the repository… So people know they have legal access to preprint copies.

A2: You have CORE at the OU, which aggregates preprints and suggests additional items when you search. There is ongoing work to integrate with CRIS systems, which are frequently commercial, so there are interoperability issues here.

Comment: ArXiv is still the place for high energy physics so that is worth researchers going directly too…

Q3: Can I ask about preprints and research evaluation in the US?

A3: It’s an important way to get the work out… But the lack of peer review is an issue there so emerging stuff there…

GS: My last paper was taking forever to come out; we thought it wasn’t going to happen… We posted to PeerJ but discovered that that journal did use the Ingelfinger rule, which scuppered us…

Comment: There are some publishers that want to put preprints on their own platform, so everything stays within their space… How does that sit/conflict with what libraries do…

GS: It’s a bit “us! us! us!”

Comment: You could see all submitted to that journal, which is interesting… Maybe not health… What happens if not accepted… Do you get to pull it out? Do you see what else has been rejected? Could get dodgy… Some potential conflict…

Comment: I believe it is positioned as a separate entity but with a path of least resistance… It’s a question… The thing is.. If we want preprints to be more in academia as opposed to publishers… That means academia has to have the infrastructure to do that, to connect repositories discoverable and aggregated… It’s a potential competitive relationship… Interesting to see how it plays out…

Comment: For Scopus and Web of Science… Those won’t take preprints… Takes ages… And do you want to give up more rights to the journals… ?

Comment: Can see why people would want multiple copies held… That seems healthy… My fear is it requires a lot of community based organisation to be a sustainable and competitive workflow…

Comment: Worth noting the radical “platinum” open access… Lots of preprints out there… Why not get authors to submit them, organise into free, open journal without a publisher… That’s Tim Garrow’s thing… It’s not hard to put together a team to peer review thematically and put out issues of a journal with no charges…

GS: That’s very similar to the Open Library of Humanities… And the Wellcome Trust &amp; Gates Foundation stuff, and the big EU platform. The Gates one could be huge. Wellcome Trust is relatively small so far… But an EU-wide platform will have major ramifications…

Comment: Platinum is more about overlay journals… Also like Scope3 and they do metrics on citations etc. to compare use…

GS: In open access we know about green, gold and with platinum it’s free to author and reader… But use of words different in different contexts…

Q4: What do you think the future is for pre-prints?

A4 – GS: There is a huge boom… There’s currently some duplication of central open preprint platforms. But information on use is clear, and uptake is on the rise… It will plateau at some point, like PLOS ONE. They launched in 2006 and probably plateaued around 2015. But it is number 2 in the charts of mega-journals, behind Scientific Reports. They increased APCs (to around $1,450) and that didn’t help (especially as they were profitable)…

SESSION THREE: Raising your research profile: online engagement & metrics

Green, Gold, and Getting out there: How your choice of publisher services can affect your research profile and engagement – Laura Henderson, Editorial Program Manager, Frontiers

We are based in Lausanne in Switzerland. We are a fully digital, fully open access publisher. All 58 of our journals are published under CC-BY licences. And the organisation was set up by scientists who wanted to change the landscape. So I want to talk today about how this can change your work.

What is traditional academic publishing?

Typically readers pay – journal subscriptions via institution/library, or pay-per-view. Given the costs and number of articles, it is expensive – $14B of journal revenue in 2014 works out at $7k per article. It’s slow too: the journal rejection cascade can take six months to a year each time. Up to 1 million papers – valid papers – are rejected every year. And access is limited: around 80% of research papers are behind subscription paywalls. So knowledge gets out very slowly and inaccessibly.

By comparison, open access… Well, Green OA allows you to publish and then self-archive your paper in a repository where it can be accessed for free. You can use an institutional or central repository, or I’d suggest both. And there can be a delay due to embargo. Gold OA makes research output immediately available from the publisher, and you retain the copyright, so no embargoes. It is fully discoverable via indexing and professional promotion services to relevant readers. There is no subscription fee for the reader, but it usually involves APCs paid by the institution.

How does open access publishing compare? Well, it inverts the funding – the institution/grant funder supports authors directly, rather than paying huge subscription fees for packages dictated by publishers. It’s cheaper – Green OA is usually free, and the average Gold OA fee is c. $1000-$3000 – actually half what is paid per article under subscription publishing. We do see projections of open access overtaking subscription publishing by 2020.

So, what benefits does open access bring? Well there is peer-review; scalable publishing platforms; impact metrics; author discoverability and reputation.

And I’d now like to show you what you should look for from any publisher – open access or others.

Firstly, you should expect basic services: quality assurance and indexing. Peter Suber suggests checking the DOAJ – Directory of Open Access Journals. You can also see if the publisher is part of OASPA, which excludes publishers who fail to meet its standards. What else? Look for peer review and good editors – see the joint COPE/OASPA/DOAJ Principles of Transparency and Best Practice in Scholarly Publishing. So you need clear peer review processes. And you need a governing board and editors.

At Frontiers we have an impact-neutral peer review process. We don’t screen for the papers with the highest impact. Authors, reviewers and the handling Associate Editor interact directly with each other in an online forum. The names of editors and reviewers are published on the final version of the paper. And this leads to an average of 89 days from submission to acceptance – an industry-leading time… And that won an ALPSP Innovation Award.

So, what are the extraordinary services a top OA publisher can provide? Well, altmetrics are more readily available now. Digital articles are accessible and trackable. In Frontiers, our metrics are built into every paper… You can see views, downloads, and reader demographics. And that’s post-publication analytics that doesn’t rely on impact factor. And it is community-led impact – your peers decide the impact and importance.

How discoverable are you? We launched a bespoke, built-in networking profile for every author and user: Loop. It scrapes all the major index databases to find your work – constantly updating. It’s linked to ORCID and is included in the peer review process. When people look at your profile you can truly see your impact in the world.

In terms of how peers find your work, we have article alerts going to 1 million people, and a newsletter that goes to 300k readers. And our articles have had 250 million views and downloads, with hotspots in Mountain View, California, and in Shenzhen, and areas of development in the “Global South”.

So when you look for a publisher, look for a publisher with global impact.

What are all these dots and what can linking them tell me? – Rachel Lammey, Crossref

Crossref are a not for profit organisation. So… We have articles out there, datasets, blogs, tweets, Wikipedia pages… We are really interested to understand these links. We are doing that through Crossref Event Data, tracking the conversation, mainly around objects with a DOI. The main way we use and mention publications is in the citations of articles. That’s the traditional way to discuss research and understand news. But research is being used in lots of different ways now – Twitter and Reddit…

So, where does Crossref fit in? It is the DOI registration agency for scholarly content. Publishers register their content with us. URLs change and break… And that means you need something more persistent, so the work can still be found and cited… Last year at ReCon we tried to find DOI gaps in reference lists – hard to do. Even within journals, publications move around… and switch publishers… The DOI fixes that reference. We are sort of a switchboard for that information.
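The persistence described here can be seen in practice: the doi.org resolver supports content negotiation, so the same DOI link that redirects a browser to the current publisher page can also return machine-readable citation metadata. A minimal sketch (the helper name is mine, not Crossref’s; the DOI is a made-up example):

```python
from urllib.request import Request

def doi_to_request(doi: str) -> Request:
    """Build a metadata request for a DOI via the doi.org resolver.

    Asking for CSL JSON instead of HTML returns citation metadata,
    however many times the article's landing page URL has changed.
    """
    # Accept a bare DOI, a "doi:" prefix, or a full resolver URL.
    doi = doi.removeprefix("https://doi.org/").removeprefix("doi:")
    return Request(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
    )

req = doi_to_request("doi:10.5555/12345678")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would fetch the JSON metadata for a real registered DOI, wherever the article currently lives.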

I talked about citations and references… Now we are looking beyond that. It is about capturing data and relationships so that understanding and new services (by others) can be built… As such it’s an API (Application Programming Interface) – it’s lots of data rather than an interface. So it captures subject, relation, object – a tweet mentions an article, and so on. We are generating this data (as of yesterday we’ve seen 14m events); we are not doing anything with it ourselves, so this is a clear set of data for others to do further work on.
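The subject-relation-object shape described here can be sketched as follows. The two event records are invented examples, but the field names (`subj_id`, `relation_type_id`, `obj_id`, `source_id`) follow the Event Data schema:

```python
# Two invented Event Data-style records: a tweet discussing a paper,
# and a Wikipedia page referencing the same paper.
events = [
    {"subj_id": "https://twitter.com/example/status/1",
     "relation_type_id": "discusses",
     "obj_id": "https://doi.org/10.5555/12345678",
     "source_id": "twitter"},
    {"subj_id": "https://en.wikipedia.org/wiki/Example",
     "relation_type_id": "references",
     "obj_id": "https://doi.org/10.5555/12345678",
     "source_id": "wikipedia"},
]

def triples(events):
    """Reduce each event to its core (subject, relation, object) triple."""
    return [(e["subj_id"], e["relation_type_id"], e["obj_id"]) for e in events]

for subj, rel, obj in triples(events):
    print(f"{subj} --{rel}--> {obj}")
```

Because the events are plain data rather than an analysis, anyone can consume them and build their own services on top – exactly the point being made here.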

We’ve been doing work with NISO Working Group on altmetrics, but again, providing the data not the analysis. So, what can this data show? We see citation rings/friends gaming the machine; potential peer review scams; citation patterns. How can you use this data? Almost any way. Come talk to us about Linked Data; Article Level Metrics; general discoverability, etc.
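As a toy illustration of the kind of pattern analysis mentioned above (this is not Crossref code, just a sketch over invented data), reciprocal citation pairs – one possible signal of a citation ring – can be picked out of a list of (citing, cited) links:

```python
# Invented citation links between made-up DOIs.
links = [
    ("10.1/a", "10.1/b"),
    ("10.1/b", "10.1/a"),   # a and b cite each other: a suspicious pair
    ("10.1/c", "10.1/a"),   # one-way citation: normal
]

def reciprocal_pairs(links):
    """Return each pair of works that cite one another."""
    linkset = set(links)
    return sorted({tuple(sorted(l)) for l in linkset if (l[1], l[0]) in linkset})

print(reciprocal_pairs(links))  # [('10.1/a', '10.1/b')]
```

Real analyses would look at larger cliques and timing patterns, but the principle is the same: the raw relationship data makes such checks possible.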

We’ve done some work ourselves… For instance, the live data from all sources – including Wikipedia citing various pages… We have lots of members in Korea, and started looking just at citations on Korean Wikipedia. It’s free under a CC0 licence. If you are interested, go make something cool… Come ask me questions… And we have a beta testing group, and we welcome your feedback and experiments with our data!

The wonderful world of altmetrics: why researchers’ voices matter – Jean Liu, Product Development Manager, Altmetric

I’m actually five years out of graduate school, so I have some empathy with PhD students and ECRs. I really want to go through what altmetrics are and what measures there are. It’s not controversial to say that altmetrics have been experiencing a meteoric rise over the last few years… That is partly because we have so much more to draw upon than the traditional journal impact factors, citation counts, etc.

So, who are Altmetric? We have about 20 employees, were founded in 2011, and are all based in London. And we’ve started to see that people are receptive to altmetrics, partly because of the (near) instant feedback… We tune into the Twitter firehose – that phrase is apt! Altmetrics also showcase many “flavours” of attention and impact that research can have – and not just articles. And the signals we track are highly varied: policy documents, news, blogs, Twitter, post-publication peer review, Facebook, Wikipedia, LinkedIn, Reddit, etc.

Altmetrics also have limitations. They are not a replacement for peer review or citation-based metrics. They can be gamed – but data providers have measures in place to guard against this. We’ve seen interesting attempts at gamification – but often caught…

Researchers are not only the ones who receive attention in altmetrics – they are also the ones generating the attention that makes up altmetrics. But not all attention is high quality or trustworthy, and we don’t want to suggest that researchers should be judged on altmetrics alone…

Meanwhile, universities are asking interesting questions: how can our researchers change policy? Which conference can I send people to that will be most useful? Etc.

So, let’s take the topic of “diabetic neuropathy”. Looking around, we can see a blog, an NHS/NICE guidance document, and a piece in The Conversation. A whole range of items. And you can track attention over time… by volume, but you can also look at influencers across e.g. news outlets, policy outlets, blogs and tweeters. And you can understand where researcher voices feature (here, all are blogs). And I can then compare news and policy and see the difference. The profiles for news and blogs are quite different…
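The per-source breakdown described here is, at heart, a tally of mentions grouped by where they occurred. A minimal sketch over invented records (the field name and categories are illustrative, not Altmetric’s API):

```python
from collections import Counter

# Invented mention records for a single article.
mentions = [
    {"source": "twitter"}, {"source": "news"}, {"source": "twitter"},
    {"source": "blog"}, {"source": "policy"}, {"source": "twitter"},
]

# Tally mentions by source to see where the attention is coming from.
by_source = Counter(m["source"] for m in mentions)
for source, count in by_source.most_common():
    print(f"{source:>8}: {count}")
```

The same grouping, applied per outlet rather than per source type, gives the influencer view: who, within news or blogs or Twitter, is driving the attention.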

How can researchers voices be heard? Well you can write for a different audience, you can raise the profile of your work… You can become that “go-to” person. You also want to be really effective when you are active – altmetrics can help you to understand where your audience is and how they respond, to understand what is working well.

And you can find out more by trying the altmetric bookmarking browser plugin, by exploring these tools on publishing platforms (where available), or by taking a look.

How to help more people find and understand your work – Charlie Rapple, Kudos

I’m sorry to be the last person on the agenda, you’ll all be overwhelmed as there has been so much information!

I’m one of the founders of Kudos and we are an organisation dedicated to helping you increase the reach and impact of your work. There is such competition for funding, a huge growth in outputs, there is a huge fight for visibility and usage, a drive for accountability and a real cult of impact. You are expected to find and broaden the audience for your work, to engage with the public. And that is the context in which we set up Kudos. We want to help you navigate this new world.

Part of the challenge is knowing where to engage. We did a survey last year with around 3000 participants to ask how they share their work – conferences, academic networking, conversations with colleagues all ranked highly; whilst YouTube, slideshare, etc. are less used.

Impact is built on readership – impacts cross a variety of areas… But essentially it comes down to getting people to find and read your work. So, for me it starts with making sure you increase the number of people reaching and engaging with your work. Hence the publication is at the centre – for now. That may well be changing as other material is shared.

We’ve talked a lot about metrics, there are very different ones and some will matter more to you than others. Citations have high value, but so do mentions, clicks, shares, downloads… Do take the time to think about these. And think about how your own actions and behaviours contribute back to those metrics… So if you email people about your work, track that to see if it works… Make those connections… Everyone has their own way and, as Nicola was saying in the Digital Footprint session, communities exist already, you have to get work out there… And your metrics have to be about correlating what happens – readership and citations. Kudos is a management tool for that.

In terms of justifying the time: communication does increase impact. We have been building up data on how that takes place. A team from Nanyang Technological University did a study of our data in 2016, and they saw that authors using the Kudos tools to promote their work had 23% higher growth in downloads of the full text on publisher sites. And that really shows the value of doing that engagement. It will actually lead to meaningful results.

So, a quick look at how Kudos works… It’s free for researchers (www.growkudos.com) and it takes about 15 minutes to set up, and about 10 minutes each time you publish something new. You can find a publication, and you can use your ORCID if you have one… It’s easy to find your publication, and once you have, you get a page for it where you can create a plain language explanation of your work and why it is important – that is grounded in talking to researchers about what they need. For example: http://bit.ly/plantsdance. That plain text is separate from the abstract. It’s that first quick overview. The advantage is that it is easier for people within your field to skim and scan your work; people in academia but outside your field can skip the terminology and understand what you’ve said; and people outside academia can get a handle on the research and apply it in non-academic ways. People can actually access your work and actually understand it. There is a lot of research to back that up.

Also on publication page you can add all the resources around your work – code, data, videos, interviews, etc. So for instance Claudia Sick does work on baboons and why they groom where they groom – that includes an article and all of that press coverage together. That publication page gives you a URL, you can post to social media from within Kudos. You can copy the trackable link and paste wherever you like. The advantage to doing this in Kudos is that we can connect that up to all of your metrics and your work. You can get them all in one place, and map it against what you have done to communicate. And we map those actions to show which communications are more effective for sharing… You can really start to refine your efforts… You might have built networks in one space but the value might all be in another space.

Sign up now and we are about to launch a game on building up your profile and impact, and scores your research impact and lets you compare to others.

PANEL DISCUSSION – Laura Henderson, Editorial Program Manager, Frontiers (LH); Rachel Lammey, Crossref (RL); Jean Liu, Product Development Manager, Altmetric (JL); Charlie Rapple, Kudos (CR). 

Q1: Really interesting but how will the community decide which spaces we should use?

A1 (CR): Yes, in the Nanyang work we found that most work was shared on Facebook, but more links were engaged with on Twitter. There is more to be done, and more to filter through… But we have to keep building up the data…

A1 (LH): We are coming from the same sort of place as Jean there, altmetrics are built into Frontiers, connected to ORCID, Loop built to connect to institutional plugins (totally open plugin). But it is such a challenge… Facebook, Twitter, LinkedIn, SnapChat… Usually personal choice really, we just want to make it easier…

A1 (JL): It’s about interoperability. We are all working in it together. You will find certain stats on certain pages…

A1 (RL): It’s personal choice, it’s interoperability… But it is about options. Part of the issue with the impact factor is being judged by something you have no choice about or impact upon… And I think we need to give researchers new tools, and ways to select what is right for them.

Q2: These seem like great tools, but how do we persuade funders?

A2 (JL): We have found funders being interested independently, particularly in the US. There is this feeling across the scholarly community that things have to change… And funders want to look at what might work, they are already interested.

A2 (LH): We have an office in Brussels which lobbies the European Commission; we are trying to get our voice for Open Science heard, to make a difference to policies and mandates… The impact factor has been convenient – it’s well embedded, it was designed by an institutional librarian – so we are out lobbying for change.

A2 (CR): Convenience is key. Nothing has changed because nothing has been convenient enough to replace the impact factor. There is a lot of work and innovation in this area, and it is not only on researchers to make that change happen, it’s on all of us to make that change happen now.

Jo Young (JY): To finish, a few thank yous… Thank you all for coming along today, to all of our speakers, and a huge thank you to Peter and Radic (our cameramen), to Anders, Graham and Jan for their work in planning this. And to Nicola and Amy who have been liveblogging, and to all who have been tweeting. Huge thanks to CrossRef, Frontiers, F1000, JYMedia, and PLoS.

And with that we are done. Thanks to all for a really interesting and busy day!


Apr 052017
Cakes at the CIGS Web 2.0 and Metadata Event 2017

Today I’m at the Cataloguing and Indexing Group Scotland event – their 7th Metadata & Web 2.0 event – Somewhere over the Rainbow: our metadata online, past, present & future. I’m blogging live so, as usual, all comments, corrections, additions, etc. are welcome. 

Paul Cunnea, CIGS Chair is introducing the day noting that this is the 10th year of these events: we don’t have one every year but we thought we’d return to our Wizard of Oz theme.

On a practical note, Paul notes that if we have a fire alarm today we’d normally assemble outside St Giles Cathedral but as they are filming The Avengers today, we’ll be assembling elsewhere!

There is also a cupcake competition today – expect many baked goods to appear on the hashtag for the day #cigsweb2. The winner takes home a copy of Managing Metadata in Web-scale Discovery Systems / edited by Louise F Spiteri. London : Facet Publishing, 2016 (list price £55).

Engaging the crowd: old hands, modern minds. Evolving an on-line manuscript transcription project / Steve Rigden with Ines Byrne (not here today) (National Library of Scotland)

Ines has led the development of our crowdsourcing side. My role has been on the manuscripts side. Any transcription is about discovery. For the manuscripts team we have to prioritise digitisation so that we can deliver digital surrogates that enable access, and to open up access. Transcription hugely opens up texts but it is time consuming and that time may be better spent on other digitisation tasks.

OCR has issues but works relatively well for printed texts. Manuscripts are a different matter – handwriting, ink density, paper, all vary wildly. The REED(?) project is looking at what may be possible but until something better comes along we rely on human effort. Generally the manuscript team do not undertake manual transcription, but do so for special exhibitions or very high priority items. We also have the challenge that so much of our material is still under copyright so cannot be done remotely (but can be accessed on site). The expected user community generally can be expected to have the skill to read the manuscript – so a digital surrogate replicates that experience. That being said, new possibilities shape expectations. So we need to explore possibilities for transcription – and that’s where crowd sourcing comes in.

Crowd sourcing can resolve transcription, but issues with copyright and data protection still have to be resolved. It has taken time to select suitable candidates for transcription. In developing this transcription project we looked to other projects – like Transcribe Bentham which was highly specialised, through to projects with much broader audiences. We also looked at transcription undertaken for the John Murray Archive, aimed at non specialists.

The selection criteria we decided upon were:

  • Hands that are not too troublesome.
  • Manuscripts that have not been re-worked excessively with scoring through, corrections and additions.
  • Documents that are structurally simple – no tables or columns for example where more complex mark-up (tagging) would be required.
  • Subject areas with broad appeal: genealogies, recipe book (in the old crafts of all kinds sense), mountaineering.

Based on our previous John Murray Archive work we also want the crowd to provide us with structured text, so that it can be easily used, by tagging the text. That’s an approach borrowed from Transcribe Bentham, but we want our community to be self-correcting rather than us doing QA of everything going through. If something is marked as finalised and completed, it will be released with the tool to a wider public – otherwise it is only available within the tool.

The approach could be summed up as keep it simple – and that requires feedback to ensure it really is simple (something we did through a survey). We did user testing on our tool; it particularly confirmed that users just want to go in and use it, so we have to make it intuitive – that’s a problem with transcription and mark up, so there are challenges in making that usable. We have a great team who are creative and have come up with solutions for us… But meanwhile other projects have emerged. If the REED project is successful in getting machines to read manuscripts then perhaps these tools will become redundant. Right now there is nothing out there or in scope for transcribing manuscripts at scale.

So, let’s take a look at Transcribe NLS.

You have to log in to use the system. That’s mainly to help deter malicious or erroneous data. Once you log into the tool you can browse manuscripts, and you can also filter by the completeness of the transcription and the grade of the transcription – we ummed and ahhed about including that, but we thought it was important.

Once you pick a text you click the button to begin transcribing – you can enter text, special characters, etc. You can indicate if text is above/below the line. You can mark up where a figure is. You can tag whether the text is not in English. You can mark up gaps. And you can mark that an area is a table. It’s all quite straightforward.


Q1) Do you pick the transcribers, or do they pick you?

A1) Anyone can take part but they have to sign up. And they can indicate a query – which comes to our team. We do want to engage with people… As the project evolves we are looking at the resources required to monitor the tool.

Q2) It’s interesting what you were saying about copyright…

A2) The issue of copyright here is about sharing off site. A lot of our manuscripts are unpublished. We use exceptions such as the 1956 Copyright Act for old works whose authors have died. The selection process has been difficult, working out what can go in there. We’ve also cheated a wee bit.

Q3) What has the uptake of this been like?

A3) The tool is not yet live. We think it will build quite quickly – people like a challenge. Transcription is quite addictive.

Q4) Are there enough people with palaeography skills?

A4) I think that most of the content is C19th, where handwriting is the main challenge. For much older materials we’d hit that concern and would need to think about how best to do that.

Q5) You are creating these documents that people are reading. What is your plan for archiving these?

A5) We do have a colleague considering and looking at digital preservation – longer term storage being more the challenge – as part of the normal digital preservation scheme.

Q6) Are you going for a Project Gutenberg model? Or have you spoken to them?

A6) It’s all very localised right now, just seeing what happens and what uptake looks like.

Q7) How will this move back into the catalogue?

A7) Totally manual for now. It has been the source of discussion. There was discussion of pushing things through automatically once transcribed to a particular level but we are quite cautious and we want to see what the results start to look like.

Q8) What about tagging with TEI? Is this tool a subset of that?

A8) There was a John Murray Archive approach, including mark up and tagging, and there was a handbook for that. TEI is huge but there is also TEI Lite – the JMA used a subset of the latter. I would say this approach – that subset of TEI Lite – is essentially TEI Very Light.
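As a rough illustration of the “TEI Very Light” idea described above – a handful of structural tags rather than the full TEI vocabulary – here is a sketch using Python’s standard library to parse a tiny fragment. The element names (`line`, `unclear`, `del`, `gap`) are hypothetical, not the NLS tool’s actual schema:

```python
import xml.etree.ElementTree as ET

# A hypothetical "TEI Very Light" fragment: a handful of elements for
# unclear readings, deletions and gaps, nothing like full TEI.
fragment = """
<transcription>
  <line>Dear Sir, I write to <unclear>acquaint</unclear> you</line>
  <line>that the <del>old</del> new road is <gap/> complete.</line>
</transcription>
"""

root = ET.fromstring(fragment)

# Pull out the plain text of each line (tags are transparent to itertext).
for line in root.iter("line"):
    print("".join(line.itertext()).strip())
```

Because the tag set is so small, plain-text extraction stays trivial – one attraction of keeping the markup minimal for non-specialist transcribers.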

Q9) Have other places used similar approaches?

A9) Transcribe Bentham is similar in terms of tagging. The University of Iowa Civil War Archive has also had a similar transcription and tagging approach.

Q10) The metadata behind this – how significant is that work?

A10) We have basic metadata for these. We have items in our digital object database and simple metadata goes in there – we don’t replicate the catalogue record but ensure the item is identifiable, log the date of creation, etc. And this transcription tool is intentionally very basic at the moment.

Coming up later…

Can web archiving the Olympics be an international team effort? Running the Rio Olympics and Paralympics project / Helena Byrne (British Library)

I am based at the UK Web Archive, which is based at the British Library. The British Library is one of the six legal deposit libraries. The BL is also a member of the International Internet Preservation Consortium – as is the National Library of Scotland. The Content Development Group works on any project with international relevance and a number of interested organisations.

Last year I was lucky enough to be lead curator on the Olympics 2016 Web Archiving project. We wanted to get a good range of content. Historically our archives for Olympics have been about the events and official information only. This time we wanted the wider debate, controversy, fandom, and the “e-Olympics”.

We received a lot of nominations for sites. This is one of the biggest projects we have been involved in. There were 18 IIPC members involved in the project, but nominations also came from beyond the membership. We think this will be a really good resource for those researching the events in Rio. We had material in 34 languages in total. English was the top language collected – reflecting IIPC membership to some extent. In terms of what we collected, it included official IOC materials – but few, as we have a separate archive across Games for these. Subjects included athletes, teams, gender, doping, etc. There were a large number of website types submitted. Not all material nominated was collected – some had incomplete metadata, some crawls were unsuccessful, there were duplicate nominations, and the web is quite fragile still, so some links were already dead when we reached them.

There were four people involved here, myself, my line manager, the two IIPC chairs, and the IIPC communications person (also based at BL). We designed a collection strategy to build engagement as well as content. The Olympics is something with very wide appeal and lots of media coverage around the political and Zika situation so we did widen the scope of collection.

Thinking about our users we had collaborative tools that worked within contributors’ contexts: Webex, Google Drive and Maps, and Slack (free in many contexts) was really useful. Chapter 8 in “Altmetrics” is great for alternatives to Google – it is important to have those as Google is simply not accessible in some locations.

We used mostly Google Sheets for IIPC member nominations – 15 fields, 6 of which were obligatory. For non-members we used a (simplified) Google Form – shared through social media. Some non-IIPC member organisations used this approach – for instance a librarian in Hawaii submitted lots of Pacific islands content.

In terms of communicating the strategy we developed instructional videos (with free tools – Screencastomatic and Windows Movie Maker) with text and audio commentary, print summaries, emails, and public blog posts. Resources were shared via Google Drive so that IIPC members could download and redistribute them.

No matter whether IIPC member or through the nomination form, we wanted six key fields:

  1. URL – free form
  2. Event – drop down option
  3. Title – free form (and English translation option if relevant)
  4. Olympic/Paralympic sport – drop down option
  5. Country – free form
  6. Contributing organisation – free form (for admin rather than archive purposes)
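As a sketch of how the six mandatory fields above might be checked before a nomination is accepted, assuming illustrative field names rather than the project’s actual spreadsheet headers:

```python
# Minimal validation sketch for a nomination row. The field names below
# are hypothetical stand-ins for the six mandatory fields described above.
MANDATORY = ["url", "event", "title", "sport", "country", "organisation"]

def missing_fields(row: dict) -> list:
    """Return the mandatory fields that are empty or absent from a row."""
    return [f for f in MANDATORY if not str(row.get(f, "")).strip()]

nomination = {
    "url": "https://example.org/rio-2016-coverage/",
    "event": "Olympics",
    "title": "Rio coverage",
    "sport": "Athletics",
    "country": "",          # left blank by the contributor
    "organisation": "British Library",
}
print(missing_fields(nomination))  # → ['country']
```

A check like this, run over the shared spreadsheet, would flag the incomplete rows mentioned later as one of the project’s practical issues.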

There are no international standards for cataloguing web archive data. OCLC have a working group looking at this just now – they are due to report this year. One issue that has been raised is the context of those doing the cataloguing – cataloguing versus archiving.

Communication on a regular basis is essential – there was quite a long window of nomination and collection across the summer. We had several pre-event crawl dates, then further dates during and after both the Olympics and the Paralympics. I would remind folk about these, and provide updates on what was collected, sharing that map of content collected. We also blogged the project to engage and promote what we were doing. Participants enjoyed the updates – it helped them justify time spent on the project to their own managers and organisations.

There were some issues along the way…

  • The trailing slash is required for the crawler – if there is no trailing slash the crawler takes everything it can find under that prefix, and attempting to crawl all of the BBC or Twitter is a problem.
  • Not tracking the date of nomination – e.g. organisations adding to the spreadsheet without updating the date of nomination. That was essential to avoid duplication, so that’s a tip for Google forms.
  • Some people did not fill in all of the six mandatory fields (or didn’t fill them in completely).
  • Country name vs Olympic team name. That is unexpectedly complex. Team GB includes England, Scotland, Wales and Northern Ireland… but Northern Irish athletes can also compete for Ireland. Palestine isn’t recognised as a country everywhere, but it is in the Olympics. And there was a Refugee Team as well – with no country to tie it to. Similar complexity came out of organisation names – there are lots of ways to write the name of the British Library, for instance.
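The trailing-slash issue above comes from prefix-based scoping: many crawlers treat a seed URL as a prefix that candidate URLs must match. This toy check is illustrative only, not any real crawler’s (e.g. Heritrix’s) actual scoping logic:

```python
# Why the trailing slash matters: with prefix scoping, a seed without
# the slash matches far more of the site than intended.
def in_scope(seed: str, url: str) -> bool:
    """Illustrative prefix-based scope check."""
    return url.startswith(seed)

candidate = "https://www.bbc.co.uk/sport/olympics/rio-2016"

# Seed with trailing slash: scoped to pages under /sport/ only.
print(in_scope("https://www.bbc.co.uk/sport/", candidate))

# Seed without the slash also matches sibling paths such as
# /sporting-scandals/, pulling in pages well outside the collection.
print(in_scope("https://www.bbc.co.uk/sport",
               "https://www.bbc.co.uk/sporting-scandals/page1"))
```

In the worst case a bare hostname seed scopes the whole domain, which is how “attempting all of BBC or Twitter” can happen.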

We promoted the project with four blog posts sharing key updates and news. We had limited direct contact – mostly through email and Slack/messaging. We also had a unique hashtag for the collection, #Rio2016WA – not catchy, but it avoids confusion with Wario (the Nintendo game) – and a Twitter chat, small but international.

Ethically we only crawl public sites but the IIPC also have a take down policy so that anyone can request their site be removed.

Conclusions… Be aware of any cultural differences with collaborators. Know who your users are. Have a clear project plan, available in different mediums. And communicate regularly – to keep enthusiasm going. And, most importantly, don’t assume anything!

Finally… Web Archiving Week is in London in June, 12th-16th 2017. There is a “Datathon” but the deadline is Friday! Find out more at http://netpreserve.org/general-assembly/2017/overview. And you can find out more about the UK Web Archive via our website and blog: webarchive.org.uk/blog. You can also follow us and the IIPC on Twitter.

Explore the Olympics archive at: https://archive-it.org/collections/7235


Q1) For the British Library etc… Did you use a controlled vocabulary?

A1) No, but we probably will next time. There were suggestions/autocomplete, and similarly for countries. For Northern Irish sites I had to put them in as Irish and Team GB at the same time.

Q2) Any interest from researchers yet? And/or any connection to those undertaking research – I know internet researchers will have been collecting tweets…

A2) Colleagues in Rio identified a PhD project researching the tweets – very dynamic content, so hard to capture. Not a huge amount of work yet. I want to look at the research projects that took place after the London 2012 Olympics – to see if the sites are still available.

Q3) Anything you were unable to collect?

A3) In some cases articles are only open for short periods of time – we’d do more regular crawls of those nominations next time I think.

Q4) What about Zika content?

A4) We didn’t have a tag for Zika, but we did have one for corruption, doping, etc. Lots of corruption post event after the chair of the Irish Olympic Committee was arrested!

Statistical Accounts of Scotland / Vivienne Mayo (EDINA)

I’m based at EDINA and we run various digital services and projects, primarily for the education sector. Today I’m going to talk about the Statistical Accounts of Scotland. These are a hugely rich and valuable collection of statistical data that span both the agricultural and industrial revolutions in Scotland. The online service launched in 2001 but was thoroughly refreshed and relaunched last year.

There are two accounts. The first set was created (1791-1799) by Sir John Sinclair of Ulbster. He had a real zeal for agricultural data. There had been attempts to collect data in the 16th and 17th centuries. So Sir John set about a plan to get every minister in Scotland to collect data on their parishes. He was inspired by German surveys but also had his own ideas for his project:

“an inquiry into the state of a country, for the purpose of ascertaining the quantum of happiness enjoyed by its inhabitants, and the means of its future improvement”

He also used the word “Statistics” as a kind of novel, interesting term – it wasn’t in wide use. And the statistics in the accounts are more qualitative than the quantitative data we associate with the word today.

Sir John sent ministers 160 questions, then another 6, then another set a year later, so that there were 171 in total. So you can imagine how delighted they were to receive that. And the questions (you can access them all in the service) were hard to answer – asking about the wellbeing of parishioners, how their circumstances could be ameliorated… But ministers were paid by the landowners who employed their parishioners, so that data also has to be understood in context. There were also more factual questions on crops, pricing, etc.

It took a long time – 8 years – to collect the data. But it was a major achievement. And these accounts were part of a “pyramid” of data for the agricultural reports: there were county reports, but also higher level reports. This was at the time of the Enlightenment and the idea was that with this data you could improve the condition of life.

Even though the ministers did complete their returns, for some it was a struggle – and certainly hard to be accurate. Population tables were hard to get correct, especially in the context of scepticism that this data might be used to collect taxes or for other non-beneficial purposes.

The Old Account was a real success. And the Church of Scotland commissioned a New Account from 1834-45 as a follow up to that set of accounts.

The online service was part of one of the biggest digitisation projects in Scotland in the late 1990s, with the accounts going live in 2001. But much had changed since then in terms of functionality that any user might expect. In this new updated service we have added the ability to tag, to annotate, to save… Transcriptions have been improved, the interface has been improved. We have also made it easier to find associated resources – selected by our editorial board drawn from libraries, archives, specialists on this data.

When Sir John published the Old Accounts he printed them in volumes as they were received – that makes it difficult to browse and explore them. And there can be multiple accounts for the same parish. So we have added a way to browse each of the 21 volumes so that it is easier to find what you need. Place is key for our users and we wanted to make the service more accessible. Page numbers were an issue too, so our engineers provided numbering of sections – if you look for Portpatrick you can find all of the sections and volumes where that area occurs. Typically sections are a parish report, but they can be other types of content too – title pages, etc.

Each section is associated with a Parish – which is part of a county. And there may be images (illustrations such as coal seams, elevations of notable buildings in the parish, etc.). Each section is also associated with pages – including images of the pages – as well as transcripts and indexed data used to enable searching.

So, if I search for tea drinking… described as a moral menace in some of the earlier accounts! When you run a search like this it identifies associated sections, the related resources, and associated words – those words that often occur with the search term. For tea-drinking, “twopenny” is often associated… Following that thread I found a County of Forfar account from 1793… And this turns out to be the slightly alarming sounding home brew…

“They make their own malt, and brew it into that kind of drink called Two-penny which, till debased in consequence of multiplied taxes, was long the favourite liquor of all ranks of people in Dundee.”

When you do look at a page like this you can view the transcription – which tends to be easier to read than the scanned pages with their flourishes and “f” instead of “s”. You can tag, annotate, and share the pages. There are lots of ways to explore and engage with the text.
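The “associated words” feature described above can be approximated as a simple co-occurrence count over transcribed sections. This is a toy sketch with made-up sample text; the real service’s indexing and ranking will certainly differ:

```python
from collections import Counter

# Toy corpus standing in for transcribed parish sections.
sections = [
    "tea drinking and twopenny were the favourite indulgences",
    "the people make their own malt and brew twopenny",
    "tea drinking is a moral menace says the minister",
]

def associated_words(term: str, docs: list, top: int = 3) -> list:
    """Words that most often co-occur with `term` across documents."""
    counts = Counter()
    for doc in docs:
        words = doc.split()
        if term in words:
            counts.update(w for w in words if w != term)
    return [w for w, _ in counts.most_common(top)]

print(associated_words("tea", sections))
```

A production version would strip stopwords and weight by significance rather than raw frequency, but the underlying idea – surface terms that travel with the search term – is the same.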

There are lots of options to search the service – simple search, advanced search, and new interactive maps of areas and parishes – these use historic maps from the NLS collections and are brand new to the service.

With all these new features we’d love to hear your feedback when you do take a look at the service – do let us know how you find it.

I wanted to show an example of change and illustration here. The Old Accounts of Dumfries (Vol 5, p. 119) talk about the positive improvements to housing and the idea of “improvement” as a very positive thing. We also see an illustration from the New Accounts of old habitations and the new modern house of the small tenants – but that was from a parish owned by the Duke of Sutherland, who had a notorious reputation as a brutal landlord, clearing land and murdering tenants to make these “improvements”. So, again, one has to understand the context of this content.

Looking at Dumfries in the Old Accounts, things looked good, with some receiving poor relief. The increase in industry means that by the New Accounts the population has substantially grown, as has poverty. The minister also comments on the impact of the three inns in town, and the increase in poaching. A transitory population can also affect health – there is a vivid account of a cholera outbreak from 15th Sept to 27th Nov 1832. That seems relatively recent, but at that point they thought transmission was through the air; they didn’t realise it was water-borne until some time later.

Some accounts, like that one, are highly descriptive. But many are briefer or less richly engaging. Deaths are often carefully captured. The minister for Dumfries put together a whole table of deaths – causes of which include, surprisingly, teething. And there are also records of healthcare and healthcare costs – including one individual paying for several thousand children to be inoculated against smallpox.

Looking at the schools near us here in central Edinburgh, there was free education for some poor children. But schooling mostly wasn’t free. The cost for one child to learn reading and writing, if you were a farm labourer, would be a twelfth of your salary. To climb the social ladder with e.g. French, Latin, etc., the teaching was far more expensive. And indeed there is a chilling quote in the New Accounts from Cadder, County of Lanark (Vol 8, p. 481), on attitudes that education was corrupting for the poor. This was before education became mandatory (in 1834).

There is also some colourful stuff in the Accounts. There is a lot of witchcraft, local stories, and folk stories. One of my colleagues found a lovely story about a tradition that the last person buried in one area “manned the gates” until the next one arrived. Then one day two people died and there were fisticuffs!

While looking for something else entirely I found, in Fife, a story of a girl who set sail from Greenock, was captured by pirates, was sold into a harem, and became a princess in Morocco – there’s a book called The Fourth Queen based on that story.

There is an anvil known as the “Reformation Cloth” – pre-Reformation there was a blacksmith who thought the Catholic priest was having an affair with his wife… and took his revenge by attacking the offending part of the minister on that anvil. I suspect that there may have been some ministerial stuff at play here too – the parish minister notes that “no other catholic minister replaced him” – but it is certainly colourful.

And that’s all I wanted to share today. Hopefully I’ve piqued your interest. You can browse the accounts for free, and then some of the richer features are part of our subscription service. Explore the Statistical Accounts of Scotland at: http://stataccscot.edina.ac.uk/. You can also follow us on Twitter, Facebook, etc.


Q1) SOLR indexing and subject headings – can you say more?

A1) They used subject headings from the original transcriptions, and then some additions were made based on those.

Comment) The Accounts are also great for Wikipedia editing! I found references to Christian Shaw, a thread pioneer I was looking to build a page about. In the Accounts as she was mentioned in a witchcraft trial that is included there. It can be a really useful way to find details that aren’t documented elsewhere.

Q2) You said it was free to browse – how about those related resources?

A2) Those related resources are part of the subscription services.

Q3) Any references to sports and leisure?

A3) Definitely to festivals, competitions, events etc. As well as some regular activities in the parish.

Beyond bibliographic description: emotional metadata on YouTube / Diane Pennington (University of Strathclyde)

I want to start with this picture of a dog in a dress…. How do you feel when you see this picture? How do you think she was feeling? [people in the room guess the pup might be embarrassed].

So, this is Tina, she’s my dog. She’s wearing a dress we had made for her when we got married… And when she wears it she always looks so happy… And people, when I shared it on social media, also thought she looked happy. And that got me curious about emotion and emotional responses… That isn’t accommodated in bibliographic metadata. As a community we need to think about how this material makes us feel, how else can we describe things? When you search for music online mood is something you might want to see… But usually it’s recommendations like “this band is similar to…”. My favourite band is U2 and I get recommended Coldplay… And that makes me mad, they aren’t similar!

So, when we teach and practice ILS, we think about information as text that sits in a database, waiting for a user to write a query and get a match. The problem is that there are so many other ways that people want to look for information – not just bibliographic information or full text, but in other areas too: the bodily – what pain means (Yates 2015); photographs, videos, music (Rasmussen Neal, 2012) – where the full text doesn’t inherently include the search terms or keywords; “matter and energy” (Bates, 2006) – the idea that there is information everywhere and we need to think more broadly about how to describe it.

I’ve been working in this area for a while and I started looking at Flickr, at pictures that are tagged “happy”. Those tend to include smiling people, holiday photos, sunny days, babies, cute animals. Relevance rankings showed “happy” more often, and people engaged and liked happy photos more… But music is different. We often want music that matches our mood… There were differences in how tags and music were understood… Heavy metal sounds angry; slower or minor key music sounds sad…

So, the work I’m talking about you can also find in an article published last year.

My work was based on the U2 song, Song for Someone. There are over 150 fan videos created for this song… And if I show you this one (by Dimas Fletcher) you’ll see it has high production values… The song was written by Bono for his wife – they’ve been together since they were teenagers – and it’s very slow and emotional, reminiscing about being together. So this video is a really different interpretation.

Background to this work, and theoretical framework for it, includes:

  • “Basic emotions” from cognition, psychology, music therapy (Ekman, 1992)
  • Emotional Information Retrieval
  • Domains of fandom and aca-fandom (Stein & Busse, 2009; Bennett, 2014)
  • Online participatory culture, such as writing fan fiction or making cover versions of videos for love songs (Jenkins, 2013)
  • U2 academic study – and u2conference.com
  • Intertextuality as a practice in online participatory culture (Varmacelli 2013?)

So I wanted to do a discourse analysis (Budd & Raber 1996; Iedema 2003) applied to intertextuality. I wanted to analyse the emotional information conveyed in 150 YouTube cover videos of U2’s Song for Someone, and also take a quantitative view of views, comments, likes and dislikes – indicating the response to them.

The producers of these videos created lots of different types of videos. Some were cover versions. Some were original versions of the song with new visual content. Some were tutorials on how to play the song. And then there were videos exhibiting really deep personal connections with the song.

So the cover versions are often very emotional – a comment says as much. That emotion level is metadata. There are videos in context – background details, kids dancing, etc. But then some are filmed out of a plane window. The tutorials include people, and some annotated “karaoke piano” tutorials…

Intertextuality… You need to understand your context. So one of the videos shows a guy in a yellow cape who is reaching and touching the Achtung Baby album cover before starting to sing. In another video a person is in the dark, in shadow… But here Song for Someone lyrics and title on the wall, but then playing and mashing up with another song. In another video the producer and his friend try to look like U2.

Then we have the producers’ comments and descriptions, which add greatly to understanding those videos. Responses from consumers: more likes than dislikes, and almost all positive comments – this is very different from some Justin Bieber YouTube work I did a while back. You see comments on the quality of the cover, and on the emotion of the song.

The discussion is an expression of emotion. The producers show tenderness, facial expressions, surrounds, music elements. And you see social construction here…

And we can link this to something like FRBR… U2 as authoritative version, and FRBR relationships… Is there a way we can show the relationship between Songs of Innocence by William Blake, Songs of Innocence as an album, cover versions, etc.

As we move forward there is so much more we need to do when we design systems for description that accommodate more than just keywords/bibliographic records. There is no full text inherent in a video or other non-textual document – an indexing problem. And we need to account for not only emotion, but also socially constructed and individually experienced emotional responses to items. Ultimate goal – help people to find things in meaningful ways to even potentially be useful in therapies (Hanser 2010).


Q1) Comment more than a question… I work with film materials in the archive, and we struggle to bring that alive, but you do have some response from the cataloguer and their reactions – and reactions at the access centre – and that could be part of the record.

A1) That’s part of archives – do we need it in every case… Some of the stuff I study gets taken down… Do we need to archive (some of) them?

Q1) Also a danger that you lose content because catalogue records are not exciting enough… Often stuff has to go on YouTube to get seen and accessed – but then you lose that additional metadata…

A1) We do need to go where our audience is… Maybe we do need to be on YouTube more… And maybe we can use Linked Data to make things more findable. Catalogue records rarely come up high enough in search results…

Q2) This is a really subjective way to mark something up… So, for instance, Songs of Innocence was imposed on my iPhone and I respond quite negatively to that… How do you catalogue emotion with that much subjectivity at play?

A2) This is where we have happy songs versus individual perspectives… The Beatles’ Here Comes the Sun is mostly seen as happy… But if someone broke up with you during it… How do we build algorithms that tune into those different opinions…
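One sketch of how a catalogue might “tune into” differing opinions is to store each listener’s emotional response separately and expose the distribution, rather than committing the record to a single label. All names and data below are hypothetical, just to illustrate the idea:

```python
from collections import Counter

# Hypothetical store: item_id -> {user: emotion}. Keeping responses per-user
# preserves the individual perspective alongside the collective one.
responses = {}

def record_emotion(item_id, user, emotion):
    """Record one user's emotional response to an item."""
    responses.setdefault(item_id, {})[user] = emotion

def collective_view(item_id):
    """Aggregate distribution of responses -- no single 'true' emotion label."""
    return Counter(responses.get(item_id, {}).values())

record_emotion("here-comes-the-sun", "user_a", "happy")
record_emotion("here-comes-the-sun", "user_b", "happy")
record_emotion("here-comes-the-sun", "user_c", "sad")  # the break-up listener
```

A search system could then rank on the dominant emotion while still surfacing minority responses, which is one way of giving space to both the collective and the individual view.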

Q3) How do producers choose to tag things – the lyrics, the tune, their reaction… But you kind of answered that… I mean people have Every Breath You Take by the Police as their first song at a wedding but it’s about a jilted lover stalking his ex…

A3) We need to think about how we provide access, and how we can move forward with this… My first job was in a record store and people would come in and ask “can I buy this record that was on the radio at about 3pm” and that was all they could offer… We need those facets, those emotions…

Q4) I had the experience of seeing quite a neutral painting but then with more context that painting meant something else entirely… So how do we account for that, that issue of context and understanding of the same songs in different ways…

A4) There isn’t one good solution to that but part of the web 2.0 approach is about giving space for the collective and the individual perspective.

Q5) How about musical language?

A5) Yeah… I took an elective in music librarianship. My tutor there showed me the tetrachords in Dido and Aeneas as a good example of an opera that people respond to in very particular ways. There are musical styles that map to particular emotions.

Our 5Rights: digital rights of children and young people / Dev Kornish, Dan Dickson, Bethany Wilson (5Rights Youth Commission)

We are from Young Scot and the 5Rights Youth Commission.

1 in 5 young people have missed food or sleep because of the internet.

How many unemployed young people struggle with entering work due to the lack of digital skills? It’s 1 in 10 who struggle with CVs, online applications, and jobs requiring digital skills.

How young do people start building their digital footprint? Before birth – an EU study found that 80% of mothers had shared images, including scans, of their children.

Bethany: We are passionate about our rights and how they can be maintained in a digital world. When it comes to protecting young people online it can be scary… But that doesn’t mean we shouldn’t use the internet or technology when they are used critically. The 5Rights campaign aims to ensure we have that understanding.

Dan: The UNCRC outlines rights and these are: the right to remove; the right to know – who has your data and what they are doing with it; the right to safety and support; the right to informed and conscious use – we should be able to opt out or remove ourselves if we want to; right to digital literacy – to use and to create.

Bethany: Under the right to remove, we do sometimes post things we shouldn’t, but we should be able to remove things if we want to. In terms of the right to know – we don’t read the terms and conditions, but we have the right to be informed, and we need support. The right to safety and support requires respect – dismissing our online life can make us not want to talk about it openly with you. If you speak to us openly and individually then we will appreciate your support, but restrictions cannot be too severe. Technology is designed to be addictive and that’s a reality we need to engage with. Technology is a part of most aspects of our lives, and teaching and the curriculum should reflect that. It’s not just about coding, it’s about finding information, and understanding what is reliable, what sources we can trust. And finally you need to listen to us, to our needs, to be able to support us.

And a question for us: What challenges have you encountered when supporting young people online? [a good question]

And a second question: What can you do in your work to realise young people’s rights in the digital world?

Q1) What digital literacy is being taught in schools right now?

A1) It’s school to school, depends on the educational authority. Education Scotland have it as a priority but only over the last year… It depends…

Q2) My kid’s 5 and she has library cards…

Comment) The perception is that kids are experts by default

A2 – Dan) That’s not the case but there is that perception of “digital natives” knowing everything. And that isn’t the case…

Dan: Do you want to share what you’ve been discussing?

Comment: It’s not just an age thing… Some love technology, some hate it… But it’s hard to be totally safe online… How do you protect people from that…

Dan: It is incredibly difficult, especially in education.

Comment [me]: There is a real challenge when the internet is filtered and restricted – it is hard to teach real world information literacy and digital literacy when you are doing that in an artificial school set up. That was something that came up in the Royal Society of Edinburgh Digital Participation Inquiry I was involved in a few years ago. I also wanted to add that we have a new MOOC on Digital Footprints that is particularly aimed at those leaving school/coming into university.

Bethany: We really want that deletion, when we use our right to remove, to be proper deletion. We really want to know where our data is held. And we want everyone to have access to quality information online and offline. And we want the right to disengage when we want to. And we want digital literacy to be about more than just coding, but also what we do and can do online.

Dan: We invite you all to join our 5Rights Coalition to show your support and engagement with this work. We are now in the final stages of this work and will be publishing our report soon. We’ve spoken to Google, Facebook, Education Scotland, mental health organisations, etc. We hope our report will provide great guidance for implementing the 5Rights.

You can find out more and contact us: 5Rights@young.scot, #5RightsYC, http://young.scot/5rights.


Q1) Has your organisation written any guidance for librarians in putting these rights into action?

A1) Not yet but that report should include some of that guidance.

Playing with metadata / Gavin Willshaw and Scott Renton (University of Edinburgh)

Gavin: Scott and I will be talking about our metadata games project which we’ve been working on for the last few years. My current focus is on PhD digitisation but I’m also involved in this work. I’ll give an overview, what we’ve learned… And then Scott will give more of an idea of the technical side of things.

A few years ago we had two full-time photographers working on high quality digital images. Now there are three photographers, five scanning assistants, and several specialists all working in digitisation. And that means we have a lot more digital content. A few years ago we launched collections.ed.ac.uk, the one-stop shop for our digital collections. You can access the images at: http://images.is.ed.ac.uk/. We have around 30k images, and most are CC BY licenced at high resolution.

Looking at the individual images, we tend to have really good information about the volume an image comes from, but prior to this project we had little information on what was actually in the image. That made the images hard to find, and we didn’t really have anyone to catalogue them. A lot of these images are as much as 10 years old – made for projects but not necessarily intended to go online. So we decided to create this game to improve the description of our collections…

The game has a really retro theme – we didn’t want to spend too long on the design side of things, just keep it simple. And the game is open to everyone.

So, stage 1: tag. You harvest initial tags; it’s an open text box, there is no quality review, and there are points for tags entered. We do have some safety measures to filter out swear words and stop words.

Stage 2: vote. You vote on the quality of others’ tags. It’s a closed system – good/bad/don’t know – which filters out any initial gobbledegook. And you get points…

The tags are QAed and imported into our image management system. We make a distinction between formal metadata and crowdsourced tags. We show that on the record and include a link to the tool – so others can go and play.
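The two stages described above – open tagging with a stop/swear-word filter, then closed voting that gates tags before they are QA’d into the image management system – could be sketched roughly like this. All names, thresholds and word lists are illustrative, not the actual Edinburgh codebase (which is PHP):

```python
STOP_WORDS = {"the", "and", "of"}  # in practice plus a swear-word list
VOTE_THRESHOLD = 3                 # net "good" votes before a tag is accepted

def submit_tag(tags, image_id, text, points, user):
    """Stage 1: accept a free-text tag unless it is a stop/swear word."""
    text = text.strip().lower()
    if not text or text in STOP_WORDS:
        return False
    tags.setdefault(image_id, {})[text] = {"good": 0, "bad": 0}
    points[user] = points.get(user, 0) + 1  # points for every tag entered
    return True

def vote(tags, image_id, text, verdict):
    """Stage 2: closed good/bad/don't-know vote on someone else's tag."""
    if verdict in ("good", "bad"):
        tags[image_id][text][verdict] += 1

def accepted(tags, image_id):
    """Tags whose net score clears the threshold, ready for QA import."""
    return [t for t, v in tags[image_id].items()
            if v["good"] - v["bad"] >= VOTE_THRESHOLD]
```

In the real system the accepted tags would then be written to a separate, clearly demarcated crowdsourced-tag field rather than merged into the formal metadata.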

We don’t see crowdsourcing as being just about free labour, but about communities of people with an interest and knowledge. We see it as a way to engage and connect with people beyond the usual groups – members of the public, educators, anyone really. People playing the game range from 7 to 70’s and we are interest to have the widest audience possible. And obviously the more people use the system, the more tags and participation we get. We also get feedback for improvements – some features in the game came from feedback. In theory it frees up staff time, but it takes time to run. But it lets us reach languages, collections, special knowledge that may not be in our team.

To engage our communities we took the games on tour across our sites. We’ve also brought the activity into other events – Innovative Learning Week/Festival of Creative Learning; Ada Lovelace Day; exhibitions – e.g. the Where’s Dolly game that coincided with the Towards Dolly exhibition. Those events are vital to get interest – it doesn’t work to expect people to just find it themselves.

In terms of motivation people like to do something good, some like to share their skills, and some just enjoy it because it is fun and a wee bit competitive. We’ve had a few (small) prizes… We also display real time high scores at events which gets people in competitive mode.

This also fits into an emerging culture of play in Library and Information Services… looking at play in learning – it being ok to try things whether or not they succeed. These have included Board Game Jam sessions using images from the collections, learning about copyright and IP in a fun context; Ada Lovelace Day, which I’ve mentioned – designing your own Raspberry Pi case out of LEGO, making music… And also Wikipedia editathons – also fun events.

There is also an organisation called Tiltfactor who have their own metadata games looking at tagging and gaming. They have Zen Tag – like ours – but also NexTag for video and audio, and Guess What!, a multiplayer game of description. We put about 2,000 images into the Metadata Games platform Tiltfactor run and got huge numbers of tags quickly. They are at quite a different scale.

We’ve also experimented with Lady Grange’s correspondence in the Zooniverse platform, where you have to underline or indicate names and titles etc.

We’ve also put some of our images into Crowdcrafting to see if we can learn more about the content of images.

There are pros and cons here…

Pros:

  • Hosted service
  • Easy to create an account
  • Easy to set up and play
  • Range of options – not just tagging
  • Easy to load in images from Dropbox/Flickr

Cons:

  • Some limitations of what you can do
  • Technical expertise needed for best value – especially in platforms like Crowdcrafting.

What we’ve learned so far is that it is difficult to create engaging platform but combining with events and activities – with target theme and collections – work well. Incentives and prizes help. Considerable staff time is needed. And crowdsourced tags are a compliment rather than an alternative to the official record.

Scott: So I’ll give the more technical side of what we’ve done. Why we needed them, how we built them, how we got on, and what we’ve learned.

I’ve been hacking away at workflows for a good 7 years. We have a reader who sees something they want, and they request the photograph of the page. They don’t provide much information – just about what is needed. These make for skeleton records – and we now have about 30k of these. It also used to be the case that buying a high end piece of kit can be easier to buy in for a project than a low level cataloguer… That means we end up with data being copied and pasted in by photographers rather than good records.

We have all these skeletons… but we need some meat on our bones. If we take an image from the incunabula we want to know that there’s a skeleton on a horse with a scythe. Now, the image platform we have does let us annotate an image – but it’s hidden away and hard to use. We needed something better and easier. That’s where we came up with an initial front end. When I came in it was a module for us to use. It was Gavin who said “hey, this should be a game”. So the nostalgic computer games thing is weirdly appealing (like the Google Maps Pac-Man April Fool!). So it’s super simple, you put in a few words…

And it is truly lo-fi. It’s LAMP (Linux, Apache, MySQL, PHP) – not cool! The front-end design was retrofitted, and authentication added to let students and staff log in. In terms of design decisions we have a moderation module, a voting module, a scoreboard, and stars for high contributors. And now more complex games: a set number of items, a clock, featured items, and Easter eggs within the game. For instance, in the Dolly the Sheep game we hid a few images with hideous Comic Sans that you could stumble upon if you tagged enough images!

We do have moderation, a voting module, thresholds, demarcation… Tiltfactor told us we’re the only library putting data from the crowd back into our system – people are really nervous about this, but we demarcate it really carefully.

We now have a codebase we can clone. We skin it up differently for particular events or exhibitions – like Dolly – but it’s all the same idea with different design and collections. This all connects up through (authenticated) APIs back into the image management system (Luna).

So, how have we gotten on?

  • 283 users
  • 34070 tags in system
  • 15616 tags from our game
  • 18454 tags from Tiltfactor metadata games pushed in
  • 6212 tags pushed back into our system – the shortfall is due to a backlog in moderation (upvotes may be good enough).

So, what next? Well we have MSc projects coming up. We are having a revamp with an intern signed up for the summer – responsiveness, links to social media, more gamification, more incentives, authentication for non UoE users, etc.

And we are also excited about IIIF – about beautification of websites with embedded viewers, streamlining (thumbnails through the URL; “photoshopping” through the URL, etc.) and annotations. You can do deep zoom into images without having to link out to do it.
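For a flavour of what “thumbnails through the URL” and deep zoom mean in practice: IIIF Image API requests follow a fixed URL pattern – {server}/{identifier}/{region}/{size}/{rotation}/{quality}.{format} – so derivatives are just URLs, with no pre-generated files. The server and identifier below are made up for illustration:

```python
def iiif_image_url(base, identifier, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    """Build a IIIF Image API URL from its five path parameters."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# A thumbnail "through the URL": just request a 200px-wide size...
thumb = iiif_image_url("https://iiif.example.org", "ms-1234", size="200,")

# ...and a deep-zoom detail is the same image with a pixel region:
detail = iiif_image_url("https://iiif.example.org", "ms-1234",
                        region="1000,1000,500,500")
```

Viewers like the ones mentioned above simply issue many such region requests as you zoom, which is why no separate zoom service or linked-out page is needed.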

We also have the Polyglot Project – coming soon – which is a palaeography project for manuscripts in our collections of any age, in any language. We asked an intern to find a transcription and translation module using IIIF. She’s come up with something fantastic… ways to draw around text, for users to add in annotations, to discuss annotations, etc. She’s got 50-60 keyboard layouts, so almost all languages are supported. We’re not sure yet how to bring this back into core systems, but we’re really excited about it.

That’s basically where we’ve gotten to. And if you want to try the games, come and have a play.


Q1) That example you showed for IIIF tagging has words written in widely varied spellings… You wouldn’t key it in as written in the document.

A1 – Scott) We do have a project looking at this, with someone looking at dictionaries to find variants and different spellings.

A1 – Gavin) There are projects like Transcribe Bentham who will have faced that issue…

Comment – Paul C) It’s a common issue… Methods like fuzzy searching help with that…

Q2) I’m quite interested about how you identify parts of images, and how you feed that back to the catalogue?

A2 – Scott) Right now I think the scope of the project is… Well it will be interesting to see how best to feed into catalogue records. Still to be addressed.

Q3 – Paul C) You built this in-house… How open is it? Can others use it?

A3 – Gavin) It is using Luna image management system…

A3 – Scott) It’s based on Luna for derivatives and data. It’s on Github and it is open. The website is open to everyone. You login through EASE – you can join as an “EASE Friend” if you aren’t part of the University. Others can use the code if they want it…

And finally it was me up to present…

Managing your Digital Footprint : Taking control of the metadata and tracks and traces that define us online / Nicola Osborne (EDINA)

Obviously I didn’t take notes on my session, but you can explore the slides below:

Look out for a new blogpost very soon on some of the background to our new Digital Footprint MOOC, which launched on Monday 3rd April. You can join the course now, or sign up to join the next run of the course next month, here: https://goo.gl/jgHLQs

And with that the event drew to a close, with thank-yous to all of the organisers, speakers, and attendees!


 April 5, 2017  Posted at 11:08 am in Events Attended, LiveBlogs, Presentation and Performance
Mar 15 2017

Today I’m still in Birmingham for the Jisc Digifest 2017 (#digifest17). I’m based on the EDINA stand (stand 9, Hall 3) for much of the time, along with my colleague Andrew – do come and say hello to us – but will also be blogging any sessions I attend. The event is also being livetweeted by Jisc and some sessions livestreamed – do take a look at the event website for more details. As usual this blog is live and may include typos, errors, etc. Please do let me know if you have any corrections, questions or comments. 

Part Deux: Why educators can’t live without social media – Eric Stoller, higher education thought-leader, consultant, writer, and speaker.

I’ve snuck in a wee bit late to Eric’s talk but he’s starting by flagging up his “Educators: Are you climbing the social media mountain?” blog post. 

Eric: People who are most reluctant to use social media are often those who are also reluctant to engage in CPD, to develop themselves. You can live without social media, but social media is useful and important. Why is it important? It is used for communication, for teaching and learning, in research, in activism… Social media gives us a lot of channels to do different things with, that we can use in our practice… And yes, they can be used in nefarious ways, but so can any other media. People are often keen to see particular examples of how they can use social media in their practice in specific ways, but how you use things in your practice is always going to be specific to you, different, and that’s ok.

So, thinking about digital technology… “Digital is people” – as Laurie Phipps is prone to say… Technology enhanced learning is often tied up with employability, but there is a balance to be struck between employability and critical thinking. So, what about social media and critical thinking? We have to teach students how to determine whether an online source is reliable or legitimate – social media is the same way… And all of us can be caught out. There was a piece in the FT about the chairman of Tesco saying unwise things about gender, race, etc. And I tweeted about this – but I said he was the CEO – and it got retweeted and included in a Twitter Moment… But it was wrong. I did a follow-up tweet and apologised, but I had contributed to that…

Whenever you use technology in learning it is related to critical thinking so, of course, that means social media too. How many of us here did our education completely online? Most of us did our education in the “sage on the stage” manner; that’s what was comfortable for us… And that can be uncomfortable (see e.g. tweets from @msementor).

If you follow the NHS on Twitter (@NHS) then you will know it is phenomenal – they have a different member of staff guest posting to the account. Including live tweeting an operation from the theatre (with permissions etc. of course) – if you are a medical student this would be very interesting. Twitter is the delivery method now, but maybe in the future it will be HoloLens or Oculus Rift Live or something. Another thing I saw about a year ago was Phil Baty (Times Higher Education – @Phil_Baty) talking about Liz Barnes revealing that every academic at Staffordshire will use social media and will build it into performance management. That really shows an organisation that is looking forward and trying new things.

Do any of you take part in the weekly #LTHEchat? They have been having chats about considering participation in that chat as part of staff appraisal processes. That’s really cool. And why wouldn’t social media and digital be a part of that?

So I did a Twitter poll asking academics what they use social media for:

  • 25% teaching and learning
  • 26% professional development
  • 5% research
  • 44% posting pictures of cats

The cool thing is you can do all of those things and still be using it in appropriate educational contexts. Of course people post pictures of cats… Of course you do… But you use social media to build community. It can be part of building a professional learning environment… You can use social media to lurk and learn… to reach out to people… And it’s not even creepy… A few years back, if I said “I follow you”, that would be weird and sinister… Now it’s “That’s cool, that’s Twitter”. Some of you will have been using the event hashtag and connecting there…

Andrew Smith, at the Open University, has been using Facebook Live for teaching. How many of your students use Facebook? It’s important to try this stuff, to see if it’s the right thing for your practice.

We all have jobs… Usually when we think about professional networking we think about LinkedIn… Any of you using LinkedIn? (yes, a lot of us are). How about blogging on LinkedIn? That’s a great platform to blog on, as your content reaches people who are really interested. But you can connect in all of these spaces. I saw @mdleast tweeting about one of Anglia Ruskin’s former students who was running the NHS account – how cool is that?

But, I hear some of you say, Eric, this blurs the social and the professional. Yes, of course it does. Any of you have two Facebook accounts? I’m sorry, you’re violating the terms of service… And yes, of course social media blurs things… but expressing the full gamut of our personality is much more powerful. And it can be amazing when senior leaders model for their colleagues that they are full humans, talking about their academic practice, their development…

Santa J. Ono (@PrezOno/@ubcprez) is a really senior leader but has been having mental health difficulties and tweeting openly about that… And do you know how powerful that is for his staff and students that he is sharing like that?

Now, if you haven’t seen the Jisc Digital Literacies and Digital Capabilities models? You really need to take a look. You can use these to use these to shape and model development for staff and students.

I did another poll on Twitter asking “Agree/Disagree: Universities must teach students digital citizenship skills” (85% agree) – now, we can debate what “digital citizenship” means… Have any of you ever gotten into it with a troll online? Those words matter, they affect us. And digital citizenship matters.

I would say that you should not fall in love with digital tools. I love Twitter but that’s a private company, with shareholders, with its own issues… And it could disappear tomorrow… And I’d have to shift to another platform to do the things I do there…

Do any of you remember YikYak? It was an anonymous geosocial app… and it was used controversially and for bullying… So they introduced handles… But their users rebelled! (and they reverted)

So, Twitter is great but it will change, it will go… Things change…

I did another Twitter poll – which tools do your students use on a daily basis?

  • 34% snapchat
  • 9% Whatsapp
  • 19% Instagram
  • 36% use all of the above

A lot of people don’t use Snapchat because they are afraid of it… When Facebook first appeared that response was it’s silly, we wouldn’t use it in education… But we have moved that there…

There is a lot of bias about Snapchat. @RosieHare posted “I’m wondering whether I should Snapchat #digifest17 next week or whether there’ll be too many proper grown ups there who don’t use it.” Perhaps we don’t use these platforms yet, maybe we’ll catch up… But will students have moved on by then… There is a professor in the US who was using Snapchat with his students every day… You take your practice to where your students are. According to global web index (q2-3 2016) over 75% of teens use Snapchat. There are policy challenges there but students are there every day…

Instagram – 150 M people engage with daily stories so that’s a powerful tool and easier to start with than Snapchat. Again, a space where our students are.

But perfection leads to stagnation. You have to try and not be fixated on perfection. Being free to experiment, being rewarded for trying new things, that has to be embedded in the culture.

So, at the end of the day, the more engaged students are with their institution – at college or university – the more successful they will be. Social media can be about doing that, about the student experience. All parts of the organisation can be involved. There are so many social media channels you can use. Maybe you don’t recognise them all… Think about your students. A lot will use WhatsApp for collaboration, for coordination… Facebook Messenger, some of the Asian messaging spaces… Any of you use Reddit? Ah, the nerds have arrived! But again, these are all spaces you can develop your practice in.

The web used to involve having your birth year in your username (e.g. @purpledragon1982), it was open… But we see this move towards WhatsApp, Facebook Messenger, WeChat, these different types of spaces and there is huge growth predicted this year. So, you need to get into the sandbox of learning, get your hands dirty, make some stuff and learn from trying new things #alldayeveryday


Q1) What audience do you have in mind… Educators or those who support educators? How do I take this message back?

A1) You need to think about how you support educators, how you do sneaky teaching… How you do that education… So.. You use the channels, you incorporate the learning materials in those channels… You disseminate in Medium, say… And hopefully they take that with them…

Q2) I meet a strand of students who reject social media and some technology in a straight edge way… They are in the big outdoors, they are out there learning… Will they not be successful?

A2) Of course they will. You can survive, you can thrive without social media… But if you choose to engage in those channels and spaces, you can be successful too… It’s not an either/or.

Q3) I wanted to ask about something you tweeted yesterday… That Prensky’s idea of digital natives/immigrants is rubbish…

A3) I think I said “#friendsdontletfriendsprensky”. He published that over ten years ago – 2001 – and people grasped onto that. And he’s walked it back to being about a spectrum that isn’t about age… Age isn’t a helpful factor. And people used it as an excuse… If you look at Dave White’s work on “visitors and residents” that’s much more helpful… Some people are great, some are not as comfortable but it’s not about age. And we do ourselves a disservice to grasp onto that.

Q4) From my organisation… One of my course leaders found their emails were not being read, asked students what they should use, and they said “Instagram” but then they didn’t read that person’s posts… There is a bump, a challenge to get over…

A4) In the professional world email is the communications currency. We say students don’t check email… Well you have to do email well. You send a long email and wonder why students don’t understand. You have to be good at communicating… You set norms and expectations about discourse and dialogue, you build that in from induction – and that can be email, discussion boards and social media. These are skills for life.

Q5) You mentioned that some academics feel there is too much blend between personal and professional. From work we’ve done in our library we find students feel the same way and don’t want the library to tweet at them…

A5) Yeah, it’s about expectations. Liverpool University has a brilliant Twitter account, Warwick too, they tweet with real personality…

Q6) What do you think about private social communities? We set up WordPress/BuddyPress thing for international students to push out information. It was really varied in how people engaged… It’s private…

A6) Communities form where they form. Maybe ask them where they want to be communicated with. Some WhatsApp groups flourish because that’s the cultural norm. And if it doesn’t work you can scrap it and try something else… and see what works.

Q7) I wanted to flag up a YikYak study at Edinburgh on how students talk about teaching, learning and assessment on YikYak, that started before the handles were introduced, and has continued as anonymity has returned. And we’ll have results coming from this soon…

A7) YikYak may rise and fall… But that functionality… There is a lot of beauty in those anonymous spaces… That functionality – the peers supporting each other through mental health… It isn’t tools, it’s functionality.

Q8) Our findings in a recent study was about where the students are, and how they want to communicate. That changes, it will always change, and we have to adapt to that ourselves… Do you want us to use WhatsApp or WeChat… It’s following the students and where they prefer to communicate.

A8) There is balance too… You meet students where they are, but you don’t ditch their need to understand email too… They teach us, we teach them… And we do that together.

And with that, we’re out of time… 

Are you future ready? Preparing students for living and working in the digital world

Introduction –  Lisa Gray, senior co-design manager, Jisc.

The Connected Curricula model is about ensuring that employability is built into the curriculum: in T-profile curricula; employer engagement; and assessment for learning. That assessment is about assessing throughout the student experience as they progress through the curriculum.

The Jisc employability toolkit talks more about how this can be put into action. Technology-for-employability aspects include enhanced authentic and simulated learning experiences; enhanced lifelong learning and employability; digital communications and engagement with employers; and enhanced employability skills development – including learner skills diagnostics, self-led assessment, and employer-focused digital literacy development.

The employable student in the digital age model. The toolkit unpicks the capabilities that map into that context.

You can find out more, along with other resources, at: http://ji.sc/

The Employer View: Preparing students for a digital world – Deborah Edmondson, talent director, Cohesion Recruitment

We manage early talent recruitment processes. Whilst it is clear that automation is replacing some roles, it won’t replace creativity, emotional awareness, and similar skills and expertise.

Graduate vacancies are reducing this year – this is the third time in the last four years. Some of that is associated with Brexit – especially in construction – but it also represents a rise in apprentice roles. Many employers are moving existing training programmes to the new Apprenticeship model (and levy). Recruitment for early talent is typically: online application, video interview, psychometric testing, assessment centre. Some employers gamify that process. And we are also seeing a big influence of the parental role as well.

Employers have had to up their own digital skills in order to recruit graduates. We’ve had to ensure application forms are online and mobile enabled. And we know that online forms are not the best predictor of who will succeed in graduate recruitment, so we’ve reduced or removed them. Video interviews are becoming much more frequent as they give the best idea of a candidate’s skills, confidence and communication. We still see psychometric testing but there is less focus there; it’s more about contextual recruitment, focusing less on scores and more on the context of that student and their achievement. We are also starting to see virtual reality in final stages of recruitment – this is about understanding authentic reactions and responses rather than pre-prepared responses.

So, what do employers want in terms of digital skills? A lot of the time it’s not about skills; often it’s about willingness to use digital skills and capabilities. There are nine key attributes and I’d particularly like to draw your attention to business communications. Students often focus on immediacy… But the reality of business and its tools is that things can move slowly, so graduates need real flexibility. The other area I wanted to raise is etiquette: one client mentioned a graduate recruit sending multiple chasers in a single email – that’s just annoying. Similarly use of text speak – wholly inappropriate. Also hiding behind the screen – only emailing and reluctant to call or meet face to face…

Graduates have great skills but they are also described as entitled, hard to manage, etc. So, how can universities help? Well, expectations – around success and job satisfaction, as well as about the kinds of technologies they will be using. There isn’t immediacy or instant gratification in the world of work; patience is required. It is about business communication – that emails are long enough and professional enough, and avoiding text speak or emoji in emails, or phrases like “in my oils” which won’t mean much to employers! We also need graduates who are able and willing to have conversations – face to face conversations, phone conversations – they have to be able to talk about their work. And with digital footprint – this can come back to haunt you. We have recruiters for high security roles that even check online purchase history – if it’s out there, we will find it. And it’s about perceptions too – those with ambitious career plans have to bear that in mind in how they present themselves from day one. And Excel – it’s important in business but not all students have experience of it. Research… graduates need to be professional on LinkedIn (including photographs) and be able to do the research, to understand the employer, but not to be too stalkery. And it’s about employer interaction – we receive abusive, sweary, etc. responses to rejections, but graduates need to be asking for feedback and being graceful in dealing with rejection.

Note: for those interested in digital footprint you should take a look at our new #dfmooc which launches next month and is already open for registration: https://www.coursera.org/learn/digital-footprint.

SERC – Kieran McKenna, South Eastern Regional College

At SERC a student’s first few weeks are about entrepreneurship, with guest speakers, student volunteers, and project based learning built around PBL/Enterprise Fairs. We see success in a number of areas and skills contests because of this model. We use the CAST/CAPS approach – Conference for Advancement of Science and Technology – with students working with industry standard PBL and enterprise learning. We also take a “whole-brain learning” approach – ensuring students understand how they learn best.

So, now we will look at three ways we have enabled this. We created a Whole Brain eLearning resource – called EntreBRAINeur – where students learn about typical skills of entrepreneurs, have information about the brain, and answer questions that report back to them on their left brain/right brain placement and their learning styles… One message to take home is the language we use… That the following information “may be of benefit to your working styles” – encouraging the learner in a positive way. The learner knows best how they learn best. And we link results with activity planning – so you can look at a group with their right/left brain dominance.

So, with that, we are going to see a short video on this…

So, having created this tool we set up an enterprise portal. This has objectives including sharing enterprise and entrepreneurship best practice across multiple campuses. For the PBL activities students create a web presence explaining how they undertook the PBL design cycle, and they are looking for votes on their projects. They are then assessed against creativity; innovation; team working; and solutions matching the challenge.

So, are we future ready? Looking at students who completed the e-resource found that only about 10% of our students have an entrepreneurial mindset… But we are confident that the tools, the learning tools, the peer assessment will give our students the edge they need.

Self-designed learning and “future proofing” graduates – Ian Pirie, Emeritus Professor, University of Edinburgh

I am going to talk about self-designed learning. We are two years into a pilot programme in Edinburgh where students literally design their own project; it is approved, they manage it, it is assessed, and it ends up in an eportfolio online. Edinburgh is a large university – 3 colleges, 22 schools – and we don’t always do things the same way. We had a number of factors colliding – we have a QAA Enhancement theme around learning and a large careers team which was looking for more self-led opportunities; employers were also saying they valued graduates but felt some skills could be stronger; and for students in e.g. humanities your tutor would tell you what you must do, but you also have a choice of modules – from over 8.5k courses, which is quite intimidating… And staff also wanted to teach their specialist areas, which is a challenge.

So I’ll talk in four areas here…

A rapidly changing world… Students can now access all information very quickly, globally, 24/7. It often isn’t the students’ ability to use technology that lags; it’s often universities and employers that fall behind. For education the challenge can be that the kind of teaching we are used to doing isn’t necessarily fit for purpose. Traditionally teaching is information rich and assessed a few times in a semester, and that isn’t what they need and is frustrating. And we also see a socially mobile environment – university and private coffee shops used socially and professionally by students. And in fact the Kaplan Graduate Recruitment Report 2014 suggests 1 in 2 will become future leaders – and 60% of businesses are looking for graduates with leadership skills.

Looking at the CBI Survey Data – as already mentioned earlier – it really isn’t about the subject area. It is about having studied to a particular level… Not what you have learned in the course in terms of subject content. So how can that be taught? And when we survey our own students we find frustration amongst some students about the way they are taught. And indeed the importance of understanding that equality doesn’t mean treating everyone the same – there is a lot of literature here and it is hard to see how we implement this, particularly at scale.

Students are consistently very clear about what they would like… They would like to be treated professionally and individually, they want clarity about what is expected of them and what they can expect in return. They want clarity in assessment criteria with associated timely and effective feedback – an issue across the sector. They also want an academic community comprised of vertical peer groups and academic staff. They want 24/7 access to online information, ideally in one place. And they increasingly want assurance that they are being prepared for the future.

And, for so many reasons, there is a lot of change. HE can be slow to change… But we need to move away from a teaching model towards a learning model where the tutor supports that learning. It is about accepting responsibility for “future proofing” the whole person, and part of that is about ensuring that “digital literacy” is embedded in the curriculum, as well as the abstract skills.

So, three years ago we developed our vision for a future curriculum. Some of the steps here look innocuous, but some will really radically upset academics – we wanted to design out passive learning. If a student can sleep through a lecture, hand in an essay, do an exam, and that’s them completed the course, that’s not good enough. We also wanted appropriate use of technology – there is no substitute for the face to face experience. Each student is also required to use online learning in some form, to prepare them for the future, for elearning, for their ongoing development…

And that takes us to the SLICCs. This is a university-wide framework contextualised to the discipline by each student. There is one framework, and the student then contextualises their own course. Students create, own and manage their SLICC, and are formatively assessed. There is deliberately minimal input and supervision from academic staff – it’s a lot of work, but for the student, not the staff. Inductions are done by Institute for Academic Development staff… the academic input is at the “front end” for induction and presentation of the proposal. But students then reflect on their experience.

In order to do this our inductions are face to face – not online – to make sure students are able to take on the SLICC. They also cannot take on a SLICC if they have any fails – academically they have to be solid to go into this phase of their learning. So, the process is for the student to identify and select a learning experience – often a work placement related project; they develop a proposal and work plan; and then engage in ongoing reflection – sometimes once a day. Then there is formative self-assessment by the student, and summative assessment by staff. Staff don’t see the formative assessment until they have marked the work but in our pilots we had over 96% correlation between those assessments.
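The talk doesn’t say how that 96% figure was calculated; one plausible reading – the share of student self-assessments falling within a small tolerance of the staff mark – can be illustrated with invented grades (all numbers below are made up for the sketch):

```python
# Hypothetical formative (self-assessed) and summative (staff) marks
# for a small cohort, on a 0-100 scale.
self_grades  = [72, 65, 80, 58, 90, 77, 61, 84]
staff_grades = [70, 66, 80, 55, 88, 77, 60, 85]

# Count a pair as "in agreement" when the two marks fall within a
# small tolerance band of each other (here, 3 marks).
TOLERANCE = 3
agree = sum(abs(s - t) <= TOLERANCE
            for s, t in zip(self_grades, staff_grades))
print(f"{100 * agree / len(self_grades):.0f}% agreement")  # → 100% agreement
```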

We are used to seeing staff responsibility for returning marked work etc. But we also make it clear what the student expectations are in terms of giving and receiving feedback (separate from the SLICC), with students needing to submit that self-graded assessment constructively aligned to the LOs. A critically-selective web folio is submitted along with an (up to 2000 word) report. Initially there was concern that SLICCs were 20 credits and students wouldn’t do the work… But they have done mountains of work and really produced fantastic engaged pieces. Students gave us feedback on the courses, but the technology is barely mentioned – the staff struggled more – as the students learned most from the self-management and self-direction. Students from pilot 1 immediately signed up for pilot 2… And now it is mainstream. As one student says “it made me take control of my own learning”. I can’t show you all the portfolios now but if you look at our website, you’ll find out much more: http://www.ed.ac.uk/employability/slicc. Contact Simon Riley and Gavin McGabe for more information.


Q1) Coming back to the first speaker I was quite concerned about the phrase “early talent” as it implies all graduates are young.

A1 – DE) That’s fair. It is a collective term but employers tend to separate into apprenticeships and graduate programmes. But graduate programmes aren’t dependent on age.

Q2) On PebblePad and ePortfolios – do students use those with employers? Are they effective tools for jobs?

A2 – DE) From employers perspective we don’t see them in high volume. We follow it quite closely. We see more of universities encouraging students to use LinkedIn profiles instead.

A2 – IP) For many this approach is new to the students and staff. But in medicine the idea of portfolios is well embedded, and those courses have just adopted PebblePad for that purpose. But it’s discipline specific… And students thought about it before being asked, and staff seem enthusiastic.

Q3) About the neurological approach to learning… Isn’t there a real risk of thinking of learning being only for employment… What about motivation, what about changes in the market?

A3 – KM) We predominantly try to develop “whole brain” learners. We have electricians and plasterers taking that whole brain learning questionnaire – it’s interesting for them to look at that, to look back at their school experience and how their preference shapes that. The response from students has been quite positive.

Q4) We talked about this on Twitter already but I really hope that you use “left brain” and “right brain” and “learning styles” lightly – these have been debunked so perhaps give students a false sense of security… We are complex organisms… And maybe its just a way to articulate different potential… [Thank you to this person, it was a concern I had too!]

A4 – KM) We do try to address a lot of different learning styles… There is a wide variety of how that phrase is used… A real range of different skills that learners can have. It is important not to pigeon hole… But it is useful to raise awareness of how we can develop as people, regardless of how we label this. There are a range of approaches to this… This is the one that we are using.

Q5) There can be this sense of higher education as being to train the best people for employers – the best meat almost. What is the role and responsibility for employers to train graduates?

A5 – DE) There are training schemes; employers are aware of the need to train students and graduates – around 35% of students who complete a year long industrial placement will be offered a role with that employer, in recognition of the training investment and importance to employers.

Closing plenary and keynote from Lauren Sager Weinstein, chief data officer at Transport for London

The host for this session is Andy McGregor, deputy chief innovation officer, Jisc. He is introducing the session with the outcome of the start up competition that has been running over the last few days. The pitches took place last night. The winners will go into the Jisc Business Accelerator programme, providing support and some funding to take their ideas forward. And we are keen and happy to involve you in this programme so do get in touch… You’ll see us present the results digitally – an envelope seemed just too risky!

The winner of the public vote is Wildfire. And the further teams entering the project are Hubbub, Lumici Slate, Ublend, VineUp. We were hugely impressed with the quality of all of the entries – those who entered, those who were shortlisted, and the small cross section you’ve seen over the last two days.

And now… Lauren Sager Weinstein

I wanted to start by talking about the “why”… TfL has a diverse offering of transport across London – trains, buses, bikes… What are we trying to achieve? We want to deliver transport and mobility services, and to deliver for the mayor. We want to keep London working and growing. And when we think about my team and the work that we do… Our goal is to do things that help influence and contribute to the goals of the wider organisation – putting our customers and users at the core of all of our decision making; to drive improvement in reliability and safety; to be cost effective; to improve what we do.

Our customers want to understand what we stand for: excellent reliability and customer experience; value for money; and progress and innovation. And they want to know that we have a level of trust that guides what we do and underpins how we use data. And I want to talk about how we use data that is personal, how we strip identifying data out. It is incredibly important that we respect our customers’ privacy. We tell our customers about how we collect data, and we also have more information online. We work closely with our Privacy and Data Protection team; all new data initiatives undergo a Privacy Impact Assessment, and we have regular engagement with the ICO and rely on their guidance. When we do share any sensitive data we make use of Non-disclosure agreements.

So, our data – we are very lucky as we are data rich. We have 19 million smartcard ticketing transactions a day from 12 million active cards. We know where our buses are – capturing 4.5 million bus locations a day using iBus geo-located events. We have 500k rows of train diagnostic data on the Central Line alone. We have 250k train locations. We have data from the TfL website. That is brilliant, but how do we make that useful? How do we translate that data into something we can use – that’s where my role comes in.

So we take this data and we use it to create a lot of integrated travel information that is used on our website, in tailored emails, and in 600 travel apps powered by open data and created by third party app developers. We also provide advice to customers on travel options… This is where we use data to see which data is most useful… We use data on areas that are busy in terms of entrances and exits – and use that in posters in stations to help customers shift behaviours… If we tell them, they have the ability to make a change – whether or not they do.

We also look at customer patterns – based on taps from cards. We anonymise the users but keep a (new) unique id to understand patterns of travel… Some users follow clear commuter patterns – Monday to Friday, we can see where home and work are, etc. But others do not fit clear patterns – part time workers, occasional attenders etc. But understanding that data lets us understand demand, peaks, and planning of shops for an area too. We also use data to help us put things right when they go wrong – paying for delays on the underground or overground. If things go *really* wrong we will look at pattern analysis and automatically refund them – that shows customers that we value them and their time, and means we have fewer forms to process.
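TfL’s actual pipeline isn’t described in detail here, but the general technique – replacing the card number with a new stable identifier so journeys can be linked without exposing the card – is commonly done with a keyed hash. A minimal sketch, in which the key, function name and card numbers are all illustrative:

```python
import hashlib
import hmac

# Hypothetical secret key held only by the analytics team; without it
# the pseudonymous ID cannot be reversed to a card number.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymise(card_number: str) -> str:
    """Map a smartcard number to a stable pseudonymous ID.

    The same card always yields the same ID, so travel patterns can be
    linked across taps without ever storing the card number itself.
    """
    digest = hmac.new(SECRET_KEY, card_number.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return digest[:16]  # truncated for readability

# Same card, same ID across taps; different cards, different IDs.
assert pseudonymise("1234567890") == pseudonymise("1234567890")
assert pseudonymise("1234567890") != pseudonymise("0987654321")
```

Note that, as Lauren says in the Q&A below, a hashed number is still sensitive data: it permits linkage, so it is pseudonymous rather than anonymous.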

We also use data to manage maintenance schedules, so that we can fix small things quickly to avoid bigger issues that would need fixing later on. We also use data to understand where our staff are deployed. If we know where hotspots for breakdowns are, we can deploy recovery teams more strategically. We also use data in real time operations so controllers can change the road network to manage the traffic flows most effectively.

We have also done work to consider the future and growth. We have created an algorithm to answer a question we used to have to answer with surveys… With the underground you tap on and off… But on the buses you only tap on… So we looked at inferring bus journeys… We take our bus boarding entry taps, plus other modal taps, and iBus event data to work out where passengers likely exited the bus. We use it to plan busy parts of the network – where more buses may be required at busy times. Also to plan out interchanges – we are changing our road layout considerably to make it better for vulnerable road users – and to understand at a granular level how customers use our network.
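The talk describes the inference only at a high level; a toy version of the idea – guess the alighting point as the stop on the boarded route nearest to the passenger’s next tap – might look like this (the route, stop coordinates and tap data are all invented):

```python
from dataclasses import dataclass
from math import hypot
from typing import Optional, Tuple

@dataclass
class Tap:
    minute: int                    # minutes since midnight
    location: Tuple[float, float]  # (x, y) in arbitrary grid units
    route: Optional[str] = None    # bus route if this was a bus boarding

# Invented stop coordinates for one bus route.
ROUTE_STOPS = {"25": [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]}

def infer_alighting(boarding: Tap, next_tap: Tap) -> Tuple[float, float]:
    """Guess where the passenger left the bus: the stop on the boarded
    route closest to wherever they next touched in."""
    return min(ROUTE_STOPS[boarding.route],
               key=lambda s: hypot(s[0] - next_tap.location[0],
                                   s[1] - next_tap.location[1]))

# Board route 25 at 9am, then tap into a station near (3.2, 0.4):
# the likely alighting stop is (3, 0).
print(infer_alighting(Tap(540, (0, 0), "25"), Tap(570, (3.2, 0.4))))
```

The real algorithm presumably also uses the iBus event data mentioned above to constrain which stops the boarded vehicle actually served; this sketch only shows the nearest-stop step.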

We are always looking to solve problems and to do so in an innovative way… We are industry leaders in a number of areas. We have had wifi on the tube since 2012. We are currently looking to see if wifi data will enable us to plan better. In 2016 we ran a four week pilot to explore the value of wifi connection data. When devices tried to connect to routers in stations we grabbed a timestamp, location and a (scrambled) device id. We are analysing that data… But the trial was about the easier use cases. The cases we are currently looking at are about what we can learn about customer patterns from wifi data… And we were deliberately very transparent in that trial, with posters in situ, information online, and a real push to ensure that people were informed about what we were collecting, and how to opt out.

Finally, we have an open data policy. We support developers and the developer economy. This is delivered at very little cost, and our web presence is seen as industry leading. We also do work with universities around six key areas, and we then work with academics on proofs of concept with TfL support. Those can become TfL proofs of concept and eventually end up being operational.

So, we are keen to engage with students to come and work with us. We are planning ways to support STEM/STEAM activities in schools, to create targeted interventions – it helps us develop the next generation and enables us to deliver the mayor’s education strategy. We’ve done coding events, work with the Science Museum, and with local schools.

To finish, my big data principles: focus on protecting the privacy of our customers – that is paramount; focus on the right problems you face – interesting is not enough; and don’t start with data… Instead we think of an approach along the lines of…

  • As a [my job title]
  • I need [big data insights]
  • So that I can [make a decision my job expects me to]

Operational infrastructure generates data… so it is crucial to interpret, translate and understand that data to make it useful. 


Q1) What have you done in terms of data from disabled travellers?

A1) We have users with freedom passes… but it depends on what the disability is… so data is hard to tease out. We need a combination of automatic data and talking to our users – so you can take patterns to small groups… And to test and discuss those.

Q2) You mentioned that you provide open data for others. Have you thought about student projects… can you provide databank of problems or projects that students could work on?

A2) We are just beginning this now. We have ongoing research projects that require in depth knowledge of our work. We also have an opportunity for key questions and key samples – you can see that data today. It isn’t packaged for schools but there is an opportunity on air quality, travel patterns, whether students can find local stops, etc. There is real opportunity but still more to do.

Q3) As cities become increasingly populated with self driving autonomous vehicles the data may inform those, but also uber and tesla already collect huge amounts of data…

A3) We have some data on cars but it’s high level. To understand our road customers though we are keen to work with the appropriate companies – some are more open than others – and to understand how we can work with our customers. Historical data is easier but real time analysis is really where we want to be. 

Q4) About information and data protection… you could argue that marginal impact is low for the individual… but compared to cost of security after a data breach… I was wondering how you decided on that balance, and the rights and expectations…

A4) Well, we asked our customers if they were comfortable with the approach. They were asked tangible questions about how data could be used… When we focus on what is tangible and will improve the network for Londoners, that helps. And that pseudonymous data means you have a hashed number, not the full card number, but it is still sensitive. Customers can opt into giving us more data – and with wifi we advised customers to switch off wifi if they did not want to be part of the study. It’s about customers being comfortable to engage with us at the level that they want.

Sincere apologies for the quality of my liveblogging for Lauren’s talk – my computer decided to crash about two thirds of the way through and only part of the post was successfully autosaved, with the remaining notes made on my phone. Look at the tweets and others’ write ups for further detail, or check out the excellent TfL site where I know there is already a lot of good information on their open data and their recent wifi work.

And with that Digifest is over for another year. Particular thanks to all who dropped by EDINA’s stand and chatted with Andrew and me – we were delighted to catch up with so many EDINA customers and people interested in our project work and possible opportunities to work together in the future. We are always delighted to meet and hear from our colleagues across the sector so do leave a comment here or drop us a line if you have any comments, questions or ideas you’d like to discuss.

 March 15, 2017  Posted by at 10:10 am  Digital Education, LiveBlogs
Mar 14 2017

Today and tomorrow I’m in Birmingham for the Jisc Digifest 2017 (#digifest17). I’m based on the EDINA stand (stand 9, Hall 3) for much of the time, along with my colleague Andrew – do come and say hello to us – but will also be blogging any sessions I attend. The event is also being livetweeted by Jisc and some sessions livestreamed – do take a look at the event website for more details. As usual this blog is live and may include typos, errors, etc. Please do let me know if you have any corrections, questions or comments. 

Plenary and Welcome

Liam Earney is introducing us to the day, with the hope that we all take something away from the event – some inspiration, an idea, the potential to do new things. Over the past three Digifest events we’ve taken a broad view. This year we focus on technology expanding and enabling learning and teaching.

LE: So we will be talking about questions we asked through Twitter and through our conference app with our panel:

  • Sarah Davies (SD), head of change implementation support – education/student, Jisc
  • Liam Earney (LE), director of Jisc Collections
  • Andy McGregor (AM), deputy chief innovation officer, Jisc
  • Paul McKean (PM), head of further education and skills, Jisc

Q1: Do you think that greater use of data and analytics will improve teaching, learning and the student experience?

  • Yes 72%
  • No 10%
  • Don’t Know 18%

AM: I’m relieved at that result as we think it will be important too. But that is backed up by evidence emerging in the US and Australia around data analytics use in retention and attainment. There is a much bigger debate around AI and robots, and around Learning Analytics there is that debate about human and data, and human and machine can work together. We have several sessions in that space.

SD: Learning Analytics has already been around its own hype cycle… We had huge headlines about the potential about a year ago, but now we are seeing much more in-depth discussion, discussion around making sure that our decisions are data informed… There is concern around the role of the human here, but the tutors, the staff, are the people who access this data and work with students, so it is about human and data together – and that’s why adoption is taking a while, as they work out how best to do that.

Q2: How important is organisational culture in the successful adoption of education technology?

  • Total make or break 55%
  • Can significantly speed it up or slow it down 45%
  • It can help but not essential 0%
  • Not important 0%

PM: Where we see education technology adopted we do often see that organisational culture can drive technology adoption. An open culture – for instance Reading College’s open door policy around technology – can really produce innovation and creative adoption, as people share experience and ideas.

SD: It can also be about what is recognised and rewarded. About making sure that technology is more than what the innovators do – it’s something for the whole organisation. It’s not something that you can do in small pockets. It’s often about small actions – sharing across disciplines, across role groups, about how technology can make a real difference for staff and for students.

Q3: How important is good quality content in delivering an effective blended learning experience?

  • Very important 75%
  • It matters 24%
  • Neither 1%
  • It doesn’t really matter 0%
  • It is not an issue at all 0%

LE: That’s reassuring, but I guess we have to talk about what good quality content is…

SD: I think materials – good quality primary materials – make a huge difference, there are so many materials we simply wouldn’t have had (any) access to 20 years ago. But also about good online texts and how they can change things.

LE: My colleague Karen Colbon and I have been doing some work on making more effective use of technologies… Paul you have been involved in FELTAG…

PM: With FELTAG I was pleased when that came out 3 years ago, but I think only now we’ve moved from the myth of 10% online being blended learning… And moving towards a proper debate about what blended learning is, what is relevant not just what is described. And the need for good quality support to enable that.

LE: What’s the role for Jisc there?

PM: I think it’s about bringing the community together, about focusing on the learner and their experience, rather than the content, to ensure that overall the learner gets what they need.

SD: It’s also about supporting people to design effective curricula too. There are sessions here, talking through interesting things people are doing.

AM: There is a lot of room for innovation around the content. If you are walking around the stands there is a group of students from UCL who are finding innovative ways to visualise research, and we’ll be hearing pitches later with some fantastic ideas.

Q4: Billions of dollars are being invested in edtech startups. What impact do you think this will have on teaching and learning in universities and colleges?

  • No impact at all 1%
  • It may result in a few tools we can use 69%
  • We will come to rely on these companies in our learning and teaching 21%
  • It will completely transform learning and teaching 9%

AM: I am towards the 9% here, there are risks but there is huge reason for optimism here. There are some great companies coming out and working with them increases the chance that this investment will benefit the sector. Startups are keen to work with universities, to collaborate. They are really keen to work with us.

LE: It is difficult for universities to take that punt, to take that risk on new ideas. Procurement, governance, are all essential to facilitating that engagement.

AM: I think so. But I think if we don’t engage then we do risk these companies coming in and building businesses that don’t take account of our needs.

LE: Now that’s a big spend taking place for that small potential change that many who answered this question perceive…

PM: I think there are savings that will come out of those changes potentially…

AM: And in fact that potentially means saving money on tools we currently use by adopting new ones, and investing that into staff…

Q5: Where do you think the biggest benefits of technology are felt in education?

  • Enabling or enhancing learning and teaching activities 55%
  • In the broader student experience 30%
  • In administrative efficiencies 9%
  • It’s hard to identify clear benefits 6%

SD: I think many of the big benefits we’ve seen over the last 8 years have been around things like online timetables – the wider student experience and administrative spaces. But we are also seeing that, when used effectively, technology can really enhance the learning experience. We have a few sessions here around that. Key here is the digital capabilities of staff and students – whether awareness, confidence, or understanding of fit with disciplinary practice. Lots here at Digifest around digital skills. [sidenote: see also our new Digital Footprint MOOC which is now live for registrations]

I’m quite surprised that 6% thought it was hard to identify clear benefits… There are still lots of questions there, and we have a session on evidence based practice tomorrow, and how evidence feeds into institutional decision making.

PM: There is something here around the Apprenticeship Levy which is about to come into place. A surprisingly high percentage of employers aren’t aware that they will be paying it! Technology has a really important role here for teaching, learning and assessment, but also for tracking and monitoring around apprenticeships.

LE: So, with that, I encourage you to look around, chat to our exhibitors, craft the programme that is right for you. And to kick that off here is some of the brilliant work you have been up to. [we are watching a video – this should be shared on today’s hashtag #digifest17]
And with that, our session ended. For the next few hours I will mainly be on our stand but also sitting in on Martin Hamilton’s session “Loving the alien: robots and AI in education” – look out for a few tweets from me and many more from the official live tweeter for the session, @estherbarrett.

Plenary and keynote from Geoff Mulgan, chief executive, Nesta (host: Paul Feldman, chief executive, Jisc)

Paul Feldman: Welcome to Digifest 2017, and to our Stakeholder Meeting attendees who are joining us for this event. I am delighted to welcome Geoff Mulgan, chief executive of Nesta.

Geoff: Thank you all for being here. I work at Nesta. We are an investor in quite a few ed tech companies, and we run a lot of experiments in schools and universities… And I want to share with you two frustrations. The whole area of ed tech is, I think, one of the most exciting, perhaps ever! But the whole field is frustrating… In Britain we have phenomenal tech companies, and phenomenal universities high in the rankings… But too rarely do we bring these together, and we don’t see that vision from ministers either.

So, I’m going to talk about the promise – some of the things that are emerging and developing. I’ll talk about some of the pitfalls – some of the things that are going wrong. And some of the possibilities of where things could go.

So, first of all, the promise. We are going through yet another wave – or series of waves – of Google, Watson, DeepMind, Fitbits, sensors… We are at least 50 years into the “digital revolution” and yet the pace of change isn’t letting up – Moore’s Law still applies. So, finding the applications is as exciting and challenging as ever.

Last year DeepMind defeated a champion of Go. People thought it was impossible for a machine to win at Go, because of the intuition involved. That cutting edge technology is now being used in London with blood test data to predict who may be admitted to hospital in the next year.

We have also seen these free online bitesize platforms – Coursera, Udacity, etc. – these challenges to traditional courses. And we have Google Translate, which in November 2016 adopted a neural machine translation engine that can translate whole sentences… Google Translate may be a little clunky still but we are moving toward that Hitchhiker’s Guide to the Galaxy idea of the Babel fish. In January 2017 a machine-learning powered poker bot outcompeted 20 of the world’s best. We are seeing more of these events… The Go contest was watched by 280 million people!

Much of this technology is feeding into this emerging Ed Tech market. There are MOOCs, there are learning analytics tools, there is a huge range of technologies. The UK does well here… When you talk about education you have to talk about technology, not just bricks and mortar. This is a golden age but there are also some things not going as they should be…

So, the pitfalls. There is a lack of understanding of what works. NESTA did a review 3 years ago of school technologies and that was quite negative in terms of return on investment. And the OECD similarly compared spend with learning outcomes and found a negative correlation. One of the odd things about this market is that it has invested very little in using control groups, and gathering the evidence.

And where is the learning about learning? When the first MOOCs appeared I thought it was extraordinary that they showed little interest in decades of knowledge and understanding about elearning, distance learning, online learning. They just shared materials. It’s not just the cognitive elements – you need peers, you need someone to talk to. There is a common finding over decades that you need that combination of peer and social elements and content – that’s one of the reasons I like FutureLearn, as it combines them more directly.

The other thing that is missing is the business models. Few ed tech companies make money… They haven’t looked at who will pay, how much they should pay… And I think that reflects, to an extent, the world view of computer scientists…

And I think that, business model wise, some of the possibilities are quite alarming. Right now many of the digital tools we use are based on collecting our data – the advertiser is the customer, you are the product. And I think some of our ed tech providers, having failed to raise income from students, are somewhat moving in that direction. We are also seeing household data, the internet of things, and my guess is that the impact of these will raise much more awareness of privacy, security, and use of data.

The other thing is jobs and future jobs. Some of you will have seen these analyses of jobs and the impact of computerisation. Looking over the last 15 years we’ve seen big shifts here… Technical and professional knowledge has been relatively well protected. But there is also a study (Frey, C. and Osborne, M., 2013) that looks at those at low risk of computerisation and automation – dentists are safe! – and those at high risk, which includes estate agents, accountants, but also actors and performers. We see huge change here. In the US one of the most common jobs in some areas is truck driver – they are at high risk here.

We are doing work with Pearson to look at job market requirements – this will be published in a few months’ time – to help educators prepare students for this world. The jobs likely to grow are around creativity, social intelligence, also dexterity – walking over uneven ground, fine manual skills. If you combine those skills with deep knowledge of technology, or specialised fields, you should be well placed. But we don’t see schools and universities shaping their curricula to these types of needs. Is there a conscious effort to look ahead and to think about what 16-22 year olds should be doing now to be well placed in the future?

In terms of more positive possibilities… Some of those I see coming into view… One of these is Skills Route, which was launched for teenagers. It’s an open data set which generates a data driven guide for teenagers about which subjects to study, allowing them to see what jobs they might get, what income they might attract, how happy they will be even, depending on their subject choices. These insights are driven by data, including understanding of what jobs may be there in 10 years’ time. Students may have a better idea of what they need than many of their teachers, their lecturers etc.

We are also seeing a growth of adaptive learning. We are an investor in CogBooks, which is a great example. This is a game changer in terms of how education happens. The way the AI is built makes it easier for students to have materials adapt to their needs, to their styles.

My colleagues are working with big cities in England, including Birmingham, to establish Offices of Data Analytics (and data marketplaces), which can enable understanding of e.g. buildings at risk of fire that can be mitigated before fire fighting is needed. I think there are, again, huge opportunities for education. Get into conversations with cities and towns, to use the data commons – which we have but aren’t (yet) using to the full extent of its potential.

We are doing a project called Arloesiadur in Wales which is turning big data into policy action. This allows policy makers in the Welsh Government to have a rich real time picture of what is taking place in the economy, including network analyses of investors and researchers, to help understand emerging fields and targets for new investment and support. This turns the hit and miss craft skill of investment into something more accurate, more data driven. Indeed work on the complexity of the economy shows that economic complexity maps to higher average annual earnings. This goes against some of the smart cities expectation – which wants to create more homogenous environments. Instead diversity and complexity are beneficial.

We host at Nesta the “Alliance for Useful Evidence” which includes a network of around 200 people trying to ensure evidence is used and useful. Out of that we have a series of “What Works” centres – NICE (health and care); Education Endowment Foundation; Early Intervention Foundation; Centre for Ageing Better; College of Policing (crime reduction); Centre for Local Economic Growth; What Works Well-being… But bizarrely we don’t have one of these for education and universities. These centres help organisations to understand where evidence for particular approaches exists.

To try and fill the gap a bit for universities we’ve worked internationally with the Innovation Growth Lab to understand investment in research, and what actually works. This means applying scientific methods to areas on the boundaries of the university. In many ways our current environment does very little of that.

The other side of this is the issue of creativity. In China the principal of one university felt it wasn’t enough for students to be strong in engineering; they needed to solve problems. So we worked with them to create programmes for students to create new work, addressing problems and questions without existing answers. There are comparable programmes elsewhere – students facing challenges and problems, not starting with the knowledge. It’s part of the solution… And some work like this can work really well. At Harvard students are working with local authorities and there is a lot of creative collaboration across ages, experience, approaches. In the UK there isn’t any university doing this at serious scale, and I think this community can have a role here…

So, what to lobby for? I’ve worked a lot with government – we’ve worked with about 40 governments across the world – and I’ve seen vice chancellors and principals who have access to government, and they usually lobby for something that looks like the present – small changes. I have never seen them lobby for substantial change, for more connection with industry, for investment and ambition at the very top. The leaders argue for the needs of the past, not the present. That isn’t true in other industries: they look ahead, and make that central to their case. I think that’s part of why we don’t see this coming together in an act of ambition like we saw in the 1960s when the Open University was founded.

So, to end…

Google Tilt Brush is one of the most interesting things to emerge in the last few years – a 3D virtual world that allows you to paint with a virtual brush. It is exciting as no-one knows how to do this. It’s exciting because it is uncharted territory. It will be, I think, a powerful learning tool. It’s a way to experiment and learn…

But the other side of the coin… The British public’s favourite painting is The Fighting Temeraire… An ugly steamboat pulls in a beautiful old sailing ship to be smashed up. It is about technological change… But also about why change is hard. The old ship is more beautiful, tied up with woodwork and carpentry skills, culture, songs… There is a real poetry… But its message is that if we don’t go through that, we don’t create space for the new. We are too attached to the old models to let them go – especially the leaders who came through those old models. We need to create those Google Tilt Brushes, but we also have to create space for the new to breathe as well.


Q1 – Amber Thomas, Warwick) Thinking about the use of technology in universities… There is research on technology in education and I think you point to a disconnect between the big challenges from research councils and how research is disseminated, a disconnect between policy and practice, and a lack of availability of information to practitioners. But also I wanted to say that BECTA used to have some of that role for experimentation and that went in the “bonfire of the quangos”. And what should Jisc’s role be here?

A1) There is all of this research taking place but it is often not used. That emphasis on “Useful Evidence” is important. Academics are not always good at this… What will enable a busy head teacher, a busy tutor, to actually understand and use that evidence? There are some spaces for education at schools level but there is a gap for universities. BECTA was a loss. There is a lack of Ed Tech strategy. There is real potential. To give an example… We have been working with finance, forcing banks to open up data, with banks required by the regulator to fund creative use of that data to help small firms understand their finances. That’s a very different role for the regulator… But I’d like to see institutions willing to do more of that.

A1 – PF) And I would say we are quietly activist.

Q2) To go back to the Hitchhikers Guide issue… Are we too timid in universities?

A2) There is a really interesting history of radical universities – some with no lectures, some with no walls, in Paris a short-lived experiment handing out degrees to strangers on buses! Some were totally student driven. My feeling is that that won’t work; it’s like music and you need some structure, some grammars… I like challenge driven universities as they aren’t *that* groundbreaking… You have some structure and content, you have interdisciplinary teams, you have assessment there… It is a space for experimentation. You need some systematic experimentation on the boundaries… Some creative laboratories on the edge to inform the centre, with some of that quite radical. And I think that we lack those… Things like the Coventry SONAR (?) course for photography which allowed input from the outside, a totally open course including discussion and community… But those sorts of experiments tend not to be in a structure… And I’d like to see systematic experimentation.

Q3 – David White, UAL) When you put up your ed tech slide, a lot of students wouldn’t recognise that as they use lots of free tools – Google etc. Maybe your old warship is actually the market…

A3) That’s a really difficult question. In any institution of any sense, students will make use of the cornucopia of free things – Google Hangouts and YouTube. That’s probably why the Ed Tech industry struggles so much – people are used to free things. Google isn’t free – you indirectly pay through the sale of your data, as with Facebook. Wikipedia is free but philanthropically funded. I don’t know if that model of Google etc. can continue as we become more aware of data and data use concerns. We don’t know where the future is going… We’ve just started a new project with Barcelona and Amsterdam around the idea of the Data Commons, which doesn’t depend on sale of data to advertisers etc., but that faces the issue of who will pay. My guess is that the free data-based model may last up to 10 years, but then something will change…

How can technology help us meet the needs of a wider range of learners?

Pleasing Most of the People Most of the Time – Julia Taylor, subject specialist (accessibility and inclusion), Jisc.

I want to tell you a story about buying LEGO for a young child… My kids loved LEGO and it’s changed a lot since then… I bought a child this pack with lots of little LEGO people with lots of little hats… And this child just sort of left all the people on the carpet, because they wanted the LEGO people to choose their own hats and toys… And that was disappointing… I use that example because there is an important role in helping individuals find the right tools. The ultimate goal of digital skills and inclusion is about giving people the skills and confidence to use the appropriate tools. The idea is that the technology magically turns into tools…

We’ve never had more tools for giving people independence… But what is the potential of technology, and how can it be selected and used? We’ll hear more about delivery and use of technology in this context. But I want to talk about what technology is capable of delivering…

Technology gives us the tools for digital diversity, allowing the student to be independent in how they access and engage with our content. That kind of collaboration can be as meaningful in an international context as it is for learners who have to fit studies around, say, shift work. It allows learners to do things the way they want to do them. That idea of independent study through digital technology is really important. So these tools afford digital skills; the tools remove barriers and/or enable students to overcome them. Technology allows learners with different needs to overcome challenges – perhaps of physical disability, perhaps remote location, perhaps little free time. Technology can help people take those small steps to start or continue their education. It’s as much about that as those big global conversations.

It is also the case that technology can be a real motivator and attraction for some students. And the technology can be about overcoming a small step, dealing with potential intimidation at new technology, through to much more radical forms that keep people engaged… So when you have tools aimed at the larger end of the scale, you also enable people at the smaller end of the scale. Students do have expectations, and some are involved in technology as a lifestyle, as a lifeline that supports their independence… They are using apps and tools to run their lives. That is the direction of travel with people, and with young people in particular. Technology is an embedded part of their lives. And we should work with that; people should perhaps even be encouraged to use more technology, to depend on it more. Many of us in this room won’t have met a young visually impaired person who doesn’t have an iPhone, as those devices allow them to read, to engage, to access their learning materials. Technology is a lifeline here. That’s one example, but there are others… Autistic students may be using an app like “Brain in Hand” to help them engage with travel, with people, with education. We should encourage this use, and we do encourage this use of technology.

We encourage learners to check if they can:

  • Personalise and customise the learning environment
  • Get text books in alternative formats – that they can adapt and adjust as they need
  • Find out about the access features of loan devices and platforms – there are features built into the devices and platforms you use and require students to use. How much do you know about the accessibility of the learning platforms that you buy into?
  • Get accessible course notes in advance of lectures – notes that can be navigated and adapted easily, taking away unnecessary barriers. Ensuring documents are accessible for the maximum number of people.
  • Use productivity tools and personal devices everywhere – many people respond well to text to speech, it’s useful for visually impaired students, but also for dyslexic students too.

Now we encourage organisations to make their work accessible to the most people possible. For instance a free and available text to speech tool provides technology that we know works for some learners, across the wide range of learners. That helps those with real needs, but will also benefit other learners, including some who would never disclose a challenge or disability.

So, when you think about technology, think about how you can reach the widest possible range of learners. This should be part of course design, staff development… All areas should include accessible and inclusive technologies.

And I want you now to think about the people and infrastructure required and involved in these types of decisions…  So I have some examples here about change…

What would you need to do to enable a change in practice like this learner statement:

“Usually I hate fieldwork. I’m disorganised, make illegible notes, can’t make sense of the data because we’ve only got little bits of the picture until the evening write up…” 

This student isn’t benefitting from the fieldwork until the information is all brought together. The teacher dealt with this by combining data, information, etc. on the learner’s phone, including QR codes to help them learn… That had an impact and the student continues:

“But this was easy – Google forms. Twitter hashtags. Everything on the phone. To check a technique we scanned the QR code to watch videos. I felt like a proper biologist… not just a rubbish notetaker.”

In another example a student who didn’t want to speak in a group was able to use a Text Wall to enable their participation in a way that worked for them.

In another case a student didn’t want to blog, but it was compulsory in their course. Then the student discovered they could use voice recognition in Google Docs, and how to do podcasts and link them in… That option was available to everyone.

Comment: We are a sixth form college. We have a student who is severely dyslexic and he really struggled with classwork. Using voice recognition software has been transformative for that student and now they are achieving the grades and achievements they should have been.

So, what is needed to make this stuff happen? How can we make it easy for change to be made… Is inclusion part of your student induction? It’s hard to gauge from the room how much of this is endemic in your organisations. You need to think about how far down the road you are, and what else needs to be done so that the majority of learners can access podcasts, productivity tools, etc.

[And with that we are moving to discussion.]

It’s great to hear you all talking, and I thought it might be useful to finish by asking you to share some of the good things that are taking place…

Comment: We have an accessibility unit – a central unit – and that unit provides workshops on technologies for all of the institution, and we promote those heavily in all student inductions. Also I wanted to say that note taking sometimes is the skill that students need…

JT: I was thinking someone would say that! But I wanted to make the point that we should be providing these tools and communicating that they are available… There are things we can do but it requires us to understand what technology can do to lower the barrier, and to engage staff properly. Everyone needs to be able to use and promote technology for use…

The marker by which we are all judged is the success of our students. Technology must be inclusive for that to work.

You can find more resources here:

  • Chat at Todaysmeet.com/DF1734
  • Jisc A&I Offer: TinyURL.com/hw28e42
  • Survey: TinyURL.com/jd8tb5q

How can technology help us meet the needs of a wider range of learners? – Mike Sharples, Institute of Educational Technology, The Open University / FutureLearn

I wanted to start with the idea of accessibility and inclusion. As you may already know, the Open University was established in the 1970s to open up university to a wider range of learners… In 1970 19% of our students hadn’t been to university before; now it’s 90%. We’re rather pleased with that! As a diverse and inclusive university, accessibility and inclusivity are essential. As we move towards more interactive courses, we have to work hard to make fieldtrips accessible to people who are not mobile, to ensure all of our astronomy students have access to telescopes, etc.

So, how do we do this? The learning has to be future orientated, suited to what learners will need in the future. I like the kinds of jobs you see on Careers 2030 – Organic Voltaics Engineer, Data Wrangler, Robot Counsellor – the kinds of work roles that may be there in the future. At the same time as looking to the future we need to think about what it means to be in a “post truth era” – with accessibility of materials, and access to the educational process too. We need a global open education.

So, FutureLearn is a separate but wholly owned company of the Open University. There are 5.6 million learners, 400 free courses. We have 70 partner institutions, with 70% of learners from outside the UK, 61% are female, and 22% have had no other tertiary education.

When we came to build FutureLearn we had a pretty blank slate. We had EdX and similar but they weren’t based on any particular pedagogy – built around extending the lectures, and around personalised quizzes etc. As we set up FutureLearn we wanted to encourage a social constructivist model, and the idea of “Learning as Conversation”, based on the idea that all learning is based on conversation – with ourselves, with our teachers and their expertise, and with other learners, to try and reach shared understanding. And that’s the brief our software engineers took on. We wanted it to be scalable, for every piece of content to have conversation around it – so that rather than sending you to forums, the conversation sat with the content. And also the idea of peer review, of study groups, etc.

So, for example, the University of Auckland have a course on Logical and Critical thinking. Linked to a video introducing the course is a conversation, and that conversation includes facilitative mentors… And engagement there is throughout the conversation… Our participants have a huge range of backgrounds and locations and that’s part of the conversation you are joining.

Now, 2012 was the year of the MOOC, but MOOCs are becoming embedded, and they need to be taken seriously as part of campus activities, as part of blended learning. In 2009 the US Department of Education undertook a major meta-study comparing online and face to face teaching in higher education. On average students in online learning conditions performed better than those receiving face to face teaching, but those undertaking a blend of campus and online did better still.

So, we are starting to blend campus and online, with campus students accessing MOOCs, with projects and activities that follow up MOOCs, and we now have the idea of hybrid courses. For example FutureLearn has just offered a full postgraduate course with Deakin University. MOOCs are no longer far away from campus learning; they are blending together in new ways of accessing content and accessing conversation. And it’s the flexibility of study that is so important here. There are also new modes of learning (e.g. flipped learning), as well as global access to higher education, including free courses, global conversation and knowledge sharing. There is the idea of credit transfer and a broader curriculum enabled by that. And the concept of disaggregation – affordable education, pay for use? At the OU only about a third of our students use the tutoring they are entitled to, so perhaps only those that use tutoring should pay.

As Geoff Mulgan said we do lack evidence – though that is happening. But we also really need new learning platforms that will support free as well as accredited courses, that enables accreditation, credit transfer, badging, etc.


Q1) How do you ensure the quality of the content on your platform?

A1) There are a couple of ways… One was in our selective choice of which universities (and other organisations) we work with. So that offers some credibility and assurance. The other way is through the content team who advise every partner, every course, who creates content for FutureLearn. And there are quite a few quality standards – quite a lot of people on FutureLearn came from the BBC and they come with a very clear idea of quality – there is diversity of the offer but the quality is good.

Q2) What percentage of FutureLearn learners “complete” the course?

A2) In general it’s about 15-20%. Those 15% or so have opportunities they wouldn’t otherwise have had. We’ve also done research on who drops out and why… Most (95%) say “it’s not you, it’s me”. Some of those are personal and quite emotional reasons. But mainly life has just gotten in the way and they want to return. Of the remaining 5%, about half felt the course wasn’t at quite the right level for them; the other half just didn’t enjoy the platform, it wasn’t right for them.

So, now over to you to discuss…

  1. What pedagogy, ways of doing teaching and learning, would you bring in?
  2. What evidence? What would constitute success in terms of teaching and learning?


Comments: MOOCs are quite different from modules and programmes of study… Perhaps there is a branching off… More freestyle learning… The learner gets value from whatever paths they go through…

Comments: SLICCs at Edinburgh enable students to design their own module, reflecting and graded against core criteria, but in a project of their own shaping. [read more here]

Comments: Adaptive learning can be a solution to that freestyle learning process… That allows branching off, the algorithm to learn from the learners… There is also the possibility to break a course down to smallest components and build on that.

I want to focus a moment on technology… Is there something that we need?

Comments: We ran a survey of our students about technologies… Overwhelmingly our students wanted their course materials available, they weren’t that excited by e.g. social media.

Let me tell you a bit about what we do at the Open University… We run lots of courses, each looks different, and we have a good picture of retention, student satisfaction, exam scores. We find that overwhelmingly students like content – video, text and a little bit of interactivity. But students are retained more if they engage in collaborative learning. In terms of student outcomes… The lowest outcomes are for courses that are content heavy… There is a big mismatch between what students like and what they do best with.

Comment: There is some research on learning games that also shows satisfaction at the time doesn’t always map to attainment… Stretching our students is effective, but it’s uncomfortable.

Julia Taylor: Please do get in touch if you have more feedback or comments on this.

Feb 22 2017

This afternoon I am delighted to be at the Inaugural Lecture of Prof. Jonathan Silvertown from the School of Biological Sciences here at the University of Edinburgh.

Vice Chancellor Tim O’Shea is introducing Jonathan, who is Professor of Evolutionary Ecology and Chair in Technology Enhanced Science Education, and who came to Edinburgh from the Open University.

Now to Jonathan:

Imagine an entire city turned into an interactive learning environment. Where you can learn about the birds in the trees, the rock beneath your feet. And not just learn about them, but contribute back to citizen science, to research taking place in and about the city. I refer to A City of Learning… As it happens Robert Louis Stevenson used to do something similar, carrying two books in his pocket: one for reading, one for writing. That’s the idea here. Why do this in Edinburgh? We have the most fantastic history, culture and place.

Edinburgh has an incredible history of enlightenment, and The Enlightenment. Indeed it was said that you could, at one point, stand on the High Street and shake the hands of 50 men of genius. On the High Street now you can shake Hume (his statue) by the toe, and I shall risk quoting him: “There is nothing to be learned from a professor which is not to be met with in books”. Others you might have met then include Joseph Black, and also James Hutton, known as the “father of modern geology”. He walked up along the crags to a section now known as “Hutton’s Section” (an unconformity to geologists) where he noted sandstone, and above it volcanic rock. He interpreted this as showing that rocks accumulate by ongoing processes that can be observed now. That’s science. You can work out what happened in the past by understanding what is happening now. And from that he concluded that the earth was far older than the 6,000 years Bishop Ussher had calculated. In his book The Theory of the Earth he coined this phrase: “No vestige of a beginning, no prospect of an end”. And that supported the emerging idea of evolutionary biology, which requires a long history to work. That all happened in Edinburgh.

Edinburgh also has a wealth of culture. It is (in the New Town) a UNESCO World Heritage site. Edinburgh has the Fringe Festival, the International Festival, the Book Festival, the Jazz Festival… And then there is the rich literary heritage of Edinburgh – as J.K. Rowling says, “It’s impossible to live in Edinburgh without sensing its literary heritage”. Indeed if you walk in the Meadows you will see a wall painting celebrating The Prime of Miss Jean Brodie. And you can explore this heritage yourself through the LitLong website and app. LitLong text-mined thousands of books against a gazetteer of Edinburgh places, extracting 40,000 snippets of text associated with pinpoints on the map. And you can do this on an app on your phone. Edinburgh is an extraordinary place for all sorts of reasons…

And a place has to be mapped. When you think of maps these days, you tend to think of Google. But I have something better… OpenStreetMap is to a map what Wikipedia is to the Encyclopedia Britannica. When my wife and I moved into a house in Edinburgh it wasn’t on Ordnance Survey, wasn’t on Google Maps, but was almost immediately on OpenStreetMap. It’s Open because there are no restrictions on use, so we can use it in our work. Not all cities are so blessed… Geographic misconceptions are legion: if you look at one of the maps in the British Library you will see the Cable and Wireless Great Circle Map – a map that is both out of date and prescient. It is old and outdated but does display the cable and wireless links across the world… The UK isn’t the centre of the globe as this map shows; wherever you are standing is the centre of the globe now. And Edinburgh is international. At last year’s Edinburgh Festival the Deep Time event projected the words “Welcome, World” just after the EU Referendum. Edinburgh is a global city, and the University of Edinburgh is a global university.

Before we go any further I want to clarify what I mean by learning when I talk about making a city of learning… For Kolb (1984) it is “how we transform experience into knowledge” – learning by discovery. And, wearing my evolutionary hat, it’s a major process of human adaptation. Kolb’s learning cycle takes us from Experience, to Reflect (observe), Conceptualise (ideas), Experiment (test), and back to Experience. It is of course also the process of scientific discovery.

So, let’s apply that cycle of learning to iSpot, to show that experiential learning and discovery in action, and the extraordinary things it can do. iSpot is designed to crowdsource the identification of organisms (see Silvertown, Harvey, Greenwood, Dodd, Rosewell, Rebelo, Ansine, McConway 2015). If I see “a white bird” it’s not that exciting, but if I know it’s a Kittiwake then that’s interesting – has it been seen before? Are they nesting elsewhere? You can learn more from that. So you observe an organism, you reflect, you start to get comments from others.

So, we have over 60,000 registered users of iSpot, 685k observations, 1.3 million photos, and we have identified over 30,000 species. There are many, many stories contained within that, but I will share just one. This observation came in from South Africa. It was a picture of some seeds with a note: “some children in Zululand just ate some of these seeds and are really ill”. Thirty-five seconds later someone thousands of miles away in Cape Town identified the plant, and others agreed on the identification. And the next day the doctor who posted the image replied to say that the children were ok, but that it happens a lot and knowing what plant the seeds were from helps them to do something. It wasn’t what we set this up to do but that’s a great thing to happen…

So, I take forward to this city of learning the lessons of a borderless community; the virtuous circle of learning, which empowers and engages people to find out more; and encouraging repurposing – letting people use the space as they want and need (we have added extra functions to support that over time in iSpot).

Learning and discovery lends itself to research… So I will show you two projects demonstrating this, which give us lessons to take forward into Edinburgh City of Learning. EvolutionMegalab.org was created at the Open University to mark Darwin’s double centenary in 2009, but we also wanted to show that evolution is happening right now in your own garden… The snails in your garden have colours and banding patterns, and these have known genetic patterns… And we know about evolution in the field: we know what conditions favour which snails. So, we asked the public to help us test hypotheses about the snails. We had about 10,000 populations of snails recorded, half of which were there already, half of which were contributed by citizens over a single year. We had seen, over the last 50 years, an increase in yellow-shelled snails, which do not warm up too quickly. We would expect brown snails further north, yellow snails further south. So was that correct? Yes and no. There was an increase in sand dunes, but not elsewhere. But we also saw a change in banding patterns, and we didn’t know why… So we went back to pre-Megalab data and found the same trend there too – it just hadn’t previously been looked for.

Lessons from Megalab included that all can contribute, that it must be about real science and real questions, and that data quality matters. If you are ingenious about how you design your project, then all people can engage and contribute.

Third project, briefly: this is Treezilla, the monster map of trees – which we started in 2014 just before I came here – and the idea is that we have a map of the identity, size and location of trees and, with that, we can start to look at the ecosystem impact of these trees: they capture carbon, they can ameliorate floods… And luckily my colleague Mike Dodd spotted some software that could be used to make this happen. So one of the lessons here is that you should build on existing systems, building projects on top of projects, rather than having to build everything at the same time.

So, this is the Edinburgh Living Lab, a collaboration between Schools, and the kinds of projects they do include bike counters and traffic – visualised and analysed – which gives the Council information on traffic in a really immediate way that can allow them to take action. This set of projects around the Living Lab really highlighted the importance of students being let loose on data, on ideas around the city. The lessons here are that we should be addressing real-world problems, that public engagement is an important part of this, and that we are no longer interdisciplinary, we are “post-disciplinary” – as is much of the wider world of work, and these skills will go with these students from the Living Lab, for instance.

And so to Edinburgh Cityscope, a project with synergy across learning, research and engagement. Edinburgh Cityscope is NOT an app, it is an infrastructure. It is the stuff out of which other apps and projects will be built.

So, the first thing we had to do was make Cityscope future-proof. When we built iSpot the iPhone hadn’t been heard of; now maybe 40% of you here have one. And we’ve probably already had peak iPhone. We don’t know what will be used in 5 years’ time. But there are aspects people will always need… They will need data. What kinds of data? For synergy and place we need maps. And maps can have layers – you can relate the nitrogen dioxide to traffic, you can compare the trees… So Edinburgh Cityscope is mappable. And you need a way to bring these things together: you need a workbench. Right now that includes Jupyter, but we are not locked in, so we can change in future if we want to. And we have our data and our code open on GitHub. And then finally you need to have a presentation layer – a place to disseminate what we do to our students and colleagues, and what they have done.
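The layered-maps idea lends itself to a tiny sketch. Everything below is invented for illustration (the area codes, the figures, the function name) – it is just one way you might join two data layers in a Cityscope-style notebook, not the project’s actual code:

```python
# Two invented city data layers keyed by area, combined so one can be
# read against the other - e.g. NO2 readings alongside traffic counts.
no2_layer = {"EH1": 41.2, "EH3": 28.7, "EH8": 35.0}       # ug/m3, invented
traffic_layer = {"EH1": 12500, "EH3": 6400, "EH8": 9100}  # vehicles/day, invented

def overlay(*layers):
    """Join layers on their shared keys, like stacking map layers."""
    shared = set(layers[0])
    for layer in layers[1:]:
        shared &= set(layer)  # keep only areas present in every layer
    return {key: tuple(layer[key] for layer in layers) for key in sorted(shared)}

combined = overlay(no2_layer, traffic_layer)
for area, (no2, traffic) in combined.items():
    print(f"{area}: NO2 {no2} ug/m3 alongside {traffic} vehicles/day")
```

The point of the sketch is only the shape of the idea: each layer stays independent, and the “map” is whatever join you choose to make between them.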

So, in the last six months we’ve made progress on data – using the Scottish Government open data portal we have lung cancer registrations that can be mapped and changes seen. We can compare and investigate, and our students can do that. We have the SIMD (Scottish Index of Multiple Deprivation) map… I won’t show you a comparison as it has hardly changed in decades – one area has been in poverty since around 1900. My colleague Lesley McAra is working in public engagement, with colleagues here, to engage in ways that make this better, that make changes.

The workbench has been built. It isn’t pretty yet… You can press a button to create a Notebook. You can send your data to a phone app – pulling data from Cityscope and showing it in an app. You can start a new tour blog – which anybody can do. And you can create a survey to gather new information…

So let me introduce one of these apps. Curious Edinburgh is an app that allows you to learn about the history of science in Edinburgh, to explore the city. The genius idea – and I can say genius because I didn’t build it, Niki and the folks at EDINA did – is that you can create a tour from a blog. You fill in forms, essentially. And there is an app which you can download for iOS, and a test version for Android – the full one coming for the Edinburgh International Science Festival in April. Because this is an Edinburgh Cityscope project I’ve been able to use the same technology to create a tour of the botanical gardens for use in my teaching. We used to give out paper; now we have this app we can use in teaching, and in new ways… And I think this will be very popular.

And the other app we have is Fieldtrip, a survey tool borrowed from EDINA’s FieldTrip Open. And that allows anyone to set up a data collection form – for research, for social data, for whatever. It is already open, but we are integrating this all into Edinburgh Cityscope.

So, this seems a good moment to talk about the funding for this work. We have had sizable funding from Information Services. The AHRC has funded some of the Curious Edinburgh work, and the ESRC have funded work, a small part of which Edinburgh Cityscope will use in building the community.

So, what next? We are piloting Cityscope with students – in the Festival of Creative Learning this week, in Informatics. And then we want to reach out to form a community of practice, including schools, community groups and citizens. And we want to connect with cultural institutions and industry – we are already working with the National Museum of Scotland. And we want to interface with the Internet of Things – anything with a chip in it, really. You can interact with your heating system from anywhere in the world – that’s the internet of things, things connected to the web. And I’m keen on creating an Internet of Living Things. The Atlas of Living Scotland displays all the biological data of Scotland on the map. But data gets out of date. It would be better if it updated in real time. So my friend Kate Jones from UCL is working with Intel on real-time data from bats – allowing real-time data to be captured through connected sensors. And also in that space Graham Stone (Edinburgh) is working on a project called Edinburgh Living Landscape, which is about connecting up green spaces and improving biodiversity…

So, I think what we should be going for is for recognition of Edinburgh as the First UNESCO City of Learning. Edinburgh was the first UNESCO City of Literature and the people who did that are around, we can make our case for our status as City of Learning in much the same way.

So that’s pretty much the end. Nothing like this happens without lots and lots of help. So a big thanks here to Edinburgh Cityscope’s steering group and the many people in Information Services who have been actually building it.

And the final words are written for me: Four Quartets, T.S. Eliot:

“We shall not cease from exploration

And the end of all our exploring 

Will be to arrive where we started

And know the place for the first time”

Nov 24 2016

This morning I’m at the Edinburgh Tourism Action Group’s Digital Solutions for Tourism Conference 2016. Why am I along? Well, EDINA has been doing some really interesting cultural heritage projects for years, particularly Curious Edinburgh – the history of science tours app – and our citizen science apps for COBWEB and FieldTrip Open, which are used by visitors to locations, not just residents. And of course services like the Statistical Accounts of Scotland, which attract loads of interest from tourists and visitors to Scotland. We are also looking at new mobile, geospatial, and creative projects so this seems like a great chance to hear what else is going on around tourism and tech in Edinburgh.

Introduction – James McVeigh, Head of Marketing and Innovation, Festivals Edinburgh

Welcome to our sixth Digital Solutions for Tourism Conference. In those last six years a huge amount has changed, and our programme reflects that, and will highlight much of the work in Edinburgh, but also picking up what is taking place in the wider world, and rolling out to the wider world.

So, we are in Edinburgh. The home of the world’s first commercially available mobile app – in 1999. And did you also know that Edinburgh is home to Europe’s largest tech incubator? Of course you do!

Welcome – Robin Worsnop, Rabbie’s Travel, Chair, ETAG

We’ve been running these for six years, and it’s a headline event in the programme we run across the city. In the past six years we’ve seen technology move from a business add-on to fundamental to what we do – for efficiency, for reach, for increased revenue, and for disruption. Reflecting that change, this event has grown in scope and popularity. In the last six years we’ve had about three and a half thousand people at these events. And we are always looking for new ideas for what you want to see here in future.

We are at the heart of the tech industry here too, with Codebase mentioned already, Skyscanner, and the School of Informatics at the University of Edinburgh, all of which attract people to the city. As a city we have free wifi around key cultural venues, on the buses, etc. It is more and more ubiquitous for our tourists to have access to free wifi. And technology is becoming more and more about how those visitors enhance their visit and experience of the city.

So, we have lots of fantastic speakers today, and I hope that you enjoy them and you take back lots of ideas and inspiration to take back to your businesses.

What is new in digital and what are the opportunities for tourism? – Brian Corcoran, Director, Turing Festival

There’s some big news for the tech scene in Edinburgh today: Skyscanner has been bought by a Chinese company for 1.5bn. And FanDuel just merged with its biggest rival last week. So huge things are happening.

So, I thought technology trends and bigger trends – macro trends – might be useful today. So I’ll be looking at this through the lens of the companies shaping the world.

Before I do that, a bit about me, I have a background in marketing and especially digital marketing. And I am director of the Turing Festival – the biggest technology festival in Scotland which takes place every August.

So… There are really two drivers of technology… (1) tech companies and (2) users. I’m going to focus on the tech companies primarily.

The big tech companies right now include: Uber, disrupting the transport space; Netflix – for streaming and content commissioning; Tesla – disrupting transport and energy usage; Buzzfeed – influential with a huge readership; Spotify – changing music and music payments; banking… No-one has yet disrupted banking but they will soon… Maybe just parts of banking… we shall see.

And no-one is influencing us more than the big five. Apple, mainly through the iPhone. I’ve been awaiting a new MacBook for five years… Apple are positioning PCs for top-end/power users, but also saying most users are not content producers, they are passive users – they want/expect us to move to iPads. It’s a mobile device (running iOS) and a real shift. The iPhone 7 got coverage for headphones etc., but the cameras didn’t get much discussion – it is basically set up for augmented reality with two cameras. AirPods – the cable-less headphones – are essentially a new wearable, like/after the Apple Watch. And we are also seeing Siri opening up.

Over at Google… Since Google’s inception the core has been search: the Google search index and ranking. And they are changing it for the first time ever, really… And building a new one… They are building a mobile-only search index. They aren’t just building it, they are prioritising it. Mobile is really the big tech trend. And in line with that we have their Pixel phone – a phone they are manufacturing themselves… That’s getting them back into hardware after their Google Glass misstep. And Google Assistant is another part of the Pixel phone – a Siri competitor… Another part of us interacting with phones, devices, data, etc. in a new way.

Microsoft is the one of the big five that some think shouldn’t be there… They have made some missteps… They missed the internet. They missed – and have written off – phones (and Nokia). But they have moved to Surface – another mobile device. They have shifted their emphasis from Windows to Microsoft 365. They bought LinkedIn for $26bn (in cash!). One way this could affect us… LinkedIn has all this amazing data… But it is terrible at monetising it. That will surely change. And then we have HoloLens – which means we may eventually have some mixed reality actually happening.

Next in the big five is Amazon. Some very interesting things there… We have Alexa – the digital assistant service. They have, as a device, the Echo – essentially a speaker and listening device for your home/hotel etc. Amazon will be in your home listening to you all the time… I’m not going to go there! And we have Amazon Prime… And also Prime Instant Video. Amazon is moving into television. Netflix and Amazon compete with each other, but more with traditional TV. And they are moving from ad income to subscriptions. It is interesting to think where TV ad spend will go – it’s about half of all ad spend.

And Facebook. They are at ad saturation risk, and pushing towards video ads. With that in mind they may also become the de facto TV platform. Do they have new editorial responsibility? With fake news etc., are they a tech company? Are they a media company? At the same time they are caving completely to Chinese state surveillance requests. And Facebook are trying to diversify their ecosystem so they continue to outlast their competitors – with Instagram, WhatsApp, Oculus, etc.

So, that’s a quick look at tech companies and what they are pushing towards. For us, as users, the big moves have been towards messaging – Line, WeChat, Messenger, WhatsApp, etc. These are huge. And that’s important if we are trying to reach the fabled millennials as our audience.

And then we have Snapchat. It’s really impenetrable for those over 30. They have 150 million daily active users, 1bn snaps daily, 10bn videos daily. They are the biggest competitor to Facebook and its ad revenue. They have also gone for wearables – in a cheeky, cool, upstart way.

So, we see 10 emergent patterns:

  1. Mobile is now *the* dominant consumer technology, eclipsing PCs. (Apple makes more from the iPhone than all their other products combined, it is the most successful single product in history).
  2. Voice is becoming an increasingly important UI. (And it is interesting how answers there connect to advertising).
  3. Wearables bring tech into ever-closer physical and psychological proximity to us. It’s now on our wrist, or face… Maybe soon it will be inside you…
  4. IoT is getting closer, driven by the intersection of mobile, wearables, APIs and voice UI. Particularly seeing this in smart home tech – switching the heat on away from home is real (and important – it’s -3 today), but we may get to that promised fridge that re-orders…
  5. Bricks and mortar retail is under threat, and although we have some fulfillment challenges, they will be fixed.
  6. Messaging marks a generational shift in communication preferences – asynchronous preferred.
  7. AR and VR will soon be commonplace in entertainment – other use cases will follow… But things can take time. The Apple Watch went from an unclear use case to clear health, sports, etc. use cases.
  8. Visual communications are replacing textual ones for millennials: Snapchat defines that.
  9. Media is increasingly in the hands of tech companies – TV ads will be disrupted (Netflix etc.)
  10. TV and ad revenue will move to Facebook, Snapchat etc.

What does this all mean?

Mobile is crucial:

  • Internet marketing in tourism now must be mobile-centric
  • Ignore Google’s mobile index at your peril
  • Local SEO is increasing in importance – that’s a big opportunity for small operators to get ahead.
  • Booking and payments must be designed for mobile – a hotel saying “please call us”, well Millennials will just say no.

It’s unclear where new opportunities will be, but they are coming. In Wearables we see things like twoee – wearable watches as key/bar tab etc. But we are moving to a more seamless place.

Augmented reality is enabling a whole new set of richer, previously unavailable interactive experiences. Pokemon Go has opened the door to location-based AR games. That means previously unexciting places can be made more engaging.

Connectivity though, that is also a threat. The more mobile and wearables become conduits to cloud services and IoT, the more the demand for free, flawless internet connectivity will grow.

Channels? Well, we’ve always needed to go where the market is. It’s easier to identify where they are now… But we need to adapt to customers’ behaviours and habits, and their preferences.

Moore’s law: overall processing power for computers will double every two years (Gordon Moore, Intel, 1965)… And I wonder if that may also be true for us too.
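As a rough worked example of what that doubling implies (my arithmetic, not the speaker’s):

```python
# Moore's observation as arithmetic: doubling every two years is growth
# by a factor of 2**(years / 2).
def moore_factor(years: float) -> float:
    return 2 ** (years / 2)

print(moore_factor(2))   # one doubling -> 2.0
print(moore_factor(10))  # over a decade, five doublings -> 32.0
```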

Shine the Light – Digital Sector

Each of these speakers have just five minutes…

Joshua Ryan-Saha, Skills Lead, The Data Lab – data for tourism

I am Skills Lead at The Data Lab, and I was previously looking at Smart Homes at Nesta. The Data Lab works to find ways that data can benefit business, can benefit Scotland, can benefit your work. So, what can data do for your organisation?

Well, personalised experiences… That means you could use shopping habits to predict, say, a hotel visitor’s preferences for snacks or cocktails etc. The best use I’ve seen of that is a museum using heart rate monitors to track experience, and areas of high interest. And as an exhibitor you can use phone data to see how visitors move around, what they see, etc.

You can also use data in successful marketing – Tripadvisor data being a big example here.

You can also use data in efficient operations – using data to ensure things are streamlined. Things like automatic ordering – my dentist did this.

What can data do for tourism in Scotland? Well, we did some work with Glasgow using Skyscanner data, footfall data, etc. to predict hotel occupancy rates, and with machine learning and further data that has become quite accurate over time. And as we start to predict those patterns we can work towards a seamless experience. At the moment our masters students are running a competition around business data and tourism – talk to me to be involved, as I think a hack in that space would be excellent.
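To make the occupancy-prediction idea concrete, here is a toy single-feature linear fit on invented numbers. The real Glasgow work combined several data sources (Skyscanner searches, footfall, etc.) with machine learning, so treat this only as a sketch of the principle, not the actual model:

```python
# Toy sketch: fit occupancy = intercept + slope * footfall by least squares.
# All figures are invented for illustration.
footfall = [5200, 6100, 4800, 7400, 6900, 5600]   # daily footfall counts
occupancy = [0.62, 0.71, 0.58, 0.84, 0.79, 0.66]  # hotel occupancy rates

n = len(footfall)
mean_x = sum(footfall) / n
mean_y = sum(occupancy) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(footfall, occupancy))
         / sum((x - mean_x) ** 2 for x in footfall))
intercept = mean_y - slope * mean_x

predicted = intercept + slope * 6500  # forecast for a new day's footfall
print(f"predicted occupancy: {predicted:.2f}")  # -> predicted occupancy: 0.75
```

With more signals (flight searches, events calendars, weather) the same idea becomes a multivariate model, which is presumably where the machine learning mentioned above comes in.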

What can the Data Lab do for you? Well, we fund work – around £70k per project, and also smaller funds. We run skills programmes, with masters and PhD students. And we have expertise – data scientists who can come in and work with you to sort out your organisation a bit. If you want to find out more, come and talk to me!

Brian Smillie, Beezer – app creation made affordable and easy

1 in 5 people own a smartphone, and desktop is a secondary touchpoint. The time people spend using mobile apps has increased 21% since last year. There are 1bn websites, but only 2 million apps. Why are businesses embracing mobile apps? Well, speed and convenience are key – an app enables one-click access. Users expect that. And apps can also reduce staff time on transactions, etc. An app allows building connection, building loyalty… Wouldn’t it be great to be able to access that? But the cost can be £10k or more for a single app. When I was running a digital agency in Australia I heard the same thing over and over again – that they had spent a small fortune and then no-one downloaded it. Beezer enables you to build an app in a few hours, without an app store, and it works on any platform. SMEs need a quick, cheap, accessible way to build apps, and right now Beezer are the only ones who do this…

Ben Hutton, XDesign – is a mobile website enough?

I’m Ben from XDesign – we build those expensive apps Brian was just talking about… A few years ago I was working on analytics of purchasing and ads… I was working on that Crazy Frog ad… We found the way to get people to download that ringtone was to batter them into submission, showing it again and again and again… And that approach has distorted mobile apps and their potential. But actually so has standardised paper… We are so used to A4 that it is the default digital size too… It was a good system for paper merchants in the 17th century. It has corrupted the ideas we have about apps… We think that apps are extensions of those battering/paper skillsets.

A mobile phone is a piece of engineering, software that sits in your pocket. It requires software engineers and designers who will ensure quality assurance, focused on that medium. We have this idea of the Gigabit Society… We have 4.5G, the rocket fuel for mobile… And it’s here in London, in Manchester, in Birmingham… It is coming… And to work with that we need to think about app design. It isn’t meant to be easy. You have to know how Google is changing, about in-app as well as app sales, you need to know deep linking. To build a successful app you need to understand that you don’t know what you are doing but you have to give it a try anyway… That’s how we got to the moon!

Chris Torres, Director, Senshi Digital – affordable video

We develop tourism brands online to get the most out of online sales. And I’ve been asked today to talk specifically about video. Video has become one of the best tools you can use in tourism. One of the reasons is that on your website or social media, if you use video your audience can learn about your offering 60,000 times faster than if they read your content.

The average user watches 32 videos per month; 79% of travellers search YouTube for travel ideas – and many of them don’t know where they are going. By 2018 video will be 84% of web traffic. And it can really engage people.

So what sort of video do we do? Well we do background video for homepages… That can get across the idea of a place, of what they will experience when they get to your tourism destination.

What else? Staff/tour guide videos are huge. We are doing this for Gray Line at the moment and that’s showing a big uptick in bookings. When people see a person in a video, then meet them at your venue, that’s a huge connection, very exciting.

We also have itinerary videos, what a customer can experience on a tour (again my example is Gray Line).

A cute way to do this is to get customers to supply video – and give them a free tour, or a perk – but get them to share their experiences.

And destination videos – it’s about the destination, not necessarily you, your brand, your work – just something that entices customers to your destination.

Video doesn’t need to be expensive. You can film on your iPhone. But also you can use stock supplies for video – you’ve no excuse not to use video!

Case Study – Global Treasure Apps and Historic Environment Scotland – Lorraine Sommerville and Noelia Martinez, Global Treasure Apps

Noelia: I am going to talk about the HES project with Edinburgh Castle, Edinburgh College and Young Scot. The project brought together young people and cultural heritage information. The process is a co-production process: collecting images, information, stories and histories of the space with the Global Treasure Apps, creating content. The students get an idea of how to create a real digital project for a real client. (Cue slick video outlining how this project worked.)

Noelia: So, the Global Treasure Apps are clue-driven trails, guiding visitors around visitor attractions. For this Edinburgh Castle project we had 20 young people split into 5 groups. They researched at college and drafted trails around the space. Then they went to the castle and used their own mobile devices to gather the digital assets. And we ended up with 5 trails for the castle that can be used. Then we went back to the college, uploaded content to our database, and set the trails live. Then we got ESOL students to test the trails, give feedback and update them.

Lorraine: Historic Environment Scotland were delighted with the project, as were Edinburgh College. We are really keen to expand this to other destinations, especially as we enter The Year of Young People 2018, for your visitors and destinations.

Apps that improve your productivity and improve your service – Gillian Jones, QikServe

Before I start I’m going to talk a wee bit about SnapChat… SnapChat started as a sexting app… And I heard about it from my mum – but she was using it for sharing images of her house renovation! And if she can use that tech in new ways, we all can!

I am from Edinburgh and nothing makes me happier than seeing a really diverse array of visitors coming to this city, and I think that the Skyscanner development will help that boom continue.

A few months ago I was in Stockholm. I walked out of the airport and saw a fleet of Teslas as their taxis. It was a premium, innovative thing to see. I’m not saying we should do that here; I’m saying the tourist experience starts from the moment they see the city, especially the moment that they arrive. And, in this day and age, if I was a guest coming to a restaurant, hotel, etc., what would I want? What would I see? It’s hard as a provider to put yourself in your customers’ shoes. How do we make tourists and guests feel welcome, feel able to find what they need? Where do we want to go and how do we get there? There is a language barrier. There is unfamiliar cuisine – and big pictorial menus aren’t always the most appealing solution.

So, “Francesco” has just flown to Edinburgh from Rome. He speaks little English but has the QikServe app, so he can see all the venues that use it. He’s impatient as he has a show to get to. He is in a rush… So he looks at a menu, in his native language, on his phone – and can actually find out what haggis or Cullen Skink is. And he is prompted there for wine, for other things he may want. He gets his food… And then he has trouble finding a waiter to pay. He wants to pay by Amex – a good example of the way people want to pay, but operators don’t want to take – but in the app he can pay. And then he can share his experience too. So, you have that local level… If they have a good experience you can capitalise on it. If they have a bad experience, you can address it quickly.

What is the benefit of this sort of system? Well, money for a start. Mobile is proven to drive up sales – I’ve ordered a steak, do I want a glass of red with that? Yeah, I probably do. So it can increase average transaction value. It can reduce pressure on staff during busy times, allowing them to concentrate on great service. That Starbucks app – the idea of ordering ahead and picking up – is normal now… You can also drive footfall by providing information in tourists’ native languages. And you can upsell, cross-sell and use insights for more targeted campaigns – more sophisticated than freebies, and more enticing. It is about convenience tailored to me. And you can keep your branding at the centre of the conversation, across multiple channels.

There are benefits for tourists here through greater convenience with reduced wait-times and queues; identifying their restaurant of choice and ordering in their native language and currency; finding and navigating to that restaurant with geo-location capabilities; ordering what you want, how you want it, with modifiers and upsell and cross-sell prompts in their native language – we are doing projects in the US with a large burger chain who are doing brilliantly because of extra cheese ordered through the app!; and easily sharing and recommending the experience through social media.

We work across the world but honestly nothing would make me happier than seeing us killing it in Edinburgh!

Virtual reality for tourism – Alexander Cole, Peekabu Studios

Thank you for having me along, especially in light of recent US events (Alex is American).

We’ve talked about mobile. But mobile isn’t one thing… There are phones, there have been robot sneakers, electronic photo frames, all sorts of things before that are now mixed up and part of our phones. And that’s what we are dealing with when it comes to VR. Screens, accelerometers, buttons have all been there for a while! But if I show you what VR looks like… Well… It’s not like an app or a film or something, it’s hard to show. You look like a dork using it…

VR is about…

Right now VR is a $90m industry (2014) but by 2018 we expect it to be at least $5.2bn, and 171m users – and those are really conservative estimates.

So, VR involves some sort of headset… Like an HTC Vive, or Oculus Rift, etc. They include an accelerometer to see where you are looking, tilting, turning. Some include additional sensors. A lot of these systems have additional controllers, that detect orientation, presses, etc. That means the VR knows where I am, where I’m looking, what I’m doing with my hands. It’s great, but this is top end. This is about a £1000 set up AND you need a PC to render and support all of this.

But this isn’t the only game in town… Google have the “Daydream” – a fabric-covered mobile phone headset with lenses. They also have the Google Cardboard. In both cases you have a phone, strap it in, and you have VR. But there are limitations… It doesn’t track your movement… But it gives you visuals, it tracks how you turn, and you can create content from your phone – like making photospheres – image and audio – when on holiday.

Capture is getting better, not just on devices. 360 degree cameras are now just a few hundred pounds, you can take it anywhere, it’s small and portable and that makes for pretty cool experiences. So, if you want to climb a tower (Alex is showing a vertigo-inducing Moscow Tower video), you can capture that, you can look down! You can look around. For tourism uses it’s like usual production – you bring a camera, and you go to a space, and you show what you would like, you just do it with a 360 degree camera. And you can share it on YouTube’s 360 video channel…

And with all of this tech together you can set up spaces where sensors are all around that properly track where you are and give much more immersive emotional experiences… Conveying emotion is what VR does better than anything when it is done well.

So, you can do this two ways… You can create content so that someone not in a particular physical space, can feel they are there. OR you can create a new space and experience that. It requires similar investment of time and effort. It’s much like video creation with a little more stitching together that is required.

So, for example this forthcoming space game with VR is beautiful. But that’s expensive. But for tourism the production can be more about filming – putting a camera in a particular place. And, increasingly, that’s live. But, note…

You still look like a ninny taking part! That’s a real challenge and consideration in terms of distribution, and how many people engage at the same time… But you can use that too – hence YouTube videos usually including both what’s on screen, and what’s going on (the ninny view). And now you have drones and drone races with VR used by the controller… That’s a vantage point you cannot get any other way. That is magical and the cost is not extortionate… You can take it further than this… You can put someone in a rig with wings, with fans, with scents, and with VR, so you can fly around and have a full sensory experience… This is stupid expensive… But it is the most awesome fun! It conveys a sense of doing that thing VR was always meant to do. When we talk about where VR is going… We have rollercoasters with VR – so you can see Superman flying around you. There are some on big elastic bands – NASA just launched one for the Mars landing.

So, tourism and VR is a really interesting marriage. You can convey a sense of place, without someone being there. Even through 360 degree video, YouTube 360 degree video… And you can distribute it in a more professional way for Vive, for Oculus Rift… And when you have a space set up, when you have all those sensors in a box… That’s a destination, that’s a thing you can get people to. There are theme park destination-like experiences. You can service thousands of people with one set up and one application.

So, the three E’s of VR: experience; exploration – you drive this; and emotion – nothing compares to VR for emotion. Watching people use VR for the first time is amazing… They have an amazing time!

But we can’t ignore the three A’s of VR: access – no one platform, and lots of access issues; affordability – the biggest most complex units are expensive, your customers won’t have one, but you can put it in your own space; applicability – when you have new tech you can end up treating everything as a nail for your shiny new hammer. Don’t have your honeymoon in VR. Make sure what you do works for the tech, for the space, for the audience’s interest.

Using Data and Digital for Market Intelligence for Destinations and Businesses – Michael Kessler, VP Global Sales, ReviewPro

I’m going to be talking about leveraging guest intelligence to deliver better experiences and drive revenue. And this isn’t about looking for “likes”, it’s about using data to improve revenue, to develop business.

So, for an example of this, we analysed 207k online reviews from 2016 year to date for 339 3*, 4* and 5* hotels in Glasgow and Edinburgh. We used the Global Review Index (GRI) – which we developed and is an industry-standard reputation score based on review data collected from 175+ OTAs and review sites in over 45 languages. To do that we normalise scores – everyone uses their own scale. From that data we see that Edinburgh’s 5* hotels have 90.2% satisfaction (86.4% in Glasgow), and we can see the variance by star rating (Glasgow does better for satisfaction at 3*).
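The scale normalisation step can be sketched roughly as follows – purely illustrative, since the actual GRI formula is proprietary; the linear rescaling and the equal weighting across review sources are my assumptions:

```python
# Illustrative only: the real Global Review Index is proprietary. This
# sketch shows the general idea of mapping review scores that arrive on
# different scales (5 stars, 10 points, percentages) onto one 0-100
# index, then averaging across sources.

def normalise(score, scale_max, scale_min=0):
    """Map a score from its native scale onto 0-100."""
    return 100 * (score - scale_min) / (scale_max - scale_min)

def review_index(reviews):
    """reviews: list of (score, scale_max) pairs from different sites."""
    normalised = [normalise(score, scale_max) for score, scale_max in reviews]
    return sum(normalised) / len(normalised)

# A hotel reviewed on a 5-star site, a 10-point site and a percentage site:
sample = [(4.5, 5), (8.6, 10), (90, 100)]
print(round(review_index(sample), 1))  # 88.7
```

A real index would also weight by review volume, recency and source reliability, but the normalisation idea is the same.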

You can explore satisfaction by traveler types – solo, couples, families, business. The needs are very different. For any destination or hotel this lets you optimise your business, to understand and improve what we do.

We run sentiment analysis, using machine learning, across reviews. We do this by review but also aggregate it so that you can highlight strengths and weaknesses in the data. We show you trends… You will understand many of these already, but seeing them lets you respond and react (e.g. Edinburgh gets great scores on Location, Staff, Reception; poorer scores on Internet, Bathroom, Technology. Glasgow gets great scores for Location, Staff, Reception; poorer scores for Internet, Bathroom, Room). We do this across 16 languages and this is really helpful.
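The aggregation step – per-category sentiment rolled up into strengths and weaknesses – might look something like this toy sketch. A real system would use trained models across 16 languages; the keyword lexicon and function names here are purely my illustration of the roll-up:

```python
# Toy aggregation of per-category sentiment across many review mentions.
# A keyword lexicon stands in for a real trained sentiment model.

from collections import defaultdict

POSITIVE = {"great", "friendly", "spotless", "fast"}
NEGATIVE = {"slow", "dirty", "broken", "rude"}

def score_mentions(mentions):
    """mentions: list of (category, text) pairs already extracted from
    reviews. Returns {category: % of classified mentions that are positive}."""
    counts = defaultdict(lambda: [0, 0])  # category -> [positive, total]
    for category, text in mentions:
        words = set(text.lower().split())
        if words & POSITIVE:
            counts[category][0] += 1
            counts[category][1] += 1
        elif words & NEGATIVE:
            counts[category][1] += 1
    return {c: 100 * pos / total for c, (pos, total) in counts.items() if total}

mentions = [
    ("Staff", "great friendly service"),
    ("Staff", "rude at reception"),
    ("Internet", "wifi was slow"),
    ("Internet", "slow connection in room"),
]
print(score_mentions(mentions))  # {'Staff': 50.0, 'Internet': 0.0}
```

Aggregates like these are what surface “Internet” as a weakness and “Staff” as a mixed area across a whole city’s reviews.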

We also highlight management response rates. So if guests post on TripAdvisor, you have to respond to them. You can respond and use as a marketing channel too. Looking across Edinburgh and Glasgow we can see a major variation between (high) response rates to TripAdvisor versus (low) response to Booking.com or Expedia.

The old focus of marketing was Product/Promotion/Price/Place. But that has changed forever. It’s all about experience now. That’s what we want. I think we have 4 Es instead of 4 Ps. So, those 4 Es are: Experience; Evangelism; Exchange; Everyplace. In the past I shared experience with friends and family, but now I evangelise, I share much more widely. And everyplace reflects sending reviews too – 60-70% of all reviews and feedback to accommodation is done via mobile. You can’t make better marketing than authentic feedback from guests, from customers.

And this need to measure traveller experience isn’t just about hotels/hostels/services apartments, it is also about restaurants; transportation; outdoor attractions; theme parks; museums; shopping. And those reviews have a solid impact on revenue – 92% of travelers indicate that their decisions are highly influenced by reviews and ratings.

So, how do we use all this data? Well there is a well refined cycle: Online reviews; we can have post-stay/event surveys; and in-stay surveys. Online reviews and post-stay surveys are a really good combination to understand what can be improved, where change can be made. And using that cycle you can get to a place of increased guest satisfaction, growth in review volume, improved online rankings (TripAdvisor privileges more frequently reviewed places for instance), and increased revenue.

And once you have this data, sharing it across the organisation has a huge positive value, to ensure the whole organisation is guest-centric in their thinking and practice.

So, we provide analytics and insights for each of your departments. So, for housekeeping, what reviews said about the room space; we can do semantic checking for mentions of cleanliness, “clean”, etc.

In-stay reviews also help reduce negative reviews – highlighting issues immediately, making the experience great whilst your guest is still there. And we have talked about travellers being mobile, but our solution is also mobile so that we can use it in all spaces.

How else can we use this? We can use it to increase economic development by better understanding our visitors. How do we do this? Well, for instance with Star Ratings Australia we have been benchmarking hotel performance across 5000+ hotels on a range of core KPIs. Greece (SETE) is a client of ours and we help them to understand how they as a country, as cities, as islands, compete with other places and cities across the world.

So our system works for anyone with attractions, guests, reviews, clients, where we can help. Operators can know guests – but that’s opinion. We try to enable decisions based on real information. That allows understanding of weaknesses and drives change. There is evidence that increasing your Global Review Index level will help you raise revenue. It also lets you refine your marketing message based on what you perform best at in your reviews, making a virtue of your strengths on your website, on TripAdvisor, etc.

And with reviews, do also share reviews on your own site – don’t just encourage them to go to TripAdvisor. Publishing reviews and ratings means your performance is shown without automatically requiring an indirect/fee-incurring link, and you keep them on your site. And you do need to increase review volume on key channels to keep your offering visible and well ranked.

So, what do we offer?

We have our guest intelligence system, with reputation management, guest surveys, revenue optimiser and data. All of these create actionable insights for a range of tourism providers – hotels, hostels, restaurants, businesses etc. We have webinars, content, and information that we share back with the community for free.

Tech Trends and the Tourism Sector

Two talks here…

Pokemon Go – Jo Paulson, Events and Experiences Manager, and Jon-Paul Orsi, Digital Manager, Edinburgh Zoo

Jon-Paul: As I think everyone knows, Pokemon Go appeared and, whether you liked it or not, it was really popular. So we wanted to work out what we could do. We are spread over a large site and that was great – loads of pokestops – but an issue too: one was in our blacksmith shop, another in our lion enclosure! So we quickly mapped the safe stops and made that available – and we only had a few issues there. By happy accident we also had some press coverage as one of the best places to find Pokemon – because a visitor happened to have found a rare Pokemon on our site.

With that attention we also decided to do some playful things with social media – making our panda a poke-cake; sharing shots of penguins and pokemon. And they were really well received.

Jo: Like many great ideas, we borrowed from other places for some of our events. Bristol Zoo had run some events and we borrowed ideas – with pokestops, pokedex charging points, and we had themed foods, temporary tattoos etc. We wanted to capitalise on the excitement so we had about a week and a half to do this. As usual we checked with keepers first, closing off areas where the animals could be negatively impacted.

Jon-Paul: In terms of marketing this, we asked staff to tell their friends… And we were blown away by how well that went. On August 4th we had 10k hits as they virally shared the details. We kind of marketed it by not marketing it publicly. It being a viral, secret, exciting thing worked well. We sold out in 2 hours and that took us hugely by surprise. Attendees found the event through social primarily – 69% through Facebook, 19% by word of mouth.

We didn’t have a great picture of demographics etc. Normally we struggle to get late teens, twenties, early thirties unless they are there as a couple or date. But actually here we saw loads of people in those age ranges.

Jo: We had two events, for both of which we kept the zoo open later than usual. Enclosures weren’t open – though you could see the animals. But it was a surreal event – very chatty, very engaged, and yet a lot of heads down without animal access. For the first event we gave away free tickets, but asked for donations (£5k) and sold out in 2 hours; for the second event we charged £5 in advance (£6500) and sold out in around a week. We are really pleased with that though; that all goes into our conservation work. If the popularity of Pokemon continues then we will likely run more of these as we reach the better weather/longer light again.

Rob Cawston, Interim Head of Digital Media, National Museum of Scotland – New Galleries and Interactive Exhibitions

One of the advantages of having a 7 year old son is that you can go to Pokemon Go events and I actually went to the second Zoo event which was amazing, if a little Black Mirror.

Here at the NMS we’ve just completed a major project opening 4 new fashion and design galleries, 6 new science and technology galleries, and a new piazza (or expanded pavement if you like). Those ten new galleries allow us to show 3000+ items – 75% of them for the first time in generations – but we also wanted to work out how to engage visitors in these attractions. So, in the new galleries we have 150+ interactive exhibits – some are big things like a kid-sized hamster wheel, hot air balloon, etc. But we also now have digital labels… This isn’t just having touch screens for the sake of it, it needed to add something new that enhances the visitor experience. We wanted to reveal new perspectives, to add fun and activity – including games in the gallery – and to provide new knowledge and learning.

We have done research on our audiences and they don’t just want more information – they have phones, they can google stuff, so they want more. And in fact the National Museum of Flight opened 2 new hangars and 30 new digital labels that let us trial some of our approaches with visitors first.

So, on those digital labels and interactives we have single stories, multiple chapters, bespoke interactives. These are on different sorts of screens, formats, etc. Now we are using pretty safe tech. We are based on the Umbraco platform, as is our main website. We set up a CMS with colours, text, video, etc. And that content is stored on particular PCs that send data to specific screens in the museums. There is so much content going into the museum, so we were able to prep all this stuff ahead of gallery opening, and without having to be in the gallery space whilst they finished installing items.
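The CMS-to-screen arrangement could be pictured as a simple mapping of gallery screens to prepared content bundles – an entirely hypothetical structure, not Umbraco’s actual API, just to show how content can be authored ahead of time and targeted at specific screens:

```python
# Hypothetical sketch of assigning pre-prepared label content to specific
# gallery screens, so everything can be authored before installation.

GALLERY_SCREENS = {
    # screen id     -> the PC that drives it and the content bundle it shows
    "fashion-01":  {"pc": "pc-f1", "content": "label-evening-gown"},
    "science-12":  {"pc": "pc-s3", "content": "game-balloon-flight"},
}

def content_for(screen_id):
    """Look up which content bundle a given screen should display."""
    entry = GALLERY_SCREENS.get(screen_id)
    return entry["content"] if entry else None

print(content_for("science-12"))  # game-balloon-flight
```

The point of a structure like this is that curators edit the content bundles in the CMS while the mapping stays fixed, so nothing needs doing in the gallery itself.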

We didn’t just put these in the gallery – we put them on the website too. Our games are there, and we know they are a major driver of traffic to the website. That multiple platform digital content includes 3D digital views of fashion; we have a game built with Aardman…

We have learned a lot from this. I don’t think we realised how much would be involved in creating this content, and I think we have created a new atmosphere of engagement. After this session do go and explore our new galleries, our new interactives, etc.

Wrap Up – James McVeigh, Festivals Edinburgh

I’m just going to do a few round ups. You’ve heard a lot today. We’ve got exhibitors who are right on your doorstep. We are trying to show you that digital is all around you, it’s right on your doorstep. I got a lot from this myself… I like that the zoo borrowed the ideas – we don’t always need to reinvent the wheel! The success of the Japanese economy is about adopting, not inventing.

Everything we have heard today is about UX, how audiences share, engage, and how they respond afterwards.

And as we finish I’d like to thank ETAG, to Digital Tourism Scotland, to Scottish Enterprise, and to the wider tourism industry in Edinburgh.

And finally, the next events are:

  • 29th November – Listening to our Visitors
  • 6th December – Running Social Media Campaigns
  • 26th January – ETAG Annual Conference

And with that we just have lunch, networking and demos of Bubbal and Hydra Research. Thanks to all from me for a really interesting event – lots of interesting insights into how tech is being used in Edinburgh tourism and where some of the most interesting potential is at the moment. 

Oct 082016

Today is the last day of the Association of Internet Researchers Conference 2016 – with a couple fewer sessions but I’ll be blogging throughout.

As usual this is a liveblog so corrections, additions, etc. are welcomed. 

PS-24: Rulemaking (Chair: Sandra Braman)

The DMCA Rulemaking and Digital Legal Vernaculars – Olivia G Conti, University of Wisconsin-Madison, United States of America

Apologies, I joined this session late so these notes miss the first few minutes of what seems to have been an excellent presentation from Olivia. The work she was presenting on – the John Deere DMCA case – is part of her PhD work on how lay communities feed into lawmaking. You can see a quick overview of the case on NPR All Tech Considered and a piece on the ruling at IP Watchdog. The DMCA is the Digital Millennium Copyright Act (1998). My notes start about half-way through Olivia’s talk…

Property and ownership claims are made of distinctly American values… Grounded in general ideals, evocations of the Bill of Rights. Or asking what Ben Franklin would say… Bringing in the idea of the DMCA as being contrary to the very foundations of the United States. Another theme was the idea that once you buy something you should be able to edit it as you like. Indeed a theme here is the idea of “tinkering as a liberatory endeavour”. And you see people claiming that it is a basic human right to make changes and tinker, to tweak your tractor (or whatever). Commentators are not trying to appeal to the nation state, they are trying to perform the state, to make rights claims to enact the rights of the citizen in a digital world.

So, John Deere made a statement that tractor buyers have an “implied license” to their tractor, they don’t own it outright. And that raised controversies as well.

So, the final register rule was that the farmers won: they could repair their own tractors.

But the vernacular legal formations allow us to see the tensions that arise between citizens and the rights holders. And that also raises interesting issues of citizenship – and of citizenship of the state versus citizenship of the digital world.

The Case of the Missing Fair Use: A Multilingual History & Analysis of Twitter’s Policy Documentation – Amy Johnson, MIT, United States of America

This paper looks at the multilingual history and analysis of Twitter’s policy documentation. Or policies as uneven scalar tools of power alignment. And this comes from the idea of thinking of Twitter as more than just one complete overarching platform. There is much research now on moderation, but understanding this type of policy allows you to understand some of the distributed nature of the platforms. Platforms draw lines when they decide which laws to transform into policies, and then again when they think about which policies to translate.

If you look across at a list of Twitter policies, there is an English language version. Of this list it is only the Fair Use policy and the Twitter API limits that appear only in English. The API policy makes some sense, but the Fair Use policy does not. And Fair Use only appears really late – in 2014. Twitter was set up in 2005, and many other policies come in in 2013… So what is going on?

So, here is the Twitter Fair Use Policy… Now, before I continue here, I want to say that this translation (and lack of it) for this policy is unusual. Generally all companies – not just tech companies – translate into the FIGS languages: French, Italian, German, Spanish. And Twitter does not do this. But this is in contrast to the translations of the platform itself. And I wanted to talk in particular about translations into Japanese and Arabic. Now the Japanese translation came about through collaboration with a company that gave Twitter opportunities to expand out into Japan. Arabic was not put in place until 2011, around the Arab Spring. And the translation wasn’t done by Twitter itself but by another organisation set up to do this. So you can see that there are other actors here playing into translations of platform and policies. So these iconic platforms are shaped in some unexpected ways.

So… I am not a lawyer but… Fair Use is a phenomenon that creates all sorts of internet lawyering. And typically there are four factors of fair use (Section 107 of US Copyright Act of 1976): purpose and character of use; nature of copyright work; amount and substantiality of portion used; effect of use on potential market for or value of copyright work. And this is very much an American law, from a legal-economic point of view. And the US is the only country that has Fair Use law.

Now there is a concept of “Fair Dealing” – mentioned in passing in Fair Use – which shares some characteristics. There are other countries with Fair Use law: Poland, Israel, South Korea… Well, they point to the English language version. What about Japanese, which has a rich reuse community on Twitter? It also points to the English policy.

So, policies are not equal in their policyness. But why does this matter? Because this is where rule of law starts to break down… And we cannot assume that the same policies apply universally; that can’t be assumed.

But what about parody? Why bring this up? Well parody is tied up with the idea of Fair Use and creative transformation. Comedy is protected Fair Use category. And Twitter has a rich seam of parody. And indeed, if you Google for the fair use policy, the “People also ask” section has as the first question: “What is a parody account”.

Whilst Fair Use wasn’t there as a policy until 2014, parody unofficially had a policy in 2009, an official one in 2010, updates, and another version in 2013 for the IPO. Biz Stone writes about lawyers, when he was at Google, saying about fake accounts “just say it is parody!” and the importance of parody. And indeed the parody policy has been translated much more widely than the Fair Use policy.

So, policies select bodies of law and align platforms to these bodies of law, in varying degree and depending on specific legitimation practices. Fair Use is strongly associated with US law, and embedding that in the translated policies aligns Twitter more to US law than they want to be. But parody has roots in free speech, and that is something that Twitter wishes to align itself with.

Visual Arts in Digital and Online Environments: Changing Copyright and Fair Use Practice among Institutions and Individuals – Patricia Aufderheide, Aram Sinnreich, American University, United States of America

Patricia: Aram and I have been working with the College Art Association and it brings together a wide range of professionals and practitioners in art across colleges in the US. They had a new code of conduct and we wanted to speak to them, a few months after that code of conduct was released, to see if that had changed practice and understanding. This is a group that use copyrighted work very widely. And indeed one-third of respondents avoid, abandon, or are delayed because of copyrighted work.

Aram: Four-fifths of CAA members use copyrighted materials in their work, but only one fifth employ fair use to do that – most always or usually seek permission. And of those that use fair use, there are some that always or usually use it. So there are real differences here. So, Fair Use is valued if you know about it and understand it… but a quarter of this group aren’t sure if Fair Use is useful or not. Now there is that code of conduct. There is also some use of Creative Commons and open licenses.

Of those that use copyrighted materials… 47% never use open licenses for their own work – there is a real reciprocity gap. Only 26% never use others’ openly licensed work, and only 10% never use others’ public domain work. Respondents value creative copying… 19 out of 20 CAA members think that creative appropriation can be “original”, and despite this group seeking permissions they also feel that creative appropriation shouldn’t necessarily require permission. This really points to an education gap within the community.

And 43% said that uncertainty about the law limits creativity. They think they would appropriate works more, they would publish more, they would share work online… These mirror fair use usage!

Patricia: We surveyed this group twice in 2013 and in 2016. Much stays the same but there have been changes… In 2016, 2/3rd have heard about the code, and a third have shared that information – with peers, in teaching, with colleagues. Their associations with the concept of Fair Use are very positive.

Aram: The good news is that code use does lead to change, even within 10 months of launch. This work was done to try and show how much impact a code of conduct has on understanding… And really there were dramatic differences here. From the 2016 data, those who are not aware of the code look a lot like those who are aware but have not used the code. But for those who use the code, there is a real difference… And more are using fair use.

Patricia: There is one thing we did outside of the survey… There have been dramatic changes in the field. A number of universities have changed journal policies to be default Fair Use – Yale, Duke, etc. There has been a lot of change in the field. Several museums have internally changed how they create and use their materials. So, we have learned that education matters – behaviour changes with knowledge confidence. Peer support matters and validates new knowledge. Institutional action, well publicized, matters. The newest are most likely to change quickly, but the most veteran are in the best position – it is important to have those influencers on board… And teachers need to bring this into their teaching practice.

Panel Q&A

Q1) How many are artists versus other roles?

A1 – Patricia) About 15% are artists, and they tend to be more positive towards fair use.

Q2) I was curious about changes that took place…

A2 – Aram) We couldn’t ask whether the code made you change your practice… But we could ask whether they had used fair use before and after…

Q3) You’ve made this code for the US CAA, have you shared that more widely…

A3 – Patricia) Many of the CAA members work internationally, but the effectiveness of this code in the US context is that it is about interpreting US Fair Use law – it is not a legal document but it has been reviewed by lawyers. But copyright is territorial which makes this less useful internationally as a document. If copyright was more straightforward, that would be great. There are rights of quotation elsewhere, there is fair dealing… And Canadian law looks more like Fair Use. But the US is very litigious so if something passes Fair Use checking, that’s pretty good elsewhere… But otherwise it is all quite territorial.

A3 – Aram) You can see in data we hold that international practitioners have quite different attitudes to American CAA members.

Q4) You talked about the code, and changes in practice. When I talk to filmmakers and documentary makers in Germany they were aware of Fair Use rights but didn’t use them as they are dependent on TV companies buy them and want every part of rights cleared… They don’t want to hurt relationships.

A4 – Patricia) We always do studies before changes and it is always about reputation and relationship concerns… Fair Use only applies if you can obtain the materials independently… But then the question may be whether rights holders will be pissed off next time you need to licence content. What everyone told me was that we can do this but it won’t make any difference…

Chair) I understand that, but that question is about use later on, and demonstration of rights clearance.

A4 – Patricia) This is where change in US errors and omissions insurance makes a difference – that protects them. The film and television makers code of conduct helped insurers engage and feel confident to provide that new type of insurance clause.

Q5) With US platforms, as someone in Norway, it can be hard to understand what you can and cannot access and use on, for instance, in YouTube. Also will algorithmic filtering processes of platforms take into account that they deal with content in different territories?

A5 – Aram) I have spoken to Google counsel about that issue of filtering by law – there is no difference there… But monitoring…

A5 – Amy) I have written about legal fictions before… They are useful for thinking about what a “reasonable person” – and that can be vulnerable by jury and location so writing that into policies helps to shape that.

A5 – Patricia) The jurisdiction is where you create, not where the work is from…

Q6) There is an indecency case in France which they want to try in French court, but Facebook wants it tried in US court. What might the impact on copyright be?

A6 – Aram) A great question but this type of jurisdictional law has been discussed for over 10 years without any clear conclusion.

A6 – Patricia) This is a European issue too – Germany has good exceptions and limitations, France has horrible exceptions and limitations. There is a real challenge for pan European law.

Q7) Did you look at all at the impact of advocacy groups who encouraged writing in/completion of replies on the DMCA? And was there any big difference between the farmers and car owners?

A7) There was a lot of discussion on the digital right to repair site, and that probably did have an impact. I did work on Net Neutrality before. But in any of those cases I take out boilerplate, and see what they add directly – but there is a whole other paper to be done on boilerplate texts and how they shape responses and the terms of additional comments. It wasn’t that easy to distinguish between farmers and car owners, but it was interesting how individuals established credibility. Farmers talked about the value of fixing their own equipment, of being independent, of a history of ownership. Car mechanics, by contrast, establish technical expertise.

Q8) As a follow up: farmers will have had a long debate over genetically modified seeds – and the right to tinker in different ways…

A8) I didn’t see that reflected in the comments, but there may well be a bigger issue around micromanagement of practices.

Q9) Olivia, I was wondering: you consider the rhetorical arguments of users, but what about the way the techniques and tactics they used are received on the other side… What are the effective tactics there, and where are the limits of the effectiveness of lay vernacular strategies?

A9) My goal was to see what frames of argument looked most effective. I think in the case of the John Deere DMCA case that wasn’t that conclusive. It can be really hard to separate the NGO from the individual – especially when NGOs submit huge collections of individual responses. I did a case study on non-consensual pornography that was more conclusive in terms of which strategies were effective. The discourses I look at don’t look like legal discourse, but I look at the tone and content people use. So, on revenge porn, the law doesn’t really reflect user practice for instance.

Q10) For Amy, I was wondering… Is the problem that Fair Use isn’t translated… Or the law behind that?

A10 – Amy) I think Twitter in particular have found themselves in a weird middle space… Then the exceptions wouldn’t come up. But having it in English is the odd piece. That policy seems to speak specifically to Americans… But you could argue they are trying to impose it (maybe that’s a bit too strong) on all English-speaking territories. On YouTube all of the policies are translated into the same languages, including Fair Use.

Q11) I’m fascinated by vernacular understanding, and then the experts who are in the round tables, who specialise in these areas. How do you see vernacular discourse used in more closed/smaller settings?

A11 – Olivia) I haven’t been able to take this up as so many of those spaces are opaque. But in the 2012 rulemaking there were some direct quotes from remixers. And there was a suggestion around DVD use that people should videotape the TV screen… and that seemed unreasonably onerous…

Chair) Do you foresee a next stage where you get to be in those rooms and do more on that?

A11 – Olivia) I’d love to do some ethnographic studies, to get more involved.

A11 – Patricia) I was in Washington for the DMCA hearings and those are some of the most fun things I go to. I know that the documentary filmmakers have complained about the cost of participating… But a technician from the industry gave 30 minutes of evidence on the 40 technical steps to handle analogue film pieces of information… And to show that it’s not actually broadcast quality. It made them gasp. It was devastating and very visual information, and they cited it in their ruling… And similarly in the John Deere case the car technicians made an impact. By contrast a teacher came in to explain why copying material was important for teaching, but she brought neither people nor evidence of what the difference is in the classroom.

Q12) I have an interesting case if anyone wants to look at it, around Wikipedia’s Fair Use issues around multimedia. Volunteers are pre-emptively being stricter as they don’t want lawyers to come in on that… And there are the Wikipedia policies there. There is also automation through bots to delete content without a clear Fair Use exception.

A12 – Arem) I’ve seen Fair Use misappropriated on Wikipedia… Copyright images used at low resolution and claimed as Fair Use…

A12 – Patricia) Wikimania has all these people who don’t want to deal with law on copyright at all! Wikimedia lawyers are in a really difficult position.

Intersections of Technology and Place (panel): Erika Polson, University of Denver, United States of America; Rowan Wilken, Swinburne Institute for Social Research, Australia; Germaine Halegoua, University of Kansas, United States of America; Bryce Renninger, Rutgers University, United States of America; Adrienne Russell, University of Denver, United States of America (Chair: Jessica Lingel)

Traces of our passage: Locative media and the capture of place data – Rowan Wilken

This is a small part of a book that I’m working on. And I am looking at how technologies are geolocating us… In space, in time, but moreso the ways that they reveal our complex socio-technical context through place. And I’m seeing this from an anthropological point of view, of places as having particular…

José van Dijck, in her work on social media business models, talks about the use of “location intelligence” as part of the social media ecosystem and economic system.

I want to focus particularly on FourSquare… It has changed significantly since its repositioning in 2014, and those changes in its own app and the Swarm app seek to generate real-time and even predictive recommendations. They do this by combining social data/social graph and location/Places Graph data. People are nodes, with edges of proximity, co-location, etc. And in the Places Graph the places are nodes; the edges are menus, recommendations, etc. So they have these two graphs, and the engineers seek to understand: “What are the underlying properties and dynamics of these networks? How can we predict new connections? How do we measure influence?”. Their work now builds up this rich database of places and data around them.
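As a rough illustration only – this is not Foursquare’s actual data model or algorithms, and the check-in data and names are invented – the two-graph idea Wilken describes (people linked by co-location; places linked to people through check-ins, enabling prediction of new connections) can be sketched like this:

```python
# Toy sketch of a "people graph" built from co-location at places,
# plus a naive recommendation based on it. Purely illustrative.
from collections import defaultdict

# Hypothetical check-ins: (person, place) pairs.
checkins = [
    ("ana", "cafe"), ("ana", "park"),
    ("ben", "cafe"), ("ben", "museum"),
    ("cli", "park"),
]

# Index places to the people who checked in there.
place_to_people = defaultdict(set)
for person, place in checkins:
    place_to_people[place].add(person)

# People graph: an edge between any two people who share a place.
people_edges = set()
for people in place_to_people.values():
    for a in people:
        for b in people:
            if a < b:  # avoid duplicates and self-edges
                people_edges.add((a, b))

def recommend(person):
    """Suggest places visited by co-located contacts but not by `person`."""
    visited = {pl for p, pl in checkins if p == person}
    contacts = {b if a == person else a
                for a, b in people_edges if person in (a, b)}
    return sorted({pl for p, pl in checkins
                   if p in contacts and pl not in visited})

print(people_edges)
print(recommend("ana"))  # ben (co-located at the cafe) has been to the museum
```

Real systems of this kind would weight edges by frequency and recency and learn from far richer signals, but the basic shape – two graphs, with recommendations read off the links between them – is the same.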

And these changes have led to new repositioning… This has seen FourSquare selling advertising through predictive analysis… A second service, called Pinpoint, allows marketers to target users of FourSquare… and users beyond FourSquare. This is done through GPS locations, finding patterns and tracking shopping and dining routes…

In the last part of this talk I want to turn to Tim Ingold’s work. For Ingold our perception of place is less about the bird’s-eye view of maps and more about the walked and experienced route, based on the course of moving about in it, of ambulatory knowing. This is perceptual wayfinding: less about co-ordinates, more about situating position in the context of moving, of what one knows about routing and moving.

So, my contention is that it’s wayfinding and mapping, not map making or map use, that are primarily of use and interest to these social platforms going forward. Ingold talks about how new maps come from replacement and changes over time… I think that is no longer the case, as what is of interest to companies like Foursquare is the digital trace of our passage, not the map itself.

“We know that right now we are not funky”: Placemaking practices in smart cities – Germaine Halegoua, University of Kansas

I am looking at attempts to use underused urban spaces, based on interviews with planners, architects, developers, about how they were developing these spaces – often on reclaimed land or infill – and about what makes them special and unique.

Placemaking is almost always defined as a bottom-up process, often linked to home or making somewhere feel like home… But theories of placemaking are less often thought of as strategic – think of Kirkpatrick, or Le Corbusier – and the idea that these are spaces for dominant players: military, powerful people. So in these urban settings strategic placemaking connects to powerful people, connected and valued around these international players.

I wanted to look at the differences between the planning behind these spaces and smart cities versus the lived experiences and processes. Smart cities are about urbanist imaginaries: sustainable urbanism – everything is LEED certified!; technoscientific urbanism – data capture is built in, and data and technology are thought of as progressive solutions to our problems; and urban triumphalism (Brenner & Schmid 2015). These smart cities are purported as visionary designs, coming from the modern needs of people… Taking the best of global cities around the world, with named locations and designs coming in as fragments from other places. Digital media are used to show that this place works, as a place for ideas, a place to get things done… That they are like campus-based communities, like Silicon Valley, a better place than before…

There is this statistic that 70% of all people live in cities, and growing… But cities are seen as dumb, problematic, in need of updating… They need order, and smart cities are seen as a solution. There is an ordered view of the city as a lab – a showroom and demonstration space as well as a petri dish for transforming technology. And these are cities built of systems on top of systems – literally (Le Corbusier-like but with a flowing, soft aesthetic) – bringing things together. So, in Songdo you see this range of services in the space. And in TechCity we see apps and connectedness within the home… Smart cities monitor traffic through centralised systems, and monitor biosigns, climate, etc… But the green spaces, or sustainable urbanism, are about getting you to live and linger… So you have this odd mixture of not spending time in the streets, and these green spaces to linger in…

But these are quite cold spaces… Vacancies are extremely high. They are seen as artificial. The quote in my talk title is from a developer who feels that the solution is to bring in some funk… To programme serendipity into people’s lives… The answer is always more technology…

So a few themes here… There is the People Problem: attracting people to the place – it’s not “funky”; placing people within the union of technology and physical design – the claim is that tech puts people first and serves the needs of the end user… but there is also a sense of people as “bugs”. And I am producing all this data that isn’t about my experience of the city, but which shapes that experience.

Geo-social media and the quest for place on-the-go – Erika Polson, University of Denver

This is coming out of my latest book, a multi-site ethnographic project. In the recent work I have developed an idea of digital place making… And this has been about how location technology can be used to shape the space of mobile people.

Expatriation was previously a post-WWII experience, and a family affair… Often those assignments failed, sometimes because one partner (often female) couldn’t work. So, as corporations try to globalise there is a move to send younger, single assignees to replace families – they are cheaper and easier to relocate, more used to the idea of a global professional life, and enthusiastic.

And we don’t just see people moving once, we see serial global workers… The international experience can be seen as “a global lifestyle is seen as attractive and exciting” (Anne-Meike Fechter 2009(?)) but that may not reflect reality. There can be deep feelings of loneliness, the experience doesn’t match expectations, they miss out on families, they lack social connections and possibilities to socialise. Margaret Malewski writes in Generation Expatriot (2005) about how there can be an increasing dependency on friends at home, and the need for these expatriates to get out and meet people…

So, my work is based on a range of meetup apps, from Grindr and Tinder, to MeetUp, InterNations and (less of my focus) CouchSurfing… Tools to build connections and find each other. I have studied use of these apps in Paris, Bangalore and Singapore. So this image is of a cafe in Paris full of people – the first meetup that I went to; it was intimidating to walk into, but immediately someone approached… And I started to think about digital place-making about two months into the Paris experience, when a friend wanted to meet for dinner and I was at a MeetUp, and he was floored by his discomfort with talking to a bar full of strangers in Paris – he’s a local guy, he speaks perfect English, he’s very sociable… On any other night he would have owned the space, but he was thrown by these expats making the space their own, through Meetup, through their profiles, through a discourse of “who we are” and the pre-articulation of expectations and norms.

This made me think about the idea of Place and the feelings of belonging and place attachment (Coulthard and Ledema 2009), about shared meanings of place. We’ve seen lots of work on online world and how to create that sense of place, of attachment, or shared meaning.

So, if everyone is able to drop in and feel part of a place… And if professionals can do this, who else can? So, I’m excited to hear the next paper, on Grindr. But it’s interesting to think about who is out-of-place, about the quality of place and place relations. And even as these people maintain this positive narrative of working globally, there is also a feeling of following a common template or script. And there are problems with place-on-the-go for social commitments and community building… A willingness to meet up again, to drop in, rather than create anything.

Grindr – Bryce Renninger, Rutgers University, United States of America

I work on open government issues, and the site of my work is Grindr – a location-based, mainly male, mainly gay and bi casual dating space. Where I am starting from is the idea that Grindr is killing the gay bar (or gayborhood, or the gay resort town), which is part of the gay press narrative – for instance articles on the Pines neighbourhood of Fire Island from New York Magazine, quoting Ghaziani, author of There Goes the Gayborhood?, that having the app means they don’t need Boystown any more… And I think this narrative comes from concerns about valuing or not valuing these gay towns, resorts and bars, and about the willingness to defend those spaces. Bumgarner (2013) argues that the app does the same thing as the bar… But that assumes that the bar/place is only there to introduce people to each other for a narrow range of purposes…

And my way of thinking about this is to think of technologies in democratic ways… Sclove talks about design criteria for democratic technologies, mainly to do with local labour and contribution, but this can be overlaid on social factors as well. And I think there is a space for democratically deliberating sex publics. Michael Warner responds to Andrew Sullivan by problematising his idea that “normal” is the place for queer people to exist. There are also authors writing on design in public sex spaces as a way to improve health outcomes.

The founder of Grindr says it isn’t killing the gay bar, and indeed provides a platform for them to advertise on. And a quote here on how it is used shows the wide range of uses of Grindr (beyond the obvious). I don’t think Ghaziani’s writing talks enough about what the gayborhoods and LGBT spaces are, how they can be class- and race-exclusive, fitting into the gentrification of public spaces… And on that I recommend Christina Lagazzi’s book.

One of the things I want to do with this work is to think about how narratives in which platforms play a part can be written about and spoken about in ways that allow challenges to popular discourses of technological disruption. The idea that technological disruption is exciting is prevalent, and we aren’t doing enough to challenge it. The Airbnb billboard campaign – a kind of “Fuck You” to the San Francisco authorities and the legal changes to limit their business – is a reminder that we can respond to disruption…

I’m out of time, but I think we need to think critically about the social roles of technology and how technological organisations figure into that… And to acknowledge ethnography and the press.

Defining space through activism (and journalism): the Paris climate summit – Adrienne Russell, University of Denver

I’ve been working with researchers around the world on the G8 and climate summits for around ten years, and the coverage around them. I’ve been looking at activists and how they liven up the spaces where meetings take place…

But let me start with an image of Black Lives Matter protestors, from the Daily Mail, commenting on protestors using mobile phones. It exemplifies the idea that being on your phone means that you are not fully present… If they are on their phones, they aren’t that serious. This fits a long-running type of coverage of protests that seems to suggest that in-person protests are more effective and authentic than social media. Although our literature shows that it is both approaches in combination that are most effective. And then there is the issue of official versus unofficial action. Activists at the 2015 Paris summit were especially reliant on online work as protests were banned, public spaces were closed, and activists were placed under house arrest… So they had been preparing for years but their action was restricted.

So, one of the ways that protestors took action was through tools like Climate Games, a real-time game which enabled you to see real-time photography, but also to highlight surveillance… It was non-violent, but called police and law enforcement “team blue”, and lobbyists and greenwashers “team grey”!

Probably many of you saw the posters across Paris – mocking corporate ad campaigns – e.g. a VW ad saying “we are sorry we got caught”. So you saw these really interesting alternative narratives and interpretations. There was also a hostel called Place to B which became a de facto media centre for protestors, with interviews being given throughout the event. There was a hub of artists who raised issues faced in their own countries. And outside the city there was a venue which held a mock trial of Exxon vs the People, with prominent campaigners from across the globe; this was on the heels of releases showing Exxon had evidence of climate change twenty years back and ignored it. This mock trial made a real media event.

So all these events helped create an alternative narrative. And that crackdown on protest reflects how we are coming to understand this type of top-down event… And resistance through media and counter narratives to mainstream media running predictable official lines.

Panel Q&A
Q1) I have a question, maybe a pushback to you Germaine… Or maybe not… Who are the “they” you are talking about… You talk about city planners… I admire the critique so I want to know who “they” are, and should we problematise that, especially in contemporary smart cities discourses…
A1 – Germaine) It’s Cisco, Siemens, IBM… Those with smart cities labs… Those are the “they”. And I’ve seen the networking of the experts – it is always the same people… The language is really specific and consistent. Everyone is using this term “solutions”… This is the language used to talk about the problems… So “they” are transnational, often US-based tech corporations with in-house smart cities labs.
Q1) But “they” are also in meetings across the world with lots of different stakeholders – including those people, but others are there too. It looks like you are pulling from corporate discourses… Have you traced how that translates to everyday city planners, who host and meet at conferences and events… and how that plays out and is adopted…
A1 – Germaine) The furthest I’ve gone with this is to CIOs and city planners… But it’s a really interesting question…
Q1) I think it would be interesting and a direction we need to take… How discourses played out and adopted.
Q2) So I was wanting to follow up that question by asking about the role of governments and funders. In the UK right now there is a big push from Government to engage in smart cities, and that offers local authorities a source of capital income that they are keen to take, but then they need providers to deliver that work and are turning to these private sector players…
A2) The cities I have looked at claim no vacancies, or very low vacancy rates… Of the need to build more units because all are already sold. Some are dormitories for international schools… That lack of join-up between the ownership and real estate narrative and lived experience really stands out. In Kansas City they are retrofitting as a smart city, and taking on that discourse of efficiency and cost effectiveness…
Q3) How do narratives here fit versus what we used to have as the Cultural Cities narrative…. Who is pushing this? It’s not the same people from civil society perhaps?
A3 – Erika) When I was in Singapore I had this sense of an almost sterile environment. And I learned that the red light district had been cleaned up – they moved the transvestites and sex workers out… People thought it was too boring… And they started hiring women to dress as men dressed as women to liven it up…
Q4 – Germaine) I wanted to ask about the discourse around the gaybourhood and where they come from…
A4 – Bryce) I think there are particular stakeholders… So one of the articles I showed was about the closure of one of the oldest gay bars in New York, and the idea that Grindr caused that – but someone pointed out in the comments that actually real estate prices are the issue. And there is also the change that came from Mayor Giuliani wanting Christopher Street to be more consistent with the rest of New York…
Q5) I was wondering how that location data and tracking data from Rowan’s paper connects with Smart Cities work…
A5 – Germaine) That idea of tracing is common, but the idea of relational space, whilst there, doesn’t really work as it isn’t made yet… There isn’t sufficient density of people to do that… They need the people for that data. In the social media layer it’s relatively invisible, but it’s there… And there really is something connected there.
A5 – Rowan) With the move to Pinpoint technology at FourSquare, they may be interested in smart cities… But quite a lot of the critiques I’ve read say it’s just about consumption… I’m tired of that… I think they are trying to do something more interesting, to get at the complexity of everyday life… In Melbourne there was a planned development called Docklands… There is nothing there on Foursquare…
A5 – Erika) I am surprised that they aren’t hiring people to be people…
A5 – Rowan) I was thinking about that William Gibson comment about street signs. One of the things about Docklands was that it had high technology and good connections but low population so it did become a centre for crime.
Q6) I work with low income/low socio-economic groups. How are people ensuring that those communities are part of smart cities, or that their interests are voiced?
A6 – Germaine) In Kansas City Google wired neighbourhoods, but that also raised issues around neighbourhoods that were not reached… And that came from activists. Cable wasn’t fitted for poor and middle-income communities, but data centres were located in them. You also see small mesh and line-of-sight networks emerging as a counter-measure in some neighbourhoods. In that place it was activists and the press… But in Kansas City it is being picked up as a story.
A6 – Rowan) In my field Jordan Frith does great work on this area, particularly on issues of monolingualism and how that excludes communities.
A6 – Erika) Tim Cresswell does really interesting work in this space… As I’ve thought about place and whose place a particular space is, I’ve been thinking about activists and police in the US. That would be interesting to look at.
A6 – Adrienne) People who use Tor, who resist surveillance, are well off and tech savvy, almost exclusively…
PS-32: Power (chair: Lina Dencik)
Lina: We have another #allfemalepanel for you! On power. 
The Terms Of Service Of Online Protest – Stefania Milan, University of Amsterdam, The Netherlands.
This is part of a bigger project which is slowly approaching book stage, so I won’t sum everything up here but I will give an overview of the theoretical position.
So, one of our starting points is the materiality and broker role of semiotechnologies, and particularly the mediation of social media and the ways that materiality contributes here. I am a sociologist and I’m looking at change. I have been accused of being a techno-determinist… Yes, to an extent – I play with this. And I am working from the perspective that the algorithmically mediated environment of social media has the ability to create change.
I look at a micro level and meso level, looking at interactions between individuals and how that makes differences. Collective action is a social construct – the result of interactions between social actors (Melucci 1996) – not a huge surprise. Organisation is a communicative and expressive activity, with a centrality of sense-making activities (i.e. how people make sense of what they do). Meaning construction is embedded here. That shouldn’t be a surprise either. Media tech and the internet are not just tools but both metaphors and enablers of a new configuration of collective action: cloud protesting. That’s a term I stick with – despite much criticism – as I like the contradiction that it captures… the direct, on-the-ground, individual, and the large, opaque, inaccessible.
So, one feature of “cloud protesting” is the cloud as an “imagined online space” where resources are stored. In social movements there is something important there around shared resources. In this case resources are soft resources – information and meaning-making resources. Resources are the “ingredients” of mobilisation. Cyberspace gives these soft resources an (immaterial) body.

The cloud is a metaphor for organisational forms… And I relate that back to organisational forms of the 1960s, and to later movements, and now the idea of the cloud protest. The cloud is also an analogy for individualisation – many of the nodes are individuals, who reject pre-packaged, non-negotiable identities and organisations. The cloud is a platform where the movement’s resources can be stored… But a cloud movement does not require commitment and can be quite hard to activate and mobilise.

Collective identity, in these spaces, has some particular aspects. The “cloud” is an enabler, and you can identify “we” and “them”. But social media spaces overly emphasise visibility over collective identity.

The consequences of the materiality of social media are seen in four mechanisms: the centrality of performance; interpellation of fellows and opponents; expansion of the temporality of the protest; and reproducibility of social action. Now much of that enables new forms of collective action… But there are both positive and negative aspects. Something I won’t mention here is surveillance and its consequences for collective action.

So, what’s the role of social media? Social media act as intermediaries, enabling speed in protest organisation and diffusion – shaping and constraining collective action too. The cloud is grounded in everyday technology – everyone has it right in his/her pocket. The cloud has the power to deeply influence not only the nature of the protest but also the tactics. Social media enables the creation of a customisable narrative.

Hate Speech and Social Media Platforms – Eugenia Siapera, Paloma Viejo Otero, Dublin City University, Ireland

Eugenia: My narrative is also not hugely positive. We wanted to look at how social media platforms themselves understand, regulate and manage hate speech on their platforms. We did this through an analysis of terms of service. And we did in-depth interviews with key informants – Facebook, Twitter, and YouTube. These platforms are happy to talk to researchers but not to be quoted. We have permission from Facebook and Twitter. YouTube have told us to re-record interviews with lawyers and PR people present.

So, we had three analytical steps – starting with what constitutes hate speech.

We found that there is no use of definitions of hate speech based on law. Instead they put in reporting mechanisms and use that to determine what is/is not hate speech.

Now, we spoke to people from Twitter and Facebook (indeed a number of staff members move from one to the other). The tactic at Facebook was to make rules about what will be taken down (and what won’t), hiring teams to apply them, and then to help ensure the rules are appropriate. Twitter took a similar approach. So, the definition largely comes from what users report as hate speech rather than from external definitions or understandings.

We had assumed that the content would be both manually and algorithmically assessed, but actually reports are reviewed by real people. Facebook have four teams across the world. There are native speakers – to ensure that they understand context – and they prioritise self-harm and some other categories.

Platforms are reactively rather than proactively positioned. Take-downs are not based on the number of reports. Hate speech is considered in context – a compromising selfie of a young woman in the UK isn’t hate speech… unless in India, where that may impact on marriage (see Hot Girls of Mumbai – in that case they didn’t take it down on that basis but did remove it directly with the …). And if in doubt they keep the content up.

Twitter talked about a reluctance to share information with law enforcement, being protective of users, protective of freedom of speech. They are not keen to remove someone; they would prefer counter-arguments. And there are also tensions created by different local regulations and the global operations of the platforms – tension that is resolved by compromise (not the case for YouTube).

A Twitter employee talked about the challenges of meeting with representatives from government, where there is tension between legislation and commercial needs, and the need for consistent handling.

There is also a tension in the principled stance assumed by social media corporations that sends the user to block and protect themselves first – a focus on safety, security and personal responsibility. And they want users to feel happy and secure.

Some conclusions… Social media corporations are increasingly acquiring state-like powers. Users are conditioned to behave in ways conforming to social media corporations’ liberal ideology. Posts are “rewarded” by staying online, but only if they conform to social media corporations’ version of what constitutes acceptable speech.

#YesAllWomen (have a collective story to tell): Feminist hashtags and the intersection of personal narratives, networked publics, and intimate citizenship – Jacqueline Ryan Vickery, University of North Texas, United States of America

The original idea here was to think about second-wave feminism and the idea of sharing personal stories to make the personal political, and how that looks online – working from Plummer’s work (2003) in this area. All was well… And then I went down the rabbit hole of publics and the public discourses that are created when people share personal stories in public spaces… So I have tried to map these aspects, thinking about the goals of hashtags and who started them as well – not something non-academics tend to look at. I will also be talking about the hashtags themselves.

So I tried to think about and map goals, political and affective aspects, and the affordances and relationships around these. The affordances of hashtags include: curational – immediacy, reciprocity and conversationality (Papacharissi 2015); polysemic – plurality, open signifiers, diverse meanings (Fiske 1987); memetic – replicable, ever-evolving, remixable, spreadable cultural information (Knobel and Lankshear 2007); duality in communities of practice – opposing forces that drive change and creativity, local and broader for instance (Wenger 1998); and articulated subjectivities – momentarily jumping in and out of hashtags without really engaging beyond brief usage.

And how can I understand political hashtags on Twitter and their impact? Are we just sharing amongst ourselves, or can we measure that? So I want to think about agenda setting and re-framing – the hashtags I am looking at speak to a public event, or speak back to a media event that is taking place another way. We have co-option by organisations etc. And we see (strategic) essentialism; awareness/mobilisation; and amplification/silencing of (privileged/marginalised) narratives. So #YesAllWomen is adopted by many privileged white feminists but was started by a biracial Muslim woman. Indeed all of the hashtags I study were started by non-white women.

So, #YesAllWomen was started in response to a terrible shooting, where the shooter had written a diatribe about women denying him. The person who created that hashtag left Twitter for a while but has now returned. We do see lots of tweets using that hashtag, responding with relevant experiences and comments. But it became problematic, too open… That memetic affordance: a controversial male monologist used it as the title for his show, it was used abusively by trolls, and beauty brands appeared in the tag.

The #WhyIStayed hashtag was started by Beverly Gooden in response to commentary that a woman should have left her partner, and the fact that the media wasn’t asking why that man had beaten and abused his partner. So people shared real stories… But a pizza company also used it – though they apologised and admitted not researching first. Some found the hashtag traumatic… But others shared resources for other women here…

So, I wanted to talk about how these spaces are creating networked publics, and these do have power to effect change. I also want to highlight the idea of openness, of lack of control, and the consequences of that openness. #YesAllWomen has lost its meaning to an extent, and is now a very white hashtag. But if we look at these and think of them with social theories, we can think about what this means for future movements and publicness.

Internet Rules, Media Logics and Media Grammar: New Perspectives on the Relation between Technology, Organization and Power – Caja Thimm, University of Bonn, Germany

I’m going to report briefly on a long term project on Twitter funded by a range of agencies. There is also a book coming on Twitter and the European Election. So, where do we start… We have Twitter. And we have tweets in French – e.g. from Marine Le Pen – but we see Tweets in other languages too – emoticons, standard structures, but also visual storytelling – images from events.

We have politicians, witnesses, and we see other players, e.g. the police. So first of all we wanted a model for tweets and how we can understand them. We used the Functional Operator Model (Thimm et al 2014) – but that’s descriptive: great for organising data but not for analysing and understanding platforms.

So, we started with a conference on Media Logic, an old concept from the 1970s. Media Logic offers an approach to develop parameters for a better analysis of such new forms of “media”. It defines players, objectives and power, how players interact and what they do (e.g. how others conquer a hashtag, for instance). Consequently, media logics can be considered as a network of parameters.

So, what are the parameters of Media Logics that we should understand?

  1. Media Logic and communication cultures. For instance how politicians and political parties take into account media logic of television – production routines, presentation formats (Schulz 2004)
  2. Media Logic and media institutions – institutional and technological modus operandi (Hjarvard 2014)
  3. Media Grammar – a concept drawn from analogy of language.

So, let’s think about the constituents of “Media Grammar”. Periscope came out of a need, a gap… So you have Surface Grammar – visible and accessible to the user (language, semiotic signs, sounds etc). Surface Grammar is (sometimes) open to the creativity of users. It guides use through media.

(Constitutive) Property Grammar is different. It is constitutive for the medium itself and determines the rules at the functional level beneath the surface. It consists of algorithms (though not exclusively). It is not accessible to anyone but the platform itself. And surface grammar and property grammar form a reflexive feedback loop.

We also see van Dijck and Poell (2013) talking about social media as powerful institutions, so there is the idea of connecting social media grammar here to understand that… This opens up the focus on the open and hidden properties of social media and their interplay with communicative practices. Social media are differentiated, segmented and diverse to such a degree that it seems necessary to focus in more closely to gain a better idea of how we understand them as technology and society…

Panel Q&A

Q1) A general question to start off. You presented a real range of methodologies, but I didn’t hear a lot about practices and what people actually do, and how that fits into your models.

A1 – Caja) We have a six year project, millions of tweets, and we are trying to find patterns of what they do, and who does what. There are real differences in usage but we are still working on what those mean.

A1 – Jacqueline) I think that when you look at very large hashtags, even #blacklivesmatter, you do see community participation. But the tags I’m looking at are really personal, not “Political”; these use hashtags as a momentary act in some way, not really a community of practice in a sustained movement, though some do trigger bigger movements and engagement…

A1 – Eugenia) We see hate speech being gamed… People put outrageous posts out there to see what will happen, if they will be taken down…

Q2) I’ve been trying to find an appropriate framework… The field is so multidisciplinary… For a study I did on Native American activists, we saw interest groups – discursive groups – loosely stitched together with #indigenous. I’m sort of using the phrase “translator” to capture this. I was wondering if you had any thoughts on how we navigate this…

A2 – Caja) It’s a good question… This conference is very varied, there are so many fields… Socio-linguistics has some interesting frameworks for accommodations in Twitter. No-one seems to have published on that.

A2 – Jacqueline) I think understanding the network, the echo chamber effects, mapping of that network and how the hashtag moves, might be the way in there…

Q2) That’s what we did, but that’s also a problem… But hashtag seems to have a transformative impact too…

Q3) I wonder if, when we say Social Media Logic, we lose sight of the overarching issue…

A3 – Caja) I think that Media Logic is at a really early stage… It was founded in the 1970s when media were so different. But there are real power asymmetries… And I hope we find a real way to bridge the two.

Q4) Many of these arguments come down to how much we trust the idea of the structure in action. Eugenia talks about creating rules iteratively around the issue. Jacqueline talked about the contested rules of play… It’s not clear of who defines those terms in the end…

A4 – Eugenia) There are certain media logics in place now… But they are fast moving as social media move to monetise, to develop, to change. Twitter launches Periscope, Facebook then launches Facebook Live! The grammar keeps on moving, aimed towards the users… Everything keeps moving…

A4 – Caja) But that’s the model. The dynamics are at the core. I do believe in media grammar at the level of small elements that are magic – like the hashtag, which has transgressed the platform and even the written form. But it’s about how they work, and whether there are logics inscribed.

A4 – Stefania) There are, of course, attempts made by the platform to hide the logic, and to hide the dynamics of the logic… Even at a radical activist conference there are people who cannot imagine their activism without the platform – and that statement also comes from a belief that they understand the platform.

Q5) I study hate speech too… I came with my top five criticisms but you covered them all in your presentation! You talked about location (IP address) as a factor in hate speech, rather than jurisdiction.

A5 – Eugenia) I think they (nameless social platform) take this approach in the same way that they do for take down notices… But they only do that for France and Germany where hate speech law is very different.

A5 – Caja) There is a study that has been taking place about take downs and the impact of pressure, politics, and effect across platforms when dealing with issues in different countries.

A5 – Eugenia) Twitter has a relationship with NGOs. and have a priority to deal with their requests, sometimes automatically. But they give guidance on how to do that, but they are outsourcing that process to these users…

Q6) I was thinking about platform logics and business logics… And how the business models are part of these logics. I was wondering if you could talk to some of the methodological issues there… And the issue of the growing power of governments – for instance Benjamin Netanyahu meeting Mark Zuckerberg and talking to him about taking down Arabic journalists.

A6 – Eugenia) This is challenging… We want to research them and we want to critique them… But we don’t want to find ourselves blacklisted for doing this. Some of the people I spoke to are very sensitive about, for instance, Palestinian content and when they can take it down. Sometimes though platforms are keen to show they have the power to take down content…

Q7) For Eugenia, you had very good access to people at these platforms. Not surprised they are reluctant to be quoted… But that access is quite difficult in our experience – how did you do it.

A7) These people live in Dublin so you meet them at conferences; there are crossovers through shared interests. Once you get in, it’s easier to meet and speak to them… Speaking is OK; quoting and identifying names in our work is different. But it’s not just in social media…

Comment) These people really are restricted in who they can talk to… There are PR people at one platform… You ask for comparative roles and use that as a way in… You can start to sidle inside. But mainly it’s the PR people you can access… I’ve had some luck referring to role area at a given company, rather than by name.

Q8 – Stefania) I was wondering about our own roles, in this room, and the issue of agency and publics…

A8 – Jacqueline) I don’t think publics take agency away; in the communities I look at these women benefit from the publics, and from sharing… But actually what we understand as publics varies… So in some publics some talk about exclusion of, e.g., women or people of colour, but there are counter publics…

A8 – Caja) Like you were saying there are mini publics and they can be public, and extend out into media and coverage. I think we have to look beyond the idea of the bubble… It’s really fragmented and we shouldn’t overlook that…

And with that, the conference is finished. 

You can read the rest of my posts from this week here:

Thanks to all at AoIR for a really excellent week. I have much to think about, lots of contacts to follow up with, and lots of ideas for taking forward my own work, particularly our new YikYak project.

Oct 07 2016

PS-15: Divides (Chair: Christoph Lutz)

The Empowered Refugee: The Smartphone as a Tool of Resistance on the Journey to Europe – Katja Kaufmann

For those of you from other continents: we had a great number of refugees coming to Europe last year, from Turkey, Syria, etc., travelling to Germany and Sweden – and Vienna, where I am from, was also a hub. Some of these refugees had smartphones and there was coverage in the (right wing) press criticising this group’s ownership of devices, but it was not clear how many had smartphones or how they were being used, and that’s what I wanted to look at.

So we undertook interviews with refugees to see if they used smartphones, and how. We were researching empowerment by mobile phones, following Svensson and Wamala Larsson (2015) on the role of the mobile phone in transforming capabilities of users, and with reference to N. Kabeer (1999), A. Sen (1999), etc. on meanings of empowerment in these contexts. Smith, Spence and Rashid (2011) describe mobiles and their networks altering users’ capability sets, and there is work on phones increasing access to flows of information (Castells 2012).

So, I wanted to identify how smartphones were empowering refugees through: gaining an advantage in knowledge from the experiences of other refugees; sensory information; cross-checking information; and capabilities to oppose the actions of others.

In terms of an advantage in knowledge refugees described gaining knowledge from previous refugees on reports, routes, maps, administrative processes, warnings, etc. This was through social networks and Facebook groups in particular. So, a male refugee (age 22) described which people smugglers cannot be trusted, and which can. And another (same age) felt that smart phones were essential to being able to get to Europe – because you find information, plan, check, etc.

So, there was retrospective knowledge here, but also engagement with others during their refugee experience, and with those ahead of them on their journey. This was mainly via WhatsApp. A male refugee (aged 24) described being in Macedonia and speaking to refugees in Serbia to find out the situation there. This was particularly important last year, when approaches were changing and border access changed on an hour by hour basis.

In terms of Applying Sensory Abilities, this was particularly manifested in identifying own GPS position – whilst crossing the Aegean or woods. Finding the road with their GPS, or identifying routes and maps. They also used GPS to find other refugees – friends, family members… Using location based services was also very important as they could share data elsewhere – sending GPS location to family members in Sweden for instance.

In terms of cross-checking information and actions, refugees were able to track routes whilst in the hands of smugglers. A male Syrian refugee (aged 30) checked information every day whilst with people smugglers, to make sure that they were being taken in the right direction – he wanted to head west. But it wasn’t just routes; refugees also cross-checked rumours, and weather conditions before entering a boat. A female Syrian refugee downloaded an app to check conditions and ensure her smuggler was honest and her trip would be safer.

In terms of opposing the actions of others, this was about being capable of opposing orders of authorities, potential acts of (police) violence, risks, fraud attempts, etc. There was also disobedience by knowledge – the Greek government gave orders about the borders, but smartphones allowed annotated map sharing that let those orders be disobeyed. And access to timely information – exchange rates, for example: a refugee described negotiating the price of changing money down by Googling the rate. Opposition was also about a means to apply pressure, by threatening with or publishing photos. A male refugee (aged 25) described holding up phones to threaten to document police violence, and that was impactful. Some refugees also took pictures of people smugglers as a form of personal protection and information exchange, with publication of images held as a threat in case of mistreatment.

So, in summary, the smartphones…


Q1) Did you have any examples of privacy concerns in your interviews, or was this a concern for later perhaps?

A1) Some mentioned this; some felt some apps and spaces are more scrutinised than others. There was concern that others may have been identified through Facebook – a feeling rather than proof. One said that she did not send her parents any pictures in case she was mistaken by the Syrian government for a fighter. But mostly privacy wasn’t an immediate concern; access to information was – and it was very successful.

Q2) I saw two women in the data here, were there gender differences?

A2) We tried to get more women but there were difficulties there. On the journey they were using smartphones in similar ways – but I did talk to them and they described differences in use before their journey and talked about picture taking and sharing, the hijab effect, etc.

Social media, participation, peer pressure, and the European refugee crisis: a force awakens? – Nils Gustafsson, Lund university, Sweden

My paper is about receiving/host nations. Sweden took in 160,000 refugees during the crisis in 2015. I wanted to look at this as it was a strange time to live in. A lot of people started coming in late summer and early autumn… Numbers were rising. At first response was quite enthusiastic and welcoming in host populations in Germany, Austria, Sweden. But as it became more difficult to cope with larger groups of people, there were changes and organising to address challenge.

And the organisation will remind you of Alexander (??) on the “logic of collective action” – where groups organise around shared ideas that can be joined, almost a brand, e.g. “refugees welcome”. And there were strange collaborations between government, NGOs, and then these ad hoc networks. But there was also a boom and bust aspect here… In Sweden there were statements about opening hearts, of not shutting borders… But people kept coming through autumn and winter… By December, Denmark, Sweden, etc. did a 180 degree turn, closing borders. There were border controls between Denmark and Sweden for the first time in 60 years. And that shift had popular support. I was intrigued by this. This work is all part of a longer 3 year project on young people in Sweden and their political engagement – how they choose to engage, how they respond to each other. We draw on Bennett & Segerberg (2013), social participation, social psychology, and the notion of “latent participation” – where people are waiting to engage so just need asking to mobilise.

So, this is work in progress and I don’t know where it will go… But I’ll share what I have so far. And I tried to focus on recruitment – I am interested in when young people are recruited into action by their peers. I am interested in peer pressure here – friends encouraging behaviours, particularly important given that we develop values as young people that have lasting impacts. But also information sharing through young people’s networks…

So, as part of the larger project, we have a survey, and we added some specific questions about the refugee crisis to it. We asked, “you remember the refugee crisis – did you discuss it with your friends?” – 93.5% had, which was not surprising as it was a major issue. When we asked if they had discussed it on social media it was around 33.3% – much lower, perhaps due to the controversy of the subject matter; this number was also similar in the 16-25 year old age group.

We also asked whether they did “work” around the refugee crisis – volunteering or work for NGOs and traditional organisations. Around 13.8% had. We also asked about work with non-traditional organisations and 26% said that they had (in the 16-25 age group, it was 29.6%), which seems high – but we have nothing to compare this to.

Colleagues and I looked at Facebook refugee groups in Sweden – those that were open – and I looked at and scraped these (n=67) and I coded these as being either set up as groups by NGOs, churches, mosques, traditional organisations, or whether they were networks… Looking across autumn and winter of 2015 the posts to these groups looked consistent across traditional groups, but there was a major spike from the networks around the crisis.
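The coding and post-trend comparison described above could be sketched roughly as follows, assuming each scraped group has been hand-coded as “traditional” (NGOs, churches, mosques) or “network” (ad hoc groups) and each post has a date. The sample data here is entirely hypothetical, not the study’s actual corpus:

```python
from collections import Counter
from datetime import date

# Hypothetical sample of scraped posts: (group_type, post_date).
posts = [
    ("traditional", date(2015, 9, 7)),
    ("traditional", date(2015, 10, 5)),
    ("network", date(2015, 9, 7)),
    ("network", date(2015, 9, 8)),
    ("network", date(2015, 9, 9)),
    ("network", date(2015, 12, 14)),
]

def weekly_counts(posts):
    """Tally posts per (group_type, ISO year, ISO week) so the
    activity curves of the two group types can be compared."""
    counts = Counter()
    for group_type, d in posts:
        year, week, _ = d.isocalendar()
        counts[(group_type, year, week)] += 1
    return counts

counts = weekly_counts(posts)
print(counts[("network", 2015, 37)])      # ad hoc network posts in one week
print(counts[("traditional", 2015, 37)])  # traditional org posts, same week
```

Plotting these weekly tallies per group type would show the pattern described: flat activity for traditional organisations, with a spike from the ad hoc networks around the crisis.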

We have also been conducting interviews in Malmö, with 16-19 and 19-25 year olds. They commented on media coverage, and the degree to which the media influences them, even with social media. Many commented on volunteering at the central station, receiving refugees. Some felt it was inspiring to share stories, but others talked about their peers doing it as part of peer pressure, with critical comments about “bragging” in Facebook posts. Then as the mood changed, the young people talked about going to the central station being less inviting, about fewer Facebook posts… about feeling that “maybe it’s OK then”. One of our participants was from a refugee background and…


Q1) I think you should focus on where interest drops off – there is a real lack of research there. But on the discussion question, I wasn’t surprised that only 30% discussed the crisis there really.

A1) I wasn’t too surprised either here as people tend to be happier to let others engage in the discussion, and to stand back from posting on social media themselves on these sorts of issues.

Q2) I am from Finland, and we also helped in the crisis, but I am intrigued at the degree of public turnaround as it hasn’t shifted like that in Finland.

A2) Yeah, I don’t know… The middleground changed. Maybe something Swedish about it… But also perhaps to do with the numbers…

Q2) I wonder… There was already a strong anti-immigrant movement from 2008, I wonder if it didn’t shift in the same way.

A2) Yes, I think that probably is fair, but I think how the Finnish media treated the crisis would also have played a role here too.

An interrupted history of digital divides – Bianca Christin Reisdorf, Whisnu Triwibowo, Michael Nelson, William Dutton, Michigan State University, United States of America

I am going to switch gears a bit with some more theoretical work. We have been researching internet use and how it changes over time – from a period when there was very little knowledge or use of the internet to the present day. I’ll give some background, then talk about survey data – which is an issue in itself… I’ll be talking about quantitative survey data, as it’s hard to find systematic collections of qualitative research instruments that I could use in my work.

So we have been asking about internet use for over 20 years… And right now I have data from Michigan, the UK, and the US… I have also just received further data from South Africa (this week!).

When we think about digital inequality, the idea of the digital divide emerged in the late 1990s – there was government interest, data collection, academic work. This was largely about the haves vs. have-nots; on vs. off. We saw a move to digital inequalities (Hargittai) in the early 2000s… Then it went quiet, aside from work from Neil Selwyn in the UK, and from Helsper and Livingstone… But the discussion has since moved on to skills…

Policy wise we have also seen a shift… Lots of policies around digital divide up to around 2002, then a real pause as there was an assumption that problems would be solved. Then, in the US at least, Obama refocused on that divide from 2009.

So, I have been looking at data from questionnaires from Michigan State of the State Survey (1997-2016); questionnaires from digital future survey in the US (2000, 2002, 2003, 2014); questionnaires from the Oxford Internet Surveys in the UK (2003, 2005, 2007, 2009, 2013); Hungarian World Internet Project (2009); South African World Internet Project (2012).

Across these data sets we have looked at questionnaires and the frequency of use of particular questions on use, on lack of use, etc. When internet penetration was lower there was a lot of explanation in the questions, but we have shifted away from that, so that we assume people understand – and we’ve never returned to it. We’ve shifted to device questions, but we don’t ask beyond that. We used to ask about the number of hours online… But that increasingly made less sense – we do that less as the answer is essentially “all day” – shifting instead to how frequently people go online.

Now the State of the State Survey in Michigan is different from the other data here – all the others are World Internet Project surveys but SOSS is not looking at the same areas, as they are not necessarily internet researchers. In Hungary (2009 data) similar patterns of question use emerged, but with a particular focus on mobile use. But the South African questionnaire was very different – they ask how many people in the household are using the internet; we ask about the individual but not others in the house, or others coming to the house. South Africa had around 40% penetration of internet connection (at least in 2012, when we have data), which is a very different context. There they ask about lack of access and use, and the reasons for that. We ask about use/non-use rather than reasons.
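One way to picture this cross-wave comparison is a simple tally of how many questionnaire waves include each coded question topic, which makes dropped or added questions visible at a glance. The wave names and topic codes below are illustrative assumptions, not the project’s actual coding scheme:

```python
from collections import defaultdict

# Hypothetical coding of questionnaire items: wave -> set of question topics.
waves = {
    "Wave 2003": {"use", "explanation_of_internet", "hours_online"},
    "Wave 2007": {"use", "hours_online", "devices"},
    "Wave 2013": {"use", "devices", "frequency_online"},
}

def topic_frequency(waves):
    """Count in how many waves each coded topic appears."""
    freq = defaultdict(int)
    for topics in waves.values():
        for topic in topics:
            freq[topic] += 1
    return dict(freq)

freq = topic_frequency(waves)
print(freq["use"])                             # asked in every wave
print(freq.get("explanation_of_internet", 0))  # dropped after early waves
```

Run over real questionnaires, a tally like this would surface exactly the pattern described: explanatory questions fading out as penetration rose, and device and frequency questions appearing later.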

So there is this gap in the literature; there is a need for quantitative and qualitative methods here. We also need to consider other factors, particularly technology itself being a moving target – in South Africa they ask about internet use and also Facebook, as people don’t always identify Facebook as internet use. Indeed so many devices are connected – maybe we need…


Q1) I have a question about the questionnaires – do any ask about costs? I was in Peru, where there is a lack of connections, but phones often offer free WhatsApp and free Pokemon Go.

A1) Only the South African one asks that… It’s a great question though…

Q2) You can get Pew questionnaires and also Ofcom questionnaires from their website. And you can contact the World Internet Project directly… And there is an issue with people not knowing if they are on the internet or not – increasingly you ask a battery of questions… and then filtering on that – e.g. if you use email you get counted as an internet user.

A2) I have done that… Trying to locate those questionnaires isn’t always proving that straightforward.

Q3) In terms of instruments – maybe there is a need to develop more nuanced questionnaires there.

A3) Yes.

Levelling the socio-economic playing field with the Internet? A case study in how (not) to help disadvantaged young people thrive online – Huw Crighton Davies, Rebecca Eynon, Sarah Wilkin, Oxford Internet Institute, United Kingdom

This is about a scheme called the “Home Access Scheme” and I’m going to talk about why we could not make it work. The origins here were a city council’s initiative – they came to us. DCLG (2016) data showed 20-30% of the population were below the poverty line, and we knew around 7-8% locally had no internet access (known through survey responses). The players here were researchers, local government, schools, and also an (unnamed) ISP.

The aim of the scheme was to raise attainment in GCSEs, to build confidence, and to improve employability skills. The schools had a responsibility to identify students in need, to procure laptops, memory sticks and software, and to provide regular, structured in-school pastoral skills and opportunities – not just in computing class. The ISP was to provide set up help, technical support, and free internet connections for 2 years.

This scheme has been running two years, so where are we? Well we’ve had successes: preventing arguments and conflict; helped with schoolwork, job hunting; saved money; and improved access to essential services – this is partly as cost cutting by local authorities have moved transactions online like bidding for council housing, repeat prescription etc. There was also some intergenerational bonding as families shared interests. Families commented on the success and opportunities.

We did 25 interviews, 84 one-to-one sessions in schools, 3 group workshops, 17 ethnographic visits, plus many more informal meet ups. So we have lots of data about these families, their context, their lives. But…

Only three families had consistent internet access throughout. Only 8 families are still in the programme. It fell apart… Why?

Some schools were so nervous about use that they filtered and locked down their laptops. One school used the scheme money to buy teacher laptops, gave students old laptops instead. Technical support was low priority. Lead teachers left/delegated/didn’t answer emails. Very narrow use of digital technology. No in-house skills training. Very little cross-curriculum integration. Lack of ICT classes after year 11. And no matter how often we asked about it we got no data from schools.

The ISP didn’t set up collections, didn’t support the families, didn’t do what they had agreed to. They tried to bill families and one was threatened with debt collectors!

So, how did this happen? Well maybe these are neoliberalist currents? I use that term cautiously but… We can offer an emergent definition of neoliberalism from this experience.

There is a neoliberalist disfigurement of schools: teachers under intense pressure to meet auditable targets; the scheme’s students subject to a range of targets used to problematise a school’s performance – exclusions, attendance, C grades; the scheme shuffled down the priorities; ICT not deemed academic enough under the Govian school changes; and learning stripped back to a narrow range of subjects and focused towards these targets.

There were effects of neoliberalism on the city council: targets and a “more for less” culture; the scheme disincentivised; erosion of the authority of democratic institutions, with schools beyond the council’s control; and high turnover of staff.

There were neoliberalist practices at the ISP: commodifying philanthropy; they could not treat families as anything but customers. And there were dysfunctional mini-markets: they subcontracted delivery and set up; they subcontracted support; they charged for support and charged for internet even when they couldn’t help…


Q1) Is the problem digital divides but divides… Any attempt to overcome class separation and marketisation is working against the attempts to fix this issue here.

A1) We have a paper coming and yes, there were big issues here for policy and a need to be holistic… We found parents unable to attend parents’ evenings due to shift work, and nothing in the school processes to accommodate this. And the measure of poverty for children is “free school meals”, but many do not want to apply as it is stigmatising, and many don’t qualify even on very low incomes… That leads to children and parents being labelled disengaged or problematic.

Q2) Isn’t the whole basis of this work neoliberal though?

A2) I agree. We didn’t set the terms of this work…

Panel Q&A

Q1/comment) RSE and access

A1 – Huw) Other companies the same

Q2) Did the refugees in your work Katja have access to Sim cards and internet?

A2 – Katja) It was a challenge. Most downloaded maps and resources… And actually they preferred Apple to Android as the GPS is more accurate without an internet connection – that makes a big difference in the Aegean Sea, for instance. Refugees shared SIM cards, and used power banks for energy.

Q3) I had a sort of reflection on Nils’ paper and where to take this next… It occurs to me that you have quite a few different arguments… You have this survey data, the interviews, and then a different sort of participation from the Facebook groups… I have students in Berlin looking at the boom and bust – and I wondered about connecting that Facebook group work up to that type of work – it seems quite separate from the youth participation section.

A3 – Nils) I wasn’t planning on talking about that, but yes.

Comment) I think there is a really interesting aspect of these campaigns and how they become part of social media and the everyday life online… The way they are becoming engaged… And the latent participation there…

Q3) I can totally see that, though challenging to cover in one article.

Q4) I think it might be interesting to talk to the people who created the surveys to understand motivations…

A4) Absolutely, that is one of the reasons I am so keen to hear about other surveys.

Q5) You said you were struggling to find qualitative data?

A5 – Katja) You can usually download quantitative instruments, but that is harder for qualitative instruments including questions and interview guides…

XP-02: Carnival of Privacy and Security Delights – Jason Edward Archer, Nathanael Edward Bassett, Peter Snyder, University of Illinois at Chicago, United States of America

Note: I’m not quite sure how to write up this session… So these are some notes from the more presentation parts of the session and I’ll add further thoughts and notes later… 

Nathanial: We have prepared three interventions for you today and this is going to be kind of a gallery exploring space. And we are experimenting with wearables…

Fitbits on a Hamster Wheel and Other Oddities, oh my!

Nathanial: I have been wearing a FitBit this week… but these aren’t new ideas… People used to have beads for counting; there are self-training books for wrestling published in the 16th century. Pedometers were conceived of in Leonardo da Vinci’s drawings… These devices are old, and tie into ideas of posture, and mastering control of our physical selves… And we see the pedometer being connected with regimes of fitness – like the Manpo-kei (“10,000 steps meter”, 1965). This narrative takes us to the 1970s running boom and the idea of recreational discipline. And now the world of smart devices… Wearables are taking us to biometric analysis as a mental model (Neff – preprint).

So, these are ways to track, but what happens with insurance companies, with those monitoring you? At Oral Roberts University students have to track their fitness as part of their role as students. What does that mean? I encourage you all to check out “Unfit Bits” – interventions to undermine tracking. Or we could, rather than going to the gym with a FitBit, give it to Terry Crews – he’s going anyway! – and he could earn money… Are fitness slaves in our future?

So, use my FitBit – it’s on my account

And so, that’s the first part of our session…

?: Now, you might like to hear about the challenges of running this session… We had to think about how to make things uncomfortable… But then how do you get people to take part? We considered a man-in-the-middle site that was ethically far too problematic! And no-one was comfortable participating in that way… Certainly raising the privacy and security issue… But we talk of data as a proxy for us… As internet researchers a lot of us are more aware of privacy and security issues than the general population, particularly around metadata. But this would have been one day… I was curious if people might have faked their data for that one-day capture…

Nathanael: And the other issue is why we are so much more comfortable sharing information with FitBit and other sharing platforms – faceless entities – versus people you meet at a conference… And we didn’t think about a gender aspect here… We are three white guys and we are less sensitive to that data being publicised rather than privatised. Men talk about how much they can bench press… but personal metadata can make you feel under scrutiny.

Me: I wouldn’t want to share my data and personal data collection tools…

Borrowing laptop vs borrowing phone…

?: In the US there have been a few cases where FitBits have been submitted as evidence in court… But that data is easier to fake… In one case a woman claimed to have been raped, and they used her FitBit data to suggest that…

Nathanael: You talked about not being comfortable handing someone your phone… It is really this black box… Is it a wearable? It has all that stuff, but you wear it on your body…

??: On cellphones there is FOMO – Fear Of Missing Out… What you might miss…

Me: Device as security

Comment: There is ableism embedded in these devices… I am a cancer survivor and I first used step counts as part of a research project on chemotherapy and activity… When I see a low step day on my phone now… I can feel the stress of those triggers for someone going through that experience…

Nathanael: FitBits vibrate when you have/have not done a number of steps… trying to put you in an ideological state apparatus…

Jh: That nudge… That can be good for the able-bodied… But if you can’t move, that is a very different experience… How does that add to their stress load?

Interperspectival Goggles

Again looking at the condition of virtuality – Hayles 2006(?)

Vision is constructed… Thinking of higher resolution… from small phone to big phone, from lower resolution to higher resolution TV… We have spectacles, quizzing glasses and monocles… And there is the strange idea of training ourselves to see better (William Horatio Bates, 1920s)… and of emotional state interfering with how you see. Then we have optometry and x-rays as a concept of seeing what could not be seen before… And you have special goggles and helmets… like the idea of the Image Accumulator in Videodrome (1983), or the idea of the memory recorder and playback device in Brainstorm (1983). We see embodied work stations – the Da Vinci Surgical Robot (2000) – divorcing what is seen from what is in front of the operator…

There are also playful ideas: binocular football; the Decelerator Helmet; Meta-perceptional Helmet (Cleary and Donnelly 2014); and most recently Google Glass – what is there and also extra layers… Finally we have Oculus Rift and VR devices – seeing something else entirely… We can divorce what we see from what we are perceiving… We want to swap people’s vision…

1. Raise awareness about the complexity of electronic privacy and security issues.

2. Identify potential gaps in the research agenda through playful interventions, subversions, and moments of the absurd.

3. Be weird, have fun!


“Cell phones are tracking devices that make phone calls” (Appelbaum, 2012)

I am interested in IMSI catchers, which masquerade as wireless base stations, prompting phones to communicate with them. They are used by police, law enforcement, etc. They can be small and handheld, or they can be drone-mounted. And they can track individuals, people in crowds, etc. There is always a different way to use one – if you know someone is present, you can scan the crowd for their phone. So, these tools are simple, disruptive and problematic, especially in activism contexts.

But these tools are also capable of capturing transmitted content, and all the data in your phone. These devices are problematic and have raised all sorts of issues about their use – who uses them and how. I’d like to think of this a different way… Is there a right to protest? And to protest anonymously? We do have anti-masking laws in some places – that suggests no right to anonymous protest. But that’s still a different privacy right – covering my face is different from concealing that I participate at all…

Protests are generally about a minority persuading a majority about some sort of change. There is no legal right to protest anonymously, but there are lots of protected anonymous spaces. So, in the 19th century there was a big debate on whether or not the voting ballot should be anonymous – democracy is really the C19th killer app. There is a lovely quote about “The Australian system” by Bernheim (1889) and the introduction of anonymous voting. It wasn’t brought in to preserve privacy. At the time politicians bought votes – buying a keg of beer or whatever – and anonymity was there to stop that, not to preserve individual privacy. But Jill Lepore (2008) writes about how our forebears considered casting a “secret ballot” to be “cowardly, underhanded and despicable”.

So, back to these devices… There can be an idea that “if you have nothing to fear, you have nothing to hide”, but many of us understand that it is not true. And this type of device silences uncomfortable discourse.

Mathias Klang, University of Massachusetts Boston

Q1) How do you think that these devices fit into the move to allow law enforcement to block/“switch off” the cameras on protestors’/individuals’ phones?

A1) Well, people can resist these surveillance efforts, and you will see subversive moves. People can cover cameras, conceal devices etc. But with these devices it may be that the phone becomes unusable, requiring protestors to disable phones or leave them at home… And phones are really popular and well used for coordinating protests.

Bryce Newell, Tilburg Institute for Law, Technology, and Society

I have been working on research in Washington State, working with law enforcement on license plate recognition systems and public disclosure law, and looking at what you can tell from the data. So, here is a map of license plate data from Seattle, showing vehicle activity. In Minneapolis the release of similar data led to mapping of the governor’s registered vehicles…

The second area is about law enforcement and body cameras. Several years ago peaceful protestors at UC Davis were pepper sprayed. Even in the cropped version of that image you can see a vast number of phones out, recording the event. And indeed there are a range of police surveillance apps that allow you to capture police encounters without that being visible on the phone, including: ACLU Police Tape, Stop and Frisk Watch; OpenWatch; CopRecorder2. And some of these apps upload the recording to the cloud right away to ensure capture. And there have certainly been a number of incidents from Rodney King to Oscar Grant (BART), Eric Garner, Ian Tomlinson, Michael Brown. Of these only the Michael Brown case featured law enforcement with bodycams. There has been a huge call for more cameras on law enforcement… During a training meeting some officers told me “Where’s the direct-to-YouTube button?” and “If citizens can do it, why can’t we also benefit from the ability to record in public places?”. There is a real awareness of control and of citizen videos. I also heard a lot of there being “a witch hunt about to begin…”.

So, I’m in the middle of focused coding on police attitudes to body cameras. Police are concerned that citizen video is edited, out of context, distorting. And they are concerned that it doesn’t show wider contexts – when recording starts, perspective, the wider scene, the fact that provocation occurs before filming usually. But there is also the issue of control, and immediate physical interaction, framing, disclosure, visibility – around their own safety, around how visible they are on the web. They don’t know why it is being recorded, where it will go…

There have been a number of regulatory responses to this challenge: (1) restrict collection – not many, usually budgetary and rarely on privacy grounds; (2) restrict access – going back to the Minneapolis case, within two weeks of the map of governor’s vehicles being published in the paper they had an exemption to public disclosure law, which is now permanent for this sort of data. In the North Carolina protests recently the call was “release the tapes” – and they released only some – then the cry was “release all the tapes”… But on 1st October the law changed to again restrict access to this type of data.

But different states provide different access. In Oakland, California, data was released on how many license plates had been scanned. In Seattle, because the data from many scans of one licence plate over 90 days is quite specific, you can almost figure out the householder. But granularity varies.

Now, we do see body camera footage of sobriety tests, foot chases, and a half-hour-long interview with a prostitute that discloses a lot of data. Washington shares a lot of video to YouTube. We see police in Rotterdam, Netherlands, doing this too.

But one patrol officer told me that he would never give his information to an officer with a camera. Another noted that police choose when to start recording, with little guidance on when and how to do this.

And we see a “collateral visibility” issue for police around these technologies.


Q1) Is there any process where police have to disclose that they are filming with a body cam?

A1) Interesting question… Initially they didn’t know. We used to have a two-party consent process – as for tapings – to ensure consent/implied consent. But the State Attorney General described this as outside of that privacy regulation, saying that a conversation with a police officer is a public conversation. But police are starting to have policies that officers should disclose that they have cameras – partly as they hope it may reduce violence towards police.

Data Privacy in commercial users of municipal location data – Meg Young, University of Washington

My work looks at how companies use Seattle’s location data. I wanted to look at how data privacy is enacted by Seattle municipal government. I am drawing on the work of Annemarie Mol and John Law (2004), ethnographers working on health, which focuses on lived experience. My data draws on ethnographic work as well as focus groups and interviews with municipal government and local civic technology communities. I really wanted to present the role of commercial actors in data privacy in city government.

We know that cities collect location data to provide services, and share it with third parties to do so. In Washington we have a state freedom of information (FOI) law, which states “The people of this state do not yield their sovereignty to the government…”, making data requestable.

In Seattle the traffic data is collected by a company called Acyclica. The city is growing and the infrastructure is struggling, so they are gathering data to deal with this, to shape traffic signals. This is a large-scale longitudinal data collection process. Acyclica do that with wi-fi sensors that sniff MAC addresses, with the location traces sent to Acyclica (MACs salted). The data is aggregated and sent to the city – they don’t see the detailed, creepy tracking, but the company does. And this is where the FOI law comes in. The raw data stays on the company side. If the raw data were a public record, it would be requestable. The company becomes a shield for collecting sensitive data – it is proprietising it.
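Acyclica’s actual pipeline isn’t public, so purely as an illustrative sketch, salting sniffed MAC addresses before transmission might look something like this (the hash choice and salt handling here are assumptions, not the company’s documented scheme):

```python
import hashlib
import secrets

# Illustrative only: a per-deployment secret salt. How often a real
# sensor rotates this (if ever) determines how long one device
# remains linkable across sightings.
SALT = secrets.token_bytes(16)

def salted_mac(mac: str) -> str:
    """Return a salted hash of a sniffed MAC address, so the raw
    hardware identifier never leaves the roadside sensor."""
    normalized = mac.lower().encode("ascii")
    return hashlib.sha256(SALT + normalized).hexdigest()
```

Note that salting alone only pseudonymises: while the salt stays stable, the same device yields the same token – which is exactly what enables the longitudinal travel-time analysis, and exactly what makes the raw feed sensitive.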

So you can collect data and have service needs met, but without it becoming public to you and me. But analysing the contract, the terms do not preclude the resale of data – though a Seattle Dept. of Transport (DOT) worker notes that right now people trust companies more than government. Now, I did ask about this data collection – not approved elsewhere – and was told that having wifi settings on in public makes you open to data collection, as you are in public space.

My next example is the data from parking meters/pay stations. This shows only the start and end of each parking session – no credit card numbers etc. The DOT is happy to make this available via public records requests. But you can track each individual, and they are using this data to model parking needs.

The third example is the Open Data Portal for Seattle. The city pays Socrata to host that public-facing data portal. Socrata also sells access to cleaned, aggregated data to companies through a separate API called the Open Data Network. The Seattle Open Data Manager didn’t see this situation as different from any other reseller. But there is little thought about third party data users – they rarely come up in conversations – who may combine this data with other data sets for analysis.

So, in summary, municipal government data is as much by and for commercial actors as it is for the public. Proprietary protections around data are a strategy for protecting sensitive data. Government transfers data to third parties…


Q1) Seattle has a wifi for all programme

A1) Promisingly this data isn’t being held side by side… But the routers that we connect to collect so much data… Seeing an Oracle database of the websites folks…

Q2) What are you policy recommendations based on your work?

A2) We would recommend licensing data with some restrictions on use, so that if the data is used inappropriately their use could be cut off…

Q2) So activists could be blocked by that recommendation?

A2) That is a tension… Activists are keen for no licensing here for that reason… It is challenging, particularly when data brokers can do problematic profiling…

Q2) But that restricts activists from questioning the state as well.

Response – Sandra Braman

I think that these presentations highlight many of the issues that raise questions about values we hold as key as humans. And I want to start from an aggressive position, thinking about how and why you might effectively be an activist in this sort of environment. And I want to say that any concerns about algorithmically driven processes should be evaluated in the same way as we would evaluate social processes. So, for instance, we need to think about how the press and media interrogate data and politicians.

? “Decoding the Social” (coming soon) is looking at social data and the analysis of social data in the context of big data. She argues that social life is too big and complex to be predictable from data. Everything that people who use big data “do” to understand patterns, activists can do too. We can be just as sophisticated as corporations.

The two things I am thinking about are how to mask the local, and how to use the local… When I talk of masking the local I look back to work I did several years ago on local broadcasting. There is a mammoth literature on TV as locale, on production and how that is separate and misrepresenting, and on the assumptions versus the actual information provided versus actual decision making. My perception of social activism is that there is some brilliant activity taking place – brilliance at moments, often in specific apps. And if you look at the essays Julian Assange wrote before he founded WikiLeaks, particularly on weak links and how those work… he uses sophisticated social theory in a political manner.

But anonymity is practicably impossible… What can we learn from local broadcast? You can use phones in organised ways – there was training in phone cameras for the Battle of Seattle, for instance. You can fight with indistinguishable actions – everyone doing the same things. Encryption is cat and mouse… Often we have activists presenting themselves as mice, although we did see an app discussed at the plenary designed to alert you to protests and risk. And I have written before on tactical memory.

In terms of using the local… If you know you will be sensed all the time, there are things you can do as an activist to use that. It is useful to think about how we can conceive of ourselves as activists as part of the network. And I was inspired by US libel laws – if a journalist has transmission/recording devices but is a neutral observer, they are not “repeating” the libel and can share that footage. That goes back to 1970s law, but it can be useful to us.

We are at risk of being censored, but that means that you have choices about what to share, being deliberate in giving signals. We have witnessing, which can be taken as a serious commitment. That can happen with people with phones; you can train witnessing. There are many moments where leakage can be an opportunity – maybe not with the volume or content of Snowden, but we can do that. There are also ways to learn and shape learning. But we can also be routers, and be critically engaged in that – what we share, the acceptable error rate. National security agencies are concerned about where in the stream they should target misinformation – activists can adopt that too. The server functions – see my strategic memory piece. We certainly have community-based wifi and MESH networks, and those are useful politically and socially. We have responsibilities to build the public that is appropriate, and the networking infrastructure that enables those freedoms. We can use more computational power to resolve issues. Information can be an enabler as well as influencing your own activism. Thank you to Anne and her group in Amsterdam for triggering thinking here, but we should be engaging critically with big data. If you can’t make decisions in some way, there’s no point to doing it.

I think there needs to be more robustness in managing and working with data. If you go far then you need a very high level of methodological trust. Information has to stand up in court, to respect activist contributions to data. Use as your standard, what would be acceptable in court. And in a Panspectrum (not Panopticon) environment, when data is collected all the times, you absolutely have to ask the right questions.

Panel Q&A

Q1) I was really interested in that idea of witnessing as being part of being a modern digital citizen… Is there more on protections, or on that, which you can say?

A1 – Sandra) We’ve seen all protections for whistleblowing in government disappear under Bush (II)… We still have protections for private sector whistleblowers. But there would be an interesting research project in there…

Q2) I wondered about that idea of cat and mouse use of technology… Isn’t that potentially making access a matter of securitisation…?

A2) I don’t think that “securitisation” makes you a military force… One thing I forgot to say was about network relations… If a system is interacting with another system – the principle of requisite variety – it has to be as complex as the system it is dealing with. You have to be at least as sophisticated as the other guy…

Q3) For Bryce and Meg, there are so many tensions over when data should be public and when it should be private… And police desires to show the good things they do. Also Meg, this idea of privatising data to ensure the privacy of data – it’s problematic for the city to collect data, but now a third party can do that.

A3 – Bryce) One thing I didn’t explain well enough is that video online comes from police, and from activists – it depends on the video. Some videos are accessed via public records requests and published to YouTube channels – in fact in Washington you can make requests for free and you can do it anonymously. The police department also publishes video itself. When they did a pilot in 2014 they held a hackathon to consider how to deal with redaction issues… detect faces, blur them, etc. And there is proactive posting of – only some – video. There is a narrative of sharing everything, but that isn’t the case. The rhetoric about being open has come from privacy rights groups and the new police chief. A lot of it was administrative cost concerns… In the hackathon they asked whether posting in a blurred form would do away with blanket requests and focus requests. At that time they dealt with all requests via email. They were receiving so many emails, and under state law they had to give up all the data for free. But state law varies; in Charlotte they gave up less data. In some states there is a different approach, with press conferences and narratives around the footage as they release parts of videos…

A3 – Meg) The city has worked on how to release data… They have a privacy screening process. They try to provide data in a way that is embedded. They still have a hard core central value that any public record is requestable. Collection limitation is an important and essential part of what cities should be doing… In a way, private companies collecting data results in large data sets that will end up insecure… Going back to what Bryce was saying, the bodycam initiative was really controversial… There was so much footage and it was unclear what should be public and when… And the faultlines have been pretty deep. We have the Coalition for Open Government advocating for full access, and the ACLU worried that these become surveillance cameras… This was really contentious… They passed a version of a compromise, but the bottom line is that the PRA is still a core value for the state.

A3 – Bryce) Much of the ACLU, nationally certainly, was supportive of bodycams, but individuals and local ACLUs change and vary… They were very pro, then backing off, with local variance… It’s a very different picture, hence that variance.

Q4) For Mathias, you talked about anti-masking laws. Are there cases where people have been brought in for jamming signals under that law?

A4 – Mathias) Right now in the American cases I am looking for keywords – manufacturers of devices, the ways data is discussed. I haven’t seen cases like that, but perhaps it is too new… I am a Swedish lawyer, and that jamming would be illegal in a protest…

A4 – Sandra) Would that be under anti-masking or under jamming law?

A4 – Mathias) It would be under hacking laws…

Q4) If you counter with information… But not if switching phone off…

A4 – Mathias) That’s still allowed right now.

Q5) Do you do work comparing US and UK body cameras?

A5 – Bryce) I don’t, but I have come across the Rotterdam footage. One of my colleagues has looked at this… The impetus for adoption in the Netherlands has been different. In the US it is transparency; in the Netherlands the narrative was protection of public servants. A number of co-authors have just published on the use of cameras and how they may increase assaults on officers… Seeing some counter-intuitive results… But the why question is interesting.

Comment) Is there any aspect of cameras being used in higher risk areas that makes that more likely perhaps?

A5 – Sandra) It’s the YouTube on-air question – everyone imagines themselves on air.

Q6) Two speakers quoted individuals accused of serious sexual assault… And I was wondering how we account for the fact that activists are not homogenous here… Particularly when tech activists are often white males, they can be problematic…

A6) Techies don’t tend to be the most politically correct people – to generalise a great deal…

A6 – Sandra) I think they are separate issues; if I didn’t engage with people whose behaviour is problematic it would be hard to do any job at all. Those things have to be fought, but as a woman you should also challenge and call out those white male activists on their actions.

Q7 – me) I was wondering about the retention of data. In Europe there is a lot of use of CCTV and the model there is to record everything and retain any incident. In the US CCTV is not in widespread use, I think, and the bodycam model is to record incidents in progress only… So I was wondering about that choice in practice, and about the retention of those videos and the data after capture.

A7 – Bryce) The ACLU has looked at retention of data. It is a state-based issue. In Washington there are mandatory minimum periods… They are interesting, as due to findings on conduct they are under requirements to keep everything for as long as possible so auditors from the DOJ can access and audit. In Bellingham and Spokane, officers can flag items, and supervisors can too… And that is what dictates the retention schedule. There are issues there, of course. The default when I was there was 2 years. If it is publicly available and hits YouTube then it will be far more long-lasting, can pop up again… Perpetual memory there… So the actual retention schedule won’t matter.

A7 – Sandra) A small follow up – you may have answered with that metadata… Do they treat bodycam data like other types of police data, or is it a separate class of data?

A7 – Bryce) Generally it is being thought of as data collection… And there is no difference for public disclosure, but they are really worried about public access. And how they share that with prosecutors… They could share on DVD… And they wanted to use the share function of the software… But they didn’t want emails to be publicly disclosable with that link… So it is being thought about like email.

Q8 – Sandra) On behalf of colleagues working on visual evidence in court…

Comment – Michael) There is work on video and how it can be perceived as “truth” without awareness of the potential for manipulation.

A8 – Bryce) One of the interesting things in Bellingham was the release of that video I showed of a suspect running away… The footage followed a police pick-up for suspected drug dealing, but it showed evasion of arrest and the whole encounter… And in that case, whether or not he was guilty of the drug charge, that video told a story of the encounter. In preparing for the court case the police shared the video with his defence team and almost immediately they entered a guilty plea in response… And I think we will see more of that kind of invisible use of footage that never goes to court.

And with that this session ends… 

PA-31: Caught in a feedback loop? Algorithmic personalization and digital traces (Chair: Katrin Weller)

Wiebke Loosen1, Marco T Bastos2, Cornelius Puschmann3, Uwe Hasebrink1, Sascha Hölig1, Lisa Merten1, Jan-Hinrik Schmidt1, Katharina E Kinder-Kurlanda4, Katrin Weller4

1Hans Bredow Institute for Media Research; 2University of California, Davis; 3Alexander von Humboldt Institute for Internet and Society; 4GESIS Leibniz Institute for the Social Sciences

?? – Marco T Bastos, University of California, Davis and Cornelius Puschmann, Alexander von Humboldt Institute for Internet and Society

Marco: This is a long-running project that Cornelius and I have been working on. At the time we started, in 2012, it wasn’t clear what impact social media might have on the filtering of news, but they are now huge mediators of news and news content in Western countries.

Since then there has been some challenge and conflict between journalists, news editors and audiences, and that raises the issue of how to monitor and understand that through digital trace data. We want to think about which topics are emphasized by news editors, and which are most shared on social media, etc.

So we will talk about taking two weeks of content from the NYT and The Guardian across a range of social media sites – that’s work I’ve been doing. And Cornelius has tracked 1.5/4 years’ worth of content from four German newspapers (Süddeutsche Zeitung, Die Zeit, FAZ, Die Welt).

With the Guardian we accessed data from the API, which tells you which articles were published in print and which were not – that is baseline data for the emphasis editors place on different types of content.

So, I’ll talk about my data from the NY Times and the Guardian, from 2013, though we now have 2014 and 2015 data too. This data from two weeks covers 16k+ articles. The Guardian runs around 800 articles per day; the NYT does around 1,000. And we could track the items on Twitter, Facebook, Google+, Delicious, Pinterest and StumbleUpon. We do that by grabbing the unique identifier for the news article, then using the social media endpoints of the platforms to find sharing counts. But we had a challenge with Twitter – in 2014 they killed the endpoint we and others had been using to track sharing of URLs. The other sites are active, but relatively irrelevant in the sharing of news items! And there are considerable differences across the ecosystems; some of these services are not immediately identifiable as social networks – will Delicious or Pinterest impact popularity?

This data allows us to contrast the differences in topics identified by news editors and social media users.

So, looking at the NYT there is a lot of world news, local news, opinion. Looking at the range of articles, Twitter maps relatively well (higher sharing of national news, opinion and technology news), but Facebook is really different – there is huge sharing of opinion, as people share what aligns with their interests. We see outliers in every section – some articles skew the data here.

If we look at everything that appeared in print, we can look at a horrible diagram that shows all shares… When you look here you see how big Pinterest is, but in fashion and lifestyle areas. The sharing there doesn’t really reflect the ratio of articles published, though. Google+ has sharing in science and technology in the Guardian, and in environment, jobs, local news, opinion and technology in the NYT.

Interestingly, news and sports are real staples of newspapers but barely feature here. Economics is even worse. Now, the articles are English-language but available globally… But what about differences in Germany? Over to Cornelius…

Cornelius: So Marco’s work is ahead of mine – he’s already published some of this work. But I have been applying his approach to German newspapers. I’ve been looking at usage metrics, the relationship between audiences and publishers, and how that relationship changes over time.

So, I’ve looked at Facebook engagement with articles in four German newspapers. I have compared comments, likes and shares and how contribution varies… Opinion is important for newspapers but not necessarily where the action is. And people don’t simply share stories in some areas less – in economics they like and comment, but they don’t share. So it is interesting to think about the social perception of shareability.

So, a graph here of Die Zeit shows articles published and articles shared on Facebook… You see a real change in 2014 to greater numbers (in both). I have also looked at types of articles and print vs. web versions.

So, some observations: niche social networks (e.g. Pinterest) are more relevant to news sharing than expected. Reliance on Facebook at Die Zeit grew suddenly in 2014. Social norms of liking, sharing and discussing differ significantly across news desks. Some sections (e.g. sports) see a mismatch of importance and use versus liking and sharing.

In the future we want to look at temporal shifts in social media feedback and newspaper coverage. Monitoring…


Q1) Have you accounted for the possibility of bots sharing content?

A1 – Marco) No, we haven’t. We are looking across the board, but we cannot account for that with the data we have.

Q2) How did you define or find out that an article was shared from the URLs?

A2) Tricky… We wrote a script for parsing shortened URLs to check that.

A2 – Cornelius) Read Marco’s excellent documentation.
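A rough sketch of what a URL-matching step like this involves – this is a hypothetical illustration, not the speakers’ actual script. Resolving shortened links (bit.ly, t.co) additionally requires following HTTP redirects over the network; the testable half shown here is normalising the resolved URLs so that variants shared on social media count as the same article:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalise(url: str) -> str:
    """Normalise an article URL so that the variants circulating on
    social media (tracking parameters, fragments, trailing slashes,
    mixed-case hosts) all map to one canonical key."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/")
    # Drop query strings (utm_* tracking etc.) and fragments entirely.
    return urlunsplit((parts.scheme, parts.netloc.lower(), path, "", ""))

# Two shared variants of the same (hypothetical) article collapse to one key:
a = canonicalise("https://www.Example.com/politics/story-123/?utm_source=facebook")
b = canonicalise("https://www.example.com/politics/story-123#comments")
```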

Q3) What do you make of how readers are engaging, what they like more, what they share more… and what influences that?

A3 – Cornelius) I think it is hard to judge. There are some indications, and have some idea of some functions that are marketed by the platforms being used in different ways… But wouldn’t want to speculate.

Twitter Friend Repertoires: Inferring sources of information management from digital traces – Jan-Hinrik Schmidt, Lisa Merten, Wiebke Loosen, Uwe Hasebrink, Katrin Weller

Our starting point was to think about shifting the focus of Twitter research. Many studies treat Twitter – explicitly or implicitly – as a broadcast medium, but we want to conceive of it as an information tool, via the concept of “Twitter friend repertoires” – using “friend” in the Twitter sense of someone I follow. We are looking for patterns in the composition of friend sets.

So we take a user, take their friends list, and compare it to a list of accounts identified previously. Our index has 7,528 Twitter accounts: media outlets (20.8%), organisations – political parties, companies, civil society organisations (53.4%) – and individuals – politicians, celebrities and journalists (25.8%) – all in Germany. We take our sample, compare it with a relational table, and then with our master index. If an account isn’t found in the master index, we can’t say anything about it yet.
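The matching step described here can be sketched as a simple lookup. This is a hypothetical illustration – the handles and the index mapping are invented, not the project’s real master index:

```python
from collections import Counter

# Hypothetical master index (invented handles): account -> category.
MASTER_INDEX = {
    "newspaper_a": "media",
    "newspaper_b": "media",
    "party_x": "organisation",
    "ngo_y": "organisation",
    "politician_p": "individual",
    "journalist_j": "individual",
}

def repertoire_shares(friends: list[str]) -> dict[str, float]:
    """Match a user's friend list against the master index and return
    the share of each category; accounts not in the index fall into
    'unknown' (about which nothing can be said yet)."""
    counts = Counter(MASTER_INDEX.get(f, "unknown") for f in friends)
    return {cat: n / len(friends) for cat, n in counts.items()}

shares = repertoire_shares(["newspaper_a", "party_x", "journalist_j", "someuser42"])
```

The per-user shares can then be aggregated across a sample (audience, MdB, BPK, random) to compare group-level repertoires.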

To demonstrate the answers we can find with this approach, we have looked at five different samples:

  • Audience_TS – sample following PSB TV News
  • Audience_SZ – sample following quality daily newspapers
  • MdB – members of federal parliament
  • BPK – political journalists registered for the Bundespressekonferenz
  • Random – random sample of German Twitter users (via Axel Bruns)

We can look at the friends here, and we can categorise the accounts. In our random sample 77.8% are not identifiable and 22.2% are in our index (around 13% are individual accounts). That is lower than the percentage of friends in our index for all the other samples – for MdB and BPK a high percentage of their friends are in our index. Across the groups there is less following of organisational accounts (in our index) – with the exception of the MdB and political parties. If we look at the media accounts, the two audience samples follow more media accounts than the others, including MdB and BPK… When it comes to individual public figures in our index, celebrities are prominent for audiences, much less so for MdB and BPK; MdB follow other politicians, and journalists tend to follow other journalists. And journalists do follow politicians, and politicians – to a lesser extent – follow journalists.

In terms of patterns of preference, we can suggest a model of a fictional user to understand preference between our three categories (organisational accounts, media accounts, individual accounts). We can use that profile example and compare it with our own data, to see how others’ behaviours fit the typology. So, in our random sample over a third (37.9%) didn’t follow any organisational accounts. Amongst MdB and BPK there is a real preference for individual accounts.

So, this is what we are measuring right now… I am still not quite happy with it. It is complex to explain, and hard to show the detail behind it… We have 20 categories in our master index but only three are shown here… Now, some frequently asked questions that I will ask and answer based on previous talks…

  1. Around 40% identified accounts is not very much, is it?
    Yes and no! We have increased this over time. But initially we did not include international accounts; if we did, we’d increase that share, especially with celebrities and international media outlets. However, there is always a trade-off, and there will always be a long tail… And we are interested in specific categorisations and in public speakers as sources on Twitter.
  2. What does friending mean on Twitter anyway?
    Good question! More qualitative research is needed to understand that – but there is some work on journalists (only). Maybe people friend people for information management reasons, reciprocity norms, public signal of connection, etc. And also how important are algorithmic recommendations in building your set of friends?


Q1 – me) I’m glad you raised the issue of recommendation algorithms – the celebrity issue you identified is something Twitter really pushes as a platform now. I was wondering, though, if you have been looking at how long the people you are looking at have been on Twitter – as behavioural norms change over time…

A1) It would be possible to collect that, but we don’t now. For journalists and politicians we do gather the list of friends each month to get a longitudinal idea of changes. Over a year, there haven’t been many changes yet…

Q2) Really interesting talk – could you go further with the repertoire? Could there be a discrepancy between the repertoire and its use in terms of retweeting, replying, etc.?

A2) We haven’t so far… We could see which types of tweets accounts are favouriting or retweeting – but we are not there yet.

Q3) A problem here…

A3) I am not completely happy to establish preference based on indexes… But not sure how else to do this, so maybe you can help me with it. 

Analysing digital traces: The epistemological dimension of algorithms and (big) internet data – Katharina Kinder-Kurlanda and Katrin Weller

Katharina: We are interested in the epistemological aspects of algorithms, and in how we research these. So, our research subjects are researchers themselves.

So we are seeing a real focus on algorithms in internet research, and we need to understand the (hidden) influence of algorithms on all kinds of research, including on researchers themselves. So we have researchers interested in algorithms… and in platforms, users and data… But all of these aspects are totally intertwined.

So let’s take a Twitter profile… A user of Twitter gets recommendations of whom to follow at a given moment in time, and they see newsfeeds at a given moment in time. That user has context that, as a researcher, I cannot see, so I cannot interpret the impact of that context on the user’s choice of e.g. who they then follow.

So, algorithms observe, count, sort and rank information on the basis of a variety of different data sources – they are highly heterogeneous and transient. Online data can be user-generated content or activity, traces or location data from various internet platforms. That promises new possibilities, but also raises significant challenges, including because of its heterogeneity.

Social media data has uncertain origins: uncertainty about users and their motivations, and often uncertain provenance of the data itself. The “users” that we see are not users but highly structured profiles, the result of careful image management. And we see renewed discussion of methods and epistemology, particularly within the social sciences – for instance, suggestions include “messiness” (Knupf 2014) and ? (Kitchen 2012).

So, what does this mean for algorithms? Algorithms operate on an uncertain basis and present real challenges for internet research. I’m going to now talk about work that Katrin and I did in a qualitative study of social media researchers (Kinder-Kurlanda and Weller 2014). We conducted interviews at conferences – highly varied – speaking to those working with data obtained from social media. There were 40 interviews in total and we focused on research data management.

We found that researchers found very individual ways to address epistemological challenges in order to realise the potential of this data for research. And there were three real concerns here: accessibility, methodology, research ethics.

  1. Data access and quality of research

Here there were challenges of data access, restrictions on the privacy of social media data, and technical skills; adjusting research questions due to data availability; and a struggle for data access that often consumes much effort. Researchers talked about difficulty in finding publication outlets, recognition, and jobs in the disciplinary “mainstream” – it is getting better, but it is a big issue. There was also comment on this being a computer-science-dominated field – with highly formalised review processes and few high-ranking conferences – which enforces highly strategic planning of resources and research topics. So researchers’ attempts to achieve validity and good research quality are constrained. This is really challenging for researchers.

  2. New methodologies for “big data”

Methodologies in this research often defy traditional ways of achieving research validity – through ensuring reproducibility or sharing of data sets (ethically not possible). There is a need to find patterns in large data sets by analysis of keywords, or by automated analysis. It is hard for others to understand the process and validate it. Data sets cannot be shared…

  3. Research ethics

There is a lack of users’ informed consent to studies based on online data (Hutton and Henderson 2015). There is ethical complexity. Data cannot really be anonymised…

So, how do algorithms influence our research data, and what does this mean for researchers who want to learn something about the users? Algorithms influence what content users interact with. For example: how do we study user networks without knowing the algorithms behind follower/friend suggestions? How do we study populations?

To get back to the question of observing algorithms… The problem is that various actors, in the most diverse situations, react out of different interests to the results of algorithmic calculations, and may even try to influence algorithms. You see that with tactics around trending hashtags as part of protest, for instance. And the results of algorithmic analyses are presented to internet users with little information on how algorithms take part.

In terms of next steps, researchers need to be aware that online environments are influenced by algorithms, and so are the users and the data they leave behind. It may mean capturing the “look and feel” of the platform as part of research.


Q1) One thing I wasn’t sure about… Is your sense when you were interviewing researchers that they were unaware of algorithmic shaping… Or was it about not being sure how to capture that?

A1) “Algorithms” wasn’t the terminology when we started our work… They talked about big data… The framing and terminology are shifting, so we are adding algorithms now… But we did find varying levels of understanding of platform function – some were very aware of platform dynamics, but some felt that if they have a Twitter dataset, that’s a representation of the real world.

Q1) I would think that this is about recognising how algorithms and platform function come in as an object… Presumably some working on interfaces were aware, but others looking at, e.g., friendship groups took the data and weren’t thinking about platform function – and that is something they should be thinking about…

A1) Yes.

Q2) What do you mean by the term “algorithm” now, and how is that term different from previously?

A2) I’m sure there is a messiness to this term. I do believe that looking at the programs themselves wouldn’t solve that problem. You have the algorithm in itself, gaining attention from researchers and industry… So you have programmers tweaking algorithms as part of different structures and pressures and contexts… But algorithms are part of a lot of people’s everyday practice… It makes sense to focus on those.

Q3) You started at the beginning with an illustration of the researcher in the middle, then moved on to the agency of the user… and the changes to the analytical capacities of working with this type of data… But how much awareness is there amongst researchers of how the data and the tools they work with are inscribed into the research?

A3) Thank you for making that distinction. The problem, in a way, is that we saw what we might expect – highly varied awareness… determined by disciplinary background – whether STS researchers in sociology, or computer scientists, say. We didn’t find too many disciplinary trends, though we looked across many disciplines… But there were huge ranges of approach and attitude here – our data was too broad.

Panel Q&A

Q1 – Cornelius) I think that we should say that if you are wondering about “feedback” here, it’s about thinking about metrics and how they then feed back into practice – whether there is a feedback loop… from very different perspectives… I would like to return to that – maybe next year, when the research has progressed. More qualitative understanding is needed. But a challenge is that stakeholder groups vary greatly… What if one finding doesn’t hold for other groups?

Q2) I am from the Wikimedia Foundation… I’m someone who does data analysis a lot. I am curious whether, in looking at these problems, you have looked at recommender systems research, which has been researching this space for 10 years – work on messy data and cleaning messy data… There are so many tiny differences that can really make a difference. I work on predictive algorithms, but that’s a new bit of turbulence in a turbulent sea… How much of this do you want to bring into this space?

A2 – Katrin) These communities have not come together yet. I know people who work in socio-technical studies who do study interface changes… There is another community that is aware this work exists, but is not engaging so closely… They see these as tiny bits of the same puzzle… And it can be harder to understand for historical data – getting an idea of what factors influence your data set. In our data sets we have some interviewees more like you, and some like people at sessions like this… There is some connection, but not all of those areas are coming together…

A2 – Cornelius) I think that there is a clash between computational social science data work and this stuff here… That predictive aspect screws with big claims about society… Maybe an awareness but not a keenness. There is older computer science research that we are not engaging with, but should be… But there is often a conflict of interests… I saw a presentation that showed changes to the interface changing behaviour… But companies don’t want to disclose that manipulation…

Comment) We’ve gone through a period – and I’m disheartened to see it is still there – where researchers are so excited to trace human activities that they treat hashtags as the political debate… This community helpfully problematises or contextualises this… But I think these papers raise the question of people orientating their practices towards the platform, and towards machine learning… I find it hard to talk about that… how behaviour feeds into machine learning… The system tips to behaviour, and the technology shifts and reacts to that, which is hard.

Q3) I wanted to agree with that idea of the need to document. But I want to push at your implicit position that this is messy and difficult and hard to measure… I think that applies to *any* method… Standards of data removal and messiness arise elsewhere too… Some of those issues apply across all kinds of research…

A3 – Cornelius) Christian would have had an example on his algorithm audit work that might have been helpful there.

Comment) I wanted to comment on social media research versus traditional social science research… We don’t have much power over our data set – that’s quite different in comparison with those running surveys or undertaking interviews, where I have control of the tool… And that argument isn’t just about survey analysis, but about other qualitative analysis too… Your research design can fit your purposes…


Twitter recommendation algorithms, celebrities and noise. Time on Twitter. Overall follower/following counts? Does friend suggestion influence?

Advertisers? And their role in shaping content in news

Friday, 07/Oct/2016:

4:00pm – 5:30pm

Location: HU 1.205, Humboldt University of Berlin, Dorotheenstr. 24, Building 1, second floor (80 seats)


Wiebke Loosen1, Marco T Bastos2, Cornelius Puschmann3, Uwe Hasebrink1, Sascha Hölig1, Lisa Merten1, Jan-Hinrik Schmidt1, Katharina E Kinder-Kurlanda4, Katrin Weller4

1Hans Bredow Institute for Media Research; 2University of California, Davis; 3Alexander von Humboldt Institute for Internet and Society; 4GESIS Leibniz Institute for the Social Sciences

Oct 06 2016

Today I am again at the Association of Internet Researchers (AoIR) 2016 conference in Berlin. Yesterday we had workshops; today the conference kicks off properly. Follow the tweets at: #aoir2016.

As usual this is a liveblog so all comments and corrections are very much welcomed. 

PA-02 Platform Studies: The Rules of Engagement (Chair: Jean Burgess, QUT)

How affordances arise through relations between platforms, their different types of users, and what they do to the technology – Taina Bucher (University of Copenhagen) and Anne Helmond (University of Amsterdam)

Taina: Hearts on Twitter: In 2015 Twitter moved from stars to hearts, changing the affordances of the platform. They stated that they wanted to make the platform more accessible to new users, but that impacted on existing users.

Today we are going to talk about conceptualising affordances. In its original meaning an affordance is conceived of as a relational property (Gibson). For Norman, perceived affordances were more the concern – thinking about how objects can exhibit or constrain particular actions. Affordances are not just visual clues or possibilities; they can be felt. Gaver talks about these technology affordances. There are also social affordances – talked about by many – mainly about how poor technological affordances impact societies. It is mainly about the impact of technology and how it can contain and constrain sociality. And finally we have communicative affordances (Hutchby): how technological affordances impact on communities and their communication practices.

So, what about platform changes? If we think about design affordances, we can see that there are different ways to understand this. The official reason for the design was given as about the audience, affording sociality of community and practices.

Affordances continues to play an important role in media and social media research. They tend to be conceptualised as either high-level or low-level affordances, with ontological and epistemological differences:

  • High: affordance in the relation – actions enabled or constrained
  • Low: affordance in the technical features of the user interface – reference to Gibson but they vary in where and when affordances are seen, and what features are supposed to enable or constrain.

Anne: We want to now turn to a platform-sensitive approach, expanding the notion of the user to different types of platform users: end-users, developers, researchers and advertisers – there is a real diversity of users, user needs and experiences here (see Gillespie on platforms). So, in the case of Twitter there are many users and many agendas – and multiple interfaces. Platforms are dynamic environments – and that differentiates social media platforms from Gibson’s environments. The computational systems driving media platforms are different: social media platforms adjust interfaces to their users through personalisation, A/B testing, and algorithmic organisation (e.g. Twitter recommending people to follow based on interests and actions).

In order to take a relational view of affordances, and do it justice, we also need to understand what users afford to the platforms – as they contribute, create content, and provide data that enables use, development and income (through advertisers) for the platform. Returning to Twitter… the platform affords different things for different people.

Taking medium-specificity of platforms into account we can revisit earlier conceptions of affordance and critically analyse how they may be employed or translated to platform environments. Platform users are diverse and multiple, and relationships are multidirectional, with users contributing back to the platform. And those different users have different agendas around affordances – and in our Twitter case study, for instance, that includes developers and advertisers, users who are interested in affordances to measure user engagement.

How the social media APIs that scholars so often use for research are—for commercial reasons—skewed positively toward ‘connection’ and thus make it difficult to understand practices of ‘disconnection’ – Nicholas John (Hebrew University of Jerusalem) and Asaf Nissenbaum (Hebrew University of Jerusalem)

Consider this… On Facebook, if you add someone as a friend they are notified; if you unfriend them, they are not. If you post something you see it in your feed; if you delete it, the deletion is not broadcast. They have a page called World of Friends – they don’t have one called World of Enemies. And Facebook does not take kindly to app creators who seek to surface unfriending and removal of content. Facebook is, like other social media platforms, therefore significantly biased towards positive friending and sharing actions. And that has implications for norms and for our research in these spaces.

One of our key questions here is: what can’t we know about…

Agnotology is defined as the study of ignorance. Robert Proctor talks about this in three terms: native state – childhood for instance; strategic ploy – e.g. the tobacco industry on health for years; lost realm – the knowledge that we cease to hold, that we lose.

I won’t go into detail on critiques of APIs for social science research, but as an overview the main critiques are:

  1. APIs are restrictive – they can cost money, we are limited to a percentage of the whole – Burgess and Bruns 2015; Bucher 2013; Bruns 2013; Driscoll and Walker
  2. APIs are opaque
  3. APIs can change with little notice (and do)
  4. Omitted data – Baym 2013 – now our point is that these platforms collect this data but do not share it.
  5. Bias to present – boyd and Crawford 2012

Asaf: Our methodology was to look at some of the most popular social media spaces and their APIs. We were looking at connectivity in these spaces – liking, sharing, etc. And we also looked for the opposite traits – unliking, deletion, etc. We found that social media had very little data, if any, on “negative” traits – and we’ll look at this across three areas: other people and their content; me and my content; commercial users and their crowds.

Other people and their content – APIs tend to supply basic connectivity – friends/following, grouping, likes. Almost no historical content – except Facebook which shares when a user has liked a page. Current state only – disconnections are not accounted for. There is a reason to not know this data – privacy concerns perhaps – but that doesn’t explain my not being able to find this sort of information about my own profile.

Me and my content – negative traits and actions are hidden even from ourselves. Success is measured – likes and sharing, of you or by you. Decline is not – disconnections are lost connections… except on Twitter, where you can see analytics of followers – but no names there, and not in the API. So we are losing who we once were but are not anymore. Social network sites do not see fit to share information over time… Lacking disconnection data is an ideological and commercial issue.
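Because the APIs expose only the current state, researchers who want disconnection data typically have to take periodic snapshots themselves and diff them. A minimal sketch of that workaround (the data is invented; the function name is hypothetical, not part of any platform API):

```python
def diff_snapshots(before: set[str], after: set[str]) -> dict[str, set[str]]:
    """Infer connection and disconnection events by comparing two
    follower-list snapshots taken at different times. The API only
    reports the current follower set, so 'lost' must be derived."""
    return {
        "gained": after - before,
        "lost": before - after,  # the disconnections the API never reports
    }

# Monthly snapshots of one account's followers:
jan = {"alice", "bob", "carol"}
feb = {"alice", "carol", "dave"}
changes = diff_snapshots(jan, feb)
```

The obvious limitation is temporal resolution: an unfollow-then-refollow between snapshots is invisible, which is exactly the kind of gap the speakers describe.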

Commercial users and their crowds – these users can see much more of their histories, including negative actions online. They have a different regime of access in many cases, with the ups and downs revealed – though you may need to pay for access. Negative feedback receives special attention. Facebook offers the most detailed information on usage – including blocking and unliking information. Customers know more than users – compare Pages vs. Groups.

Nicholas: So, implications. What Asaf has shared shows the risk for API-based research, where researchers’ work may be shaped by the affordances of the API being used. Any attempt to capture negative actions – unlikes, choices to leave or unfriend – runs into these limits. If we can’t use APIs to measure social media phenomena, we have to use other means. So, unfriending is understood through surveys – time-consuming and problematic. And that can put you off exploring these spaces – it limits research. The advertiser-friendly user experience distorts the space – it’s like the stock market only reporting the rises, except for a few super-wealthy users who get the full picture.

A biography of Twitter (a story told through the intertwined stories of its key features and the social norms that give them meaning, drawing on archival material and oral history interviews with users) – Jean Burgess (Queensland University of Technology) and Nancy Baym (Microsoft Research)

I want to start by talking about what I mean by platforms, and what I mean by biographies. Here platforms are these social media platforms that afford particular possibilities; they enable and shape society – we heard about the platformisation of society last night – but their governance and affordances are shaped by their own economic existence. They are shaping and mediating socio-cultural experience, and we need to better understand the values and socio-cultural concerns of the platforms. By platform studies we mean treating social media platforms as spaces to study in their own right: as institutions, as mediating forces in the environment.

So, why “biography” here? First we argue that whilst biographical forms tend to be reserved for individuals (occasionally companies and race horses), they are about putting the subject in context of relationships, place in time, and that the context shapes the subject. Biographies are always partial though – based on unreliable interviews and information, they quickly go out of date, and just as we cannot get inside the heads of those who are subjects of biographies, we cannot get inside many of the companies at the heart of social media platforms. But (after Richard Rogers) understanding changes helps us to understand the platform.

So, in our forthcoming book, Twitter: A Biography (NYU 2017), we will look at competing and converging desires around e.g. the @, RT, #. Twitter’s key features are key characters in its biography. Each has been a rich site of competing cultures and norms. We drew extensively on the Internet Archive, bloggers, and interviews with a range of users of the platform.

Nancy: When we interviewed people we downloaded their archive with them and talked through their behaviour and how it had changed – and many of those features and changes emerged from that. What came out strongly is that no one knows what Twitter is for – not just amongst users but also amongst the creators – you see that today with Jack Dorsey and Anne Richards. At the heart of this issue is whether Twitter is about sociality and fun, or a very important site for sharing important news and events. Users try to negotiate why they need this space, what it is for… They start squabbling, saying “Twitter, you are doing it wrong!”… Changes come with backlash and response, and changed decisions from Twitter… And that is accompanied by the media coverage of Twitter, and by the third-party platforms built on Twitter.

So the “@” is at the heart of Twitter for sociality and Twitter for information distribution. It was imported from other spaces – IRC most obviously – as with other features. One of the earliest things Twitter incorporated was the @ and the links back… Originally you could see everyone’s @ replies, and that led to feed clutter – although some liked seeing unexpected messages like this. So, Twitter made a change so you could choose. And then they changed again so that you automatically do not see replies from those you don’t follow. So people worked around that with “.@” – which created conflict between the needs of the users, the ways they make the platform usable, and the way the platform wants to make the space less confusing to new users.

The “RT” gave credit to people for their words, and preserved integrity of words. At first this wasn’t there and so you had huge variance – the RT, the manually spelled out retweet, the hat tip (HT). Technical changes were made, then you saw the number of retweets emerging as a measure of success and changing cultures and practices.

The “#” is hugely disputed – it emerged through hashtag.org: you couldn’t follow them in Twitter at first but they incorporated it to fend off third party tools. They are beloved by techies, and hated by user experience designers. And they are useful but they are also easily coopted by trolls – as we’ve seen on our own hashtag.

Insights into the actual uses to which audience data analytics are put by content creators in the new screen ecology (and the limitations of these analytics) – Stuart Cunningham (QUT) and David Craig (USC Annenberg School for Communication and Journalism)

The algorithmic culture is well understood as a part of our culture. There are around 150 items on Tarleton Gillespie and Nick Seaver’s recent reading list and the literature is growing rapidly. We want to bring back a bounded sense of agency in the context of online creatives.

What do I mean by “online creatives”? Well we are looking at social media entertainment – a “new screen ecology” (Cunningham and Silver 2013; 2015) shaped by new online creatives who are professionalising and monetising on platforms like YouTube, as opposed to professional spaces, e.g. Netflix. YouTube has more than 1 billion users, with revenue in 2015 estimated at $4 billion per year. And there are a large number of online creatives earning significant incomes from their content in these spaces.

Previously online creatives were bound up with ideas of democratic participative cultures, but we want to offer an immanent critique of the limits of data analytics/algorithmic culture in shaping SME from within the industry, on both the creator (bottom up) and platform (top down) sides. This approach to social criticism exposes the way reality conflicts not with some “transcendent” concept of rationality but with its own avowed norms, drawing on Foucault’s work on power and domination.

We undertook a large number of interviews and from that I’m going to throw some quotes at you… There is talk of information overload – of what one might do as an online creative presented with a wealth of data. Creatives talk about the “non-scalable practices” – the importance and time required to engage with fans and subscribers. Creatives talk about at least half of a working week being spent on high touch work like responding to comments, managing trolls, and dealing with challenging responses (especially with creators whose kids are engaged in their content).

We also see cross-platform engagement – and an associated major scaling in workload. There is a volume issue on Facebook, and the use of Twitter to manage that. There is also a sense of unintended consequences – scale has destroyed value. Income might be $1 or $2 for 100,000s or millions of views. There are inherent limits to algorithmic culture… But people enjoy being part of it and reflect a real entrepreneurial culture.
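The “scale has destroyed value” point is easy to make concrete with back-of-envelope arithmetic. The RPM figure below is an assumption for illustration, not a number from the talk:

```python
def creator_revenue(views: int, rpm: float) -> float:
    """Creator revenue for a given view count and RPM
    (revenue per 1,000 monetised views)."""
    return views / 1000 * rpm

# At an assumed RPM of $0.01, 100,000 views earn just $1 –
# consistent with the $1-$2 figures quoted in the talk.
earnings = creator_revenue(100_000, 0.01)
```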

In one or two sentences: the history of YouTube can be seen as a sort of clash of NorCal and SoCal cultures. Again, no one knows what it is for. And that conflict has been there for ten years. And you also have the MCNs (multi-channel networks), who are caught like the meat in the sandwich here.

Panel Q&A

Q1) I was wondering about user needs and how that factors in. You all drew upon it to an extent… And the dissatisfaction of users around whether needs are listened to or not was evident in some of the case studies here. I wanted to ask about that.

A1 – Nancy) There are lots of users, and users have different needs. When platforms change and users are angry, others are happy. We have different users with very different needs… Both of those perspectives are user needs, they both call for responses to make their needs possible… The conflict and challenges, how platforms respond to those tensions and how efforts to respond raise new tensions… that’s really at the heart here.

A1 – Jean) In our historical work we’ve also seen that some users voices can really overpower others – there are influential users and they sometimes drown out other voices, and I don’t want to stereotype here but often technical voices drown out those more concerned with relationships and intimacy.

Q2) You talked about platforms and how they developed (and I’m afraid I didn’t catch the rest of this question…)

A2 – David) There are multilateral conflicts about what features to include and exclude… And what is interesting is thinking about what ideas fail… With creators you see economic dependence on platforms and affordances – e.g. versus PGC (Professionally Generated Content).

A2 – Nicholas) I don’t know what user needs are in a broader sense, but everyone wants to know who unfriended them, who deleted them… And a dislike button, or an unlike button… The response was strong but “this post makes me sad” doesn’t answer that and there is no “you bastard for posting that!” button.

Q3) Would it be beneficial to expose unfriending/negative traits?

A3 – Nicholas) I can think of a use case for why unfriending data would be useful – for instance, wouldn’t it be useful to understand unfriending around the US elections? That data is captured – Facebook knows – but we cannot access it to research it.

A3 – Stuart) It might be good for researchers, but is it in the public good? In Europe and with the Right to be Forgotten should we limit further the data availability…

A3 – Nancy) I think the challenge is that mismatch of only sharing good things, not sharing and allowing exploration of negative contact and activity.

A3 – Jean) There are business reasons for positivity versus negativity, but it is also about how the platforms imagine their customers and audiences.

Q4) I was intrigued by the idea of the “medium specificity of platforms” – what would that be? I’ve been thinking about devices and interfaces and how they are accessed… We think of ourselves as having a range of options, but actually we are used to using really one or two platforms – e.g. the Apple iPhone – in terms of design, icons, etc., what the possibilities of the interface are, and what happens when something is made impossible by the interface.

A4 – Anne) When we talk about “medium specificity” we are talking about the platform itself as medium, moving beyond the end user and user experience. We wanted to take into account the role of the user – the platform also has interfaces for developers, for advertisers, etc. and we wanted to think about those multiple interfaces, where they connect, how they connect, etc.

A4 – Taina) It’s a great point about medium specificity, but for me it’s more about platform specificity.

A4 – Jean) The integration of mobile web means the phone iOS has a major role here…

A4 – Nancy) We did some work with couples who brought in their phones, and when one had an Apple and one had an Android phone we actually found that they often weren’t aware of what was possible in the social media apps as the interfaces are so different between the different mobile operating systems and interfaces.

Q5) Can you talk about algorithmic content and content innovation?

A5 – David) In our work with YouTube we see forms of innovation that are very platform specific around things like Vine and Instagram. And we also see counter-industrial forms and practices. So, in the US, we see blogging and first person accounts of lives… beauty, unboxing, etc. But if you map content innovation you see (similarly) this taking the form of gaps in mainstream culture – in India that’s stand up comedy for instance. Algorithms are then looking for qualities and connections based on what else is being accessed – creating a virtuous circle…

Q6) Can we think of platforms as unstable, as having not quite such a uniform sense of purpose and direction…

A6 – Stuart) Most platforms are very big in terms of their finance… If you compare that to 20 years ago the big companies knew what they were doing! Things are much more volatile…

A6 – Jean) That’s very common in the sector, except maybe on Facebook… Maybe.

PA-05: Identities (Chair: Tero Jukka Karppi)

The Bot Affair: Ashley Madison and Algorithmic Identities as Cultural Techniques – Tero Karppi, University at Buffalo, USA

As of 2012 Ashley Madison is the biggest online dating site targeted at those already in a committed relationship. Users are asked to share their gender, their sexuality, and to share images. Some aspects are free but message and image exchange are limited to paid accounts.

The site was hacked in 2015, with site user data stolen and then shared. Security experts who analysed the data assessed it as real, associated with real payment details etc. The hackers’ intention was to expose cheaters, but my paper is focused on a different aspect of the aftermath. Analysis showed 43 male bots and 70k female bots, and that is the focus of my paper. And I want to think about this space and connectivity by removing the human user from the equation.

The method for me was about thinking about the distinction between human and non-human user, the individual and the bot. Drawing on German media theory, I wanted to use cultural techniques – with materials, symbolic values, rules and places. So I am seeking elements of difference across different materials in the context of the hack and the aftermath.

So, looking at a news item: “Ashley Madison, the dating website for cheaters, has admitted that some women on its site were virtual computer programmes instead of real women” (CNN Money), which goes on to say that users thought that they were cheating, but they weren’t after all! These bots interacted with users in a variety of ways, from “winking” to messaging, etc. The role of the bot is to engage users in the platform and transform them into paying customers. A blogger talked about the space as all fake – the men are cheaters, the women are bots and only the credit card payments are real!

The fact that the bots are so gender imbalanced tells us the difference in how the platform imagines male and female users. In another commentary they comment on the ways in which fake accounts drew men in – both by implying real women were on the site, and by using real images on fake accounts… The lines between what is real and what is fake have been blurred. Commentators noted the opaqueness of connectivity here, and of the role of the bots. Who knows how many of the 4 million users were real?

The bots are designed to engage users, to appear as human to the extent that we understand human appearance. Santine Olympo talked about bots whilst others looked at algorithmic spaces and what can be imagined and created from our wants and needs. According to Ashley Madison employees the bots – or “angels” – were created to match the needs of users, recycling old images from real user accounts. This case brings together the “angel” and human users. A quote from a commentator imagines this as a science fiction fantasy where real women are replaced by perfectly interested bots. We want authenticity in social media sites, but bots are part of our mundane everyday existence and part of these spaces.

I want to finish by quoting from Ashley Madison’s terms and conditions, in which users agree that “some of the accounts and users you may encounter on the site may be fiction”.

Facebook algorithm ruins friendship – Taina Bucher, University of Copenhagen

“Rachel”, a Facebook user/informant, states this in a tweet. She has a Facebook account that she doesn’t use much. She posts something and old school friends she has forgotten comment on it. She feels out of control… And what I want to focus on today are the ordinary affects of algorithmic life, taking that idea from ?’s work and Kathleen Stewart’s approach, and using it in the context of understanding the encounters between people and algorithmic processes. I want to think about the encounter and how the encounter itself becomes generative.

I think that the fetish could be one place to start in knowing algorithms… And how people become attuned to them. We don’t want to treat algorithms as a fetish. The fetishist doesn’t care about the object, just about how the object makes them feel. And so the algorithm as fetish can be a mood maker, using the “power of engagement”. The power does not reside in the algorithm, but in the types of ways people imagine the algorithm to exist and impact upon them.

So, I have undertaken a study of people’s personal algorithm stories about the Facebook algorithm, monitoring and querying Twitter for comments and stories (through keywords) relating to Facebook algorithms. And a total of 25 interviews were undertaken via email, chat and Skype.

So, when Rachel tweeted about Facebook and friendship, that gave me the starting point to understand stories and the context for these positions through interviews. And what repeatedly arose was the uncanny nature of Facebook algorithms. Take, for instance, Michael, a musician in LA. He shares a post and usually the likes come in rapidly, but this time nothing… He tweets that the algorithm is “super frustrating” and he believes that Facebook only shows paid-for posts. Like others he has developed his own strategy to get posts shown more prominently. He says:

“If the status doesn’t build buzz (likes, comments, shares) within the first 10 minutes or so it immediately starts moving down the news feed and eventually gets lost.”

Adapting behaviour to social media platforms and their operation can be seen as a form of “optimisation”. Users aren’t just updating their profile or hoping to be seen; they are trying to change behaviours to be better seen by the algorithm. And this takes us to the algorithmic imaginary: the ways of thinking about what algorithms are, what they should be, how they function, and what these imaginations in turn make possible. Many of our participants talked about changing behaviours for the platform. When Rachel talks about “clicking every day to change what will show up on her feed”, that is not only her using the platform, but thinking and behaving differently in the space. Adverts can also suggest algorithmic intervention and, no matter whether the user is actually profiled or not (e.g. for anti-wrinkle cream), users can feel profiled regardless.

So, people do things to algorithms – disrupting liking practices, commenting more frequently to increase visibility, emphasising positively charged words, etc. These are not just interpreted by the algorithm but also shape that algorithm. Critiquing the algorithm is not enough; people are also part of the algorithm and impact upon its function.

Algorithmic identity – Michael Stevenson, University of Groningen, Netherlands

Michael is starting with a poster of Blade Runner… Algorithmic identity brings to mind cyberpunk and science fiction. But day to day, algorithmic identity is often about ads for houses, credit scores… And I’m interested in this clash of technological cool versus the mundane instruments of capitalism.

For critics the “cool” is seen as an ideological cover for the underlying political economy. We can look at the rhetoric around technology – “rupture talk”, digital utopianism as that covering of business models etc. Evgeny Morozov writes entertainingly of this issue. I think this critique is useful but I also think that it can be too easy… We’ve seen Morozov tear into Jeff Jarvis and Tim O’Reilly, describing the latter as a spin doctor for Silicon Valley. I think that’s too easy…

My response is this… an image of Christopher Walken saying “needs more Bourdieu”. I think we need to take seriously the values and cultures and the effort it takes to create those. Bourdieu talks about the new media field with areas of “web native”, open, participatory, transparent at one end of the spectrum – the “autonomous pole”; and the “heteronomous pole” of mass/traditional media, closed, controlled, opaque. The idea is that actors locate themselves between these poles… There is also competition to be seen as the most open, the most participatory – you may remember a post from a few years back on Google’s idea of open versus that of Facebook. Bourdieu talks of the autonomous pole as downplaying income and economic value, whereas the heteronomous pole is much more directly about that…

So, I am looking at “Everything” – a site designed in the 1990s. It was built by the guys behind Slashdot. It was intended as a compendium of knowledge to support that site and accompany it – items of common interest, background knowledge that wasn’t news. If we look at the site we see implicit and explicit forms of impact… voting forms on articles (e.g. “I like this write up”), and soft links at the bottom of the page – generated by these types of feedback and engagement. This was the first version in the 1990s. Then in 1999 Nathan Oostendorp(?) developed Everything2, built with the Everything Development Engine. This is still online. Here the techniques of algorithmic identity and datafication of users are very explicitly presented – very much unlike Facebook. Among the geeks here the technology is put on top, showing reputation on the site. And being open source, if you wanted to understand the recommendation engine you could just look it up.
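[As a liveblogger’s aside: the “soft links” mechanic described above – links generated from reader navigation rather than editorial choice – can be sketched in a few lines. Everything2’s real engine is open source and works differently in detail; everything below, including the node names, is purely illustrative.]

```python
# Simplified sketch of "soft links": each time a reader navigates from one
# node to another, the pair is counted, and a node's footer then shows the
# nodes most often co-visited with it. Not Everything2's actual code.
from collections import defaultdict

link_counts = defaultdict(lambda: defaultdict(int))

def record_visit(from_node, to_node):
    """A navigation between two nodes strengthens the soft link both ways."""
    if from_node != to_node:
        link_counts[from_node][to_node] += 1
        link_counts[to_node][from_node] += 1

def soft_links(node, n=3):
    """The top-n nodes most often visited together with this one."""
    neighbours = link_counts[node]
    return sorted(neighbours, key=neighbours.get, reverse=True)[:n]

# A few hypothetical navigations...
for pair in [("perl", "regex"), ("perl", "regex"),
             ("perl", "slashdot"), ("regex", "grep")]:
    record_visit(*pair)

print(soft_links("perl"))  # "regex" (2 co-visits) ranks above "slashdot" (1)
```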

If we think of algorithms as talk makers, and we look back at 1999’s Everything2, you see the tracking and datafication in place, but the statement around it talks about web 2.0/social media type ideas of democracy and meritocracy, conflations of cultural values and social actions with technologies and techniques. Aspects of this are bottom up, and it also talks about the role of cookies, and the addressing of privacy. And it directly says “the more you participate, the greater the opportunity for you to mold it your way”.

Thinking about Field Theory we can see some symbolic exclusion – of Microsoft, of large organisations – as a way to position Everything2 within the field. This continues throughout the documentation across the site. And within this field “making money is not a sin” – that developers want to do cool stuff, but that can sit alongside making money.

So, I don’t want to suggest this is a utopian space… Everything2 had a business model, but this was of its time for open source software. The idea was to demonstrate capabilities of the development framework, to get them to use it, and to then get them to pay for services… But this was 2001 and the bubble burst… So the developers turned to “real jobs”. But Everything2 is still out there… And you can play with the first version on an archived version if you are curious!

The Algorithmic Listener – Robert Prey, University of Groningen, Netherlands

This is a version of a paper I am working on – feedback appreciated. It was sparked by re-reading Raymond Williams, who wrote that “there are in fact no masses, but only ways of seeing people as masses” (1958/2011). But I think that in the current environment Williams might now say “there are in fact no individuals, but only ways of seeing people as individuals”. And for me, I’m looking at this through the lens of music platforms.

In an increasingly crowded and competitive sector, platforms like Spotify, SoundCloud, Apple Music, Deezer, Pandora and Tidal are increasingly trying to differentiate themselves through recommendation engines. And I’ll go on to talk about recommendations as individualisation.

Pandora internet radio calls its technology the “Music Genome Project” and sees music as genes. It seeks to provide recommendations that are outside the distorting impact of cultural information; e.g. you might like “The Colour of My Love” but be put off by the fact that Celine Dion is not cool. They market themselves against the crowd. They play on the individual as the part separated from the whole. However…

Many of you will be familiar with Spotify, and will therefore be familiar with Discover Weekly. The core of Spotify is the “taste profile”. Every interaction you have is captured and recorded in real time – selected artists, songs, behaviours, what you listen to and for how long, what you skip. Discover Weekly uses both the taste profile and aspects of collaborative filtering – selecting songs you haven’t discovered that fit your taste profile. So whilst it builds a unique identity for each user, it also relies heavily on other people’s taste. Pandora treats other people as distortion; Spotify sees them as more information. Discover Weekly also understands the user based on current and previous behaviours. Ajay Kalia (Spotify) says:

“We believe that it’s important to recognise that a single music listener is usually many listeners… [A] person’s preference will vary by the type of music, by their current activity, by the time of day, and so on. Our goal then is to come up with the right recommendation…”
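[A liveblogger’s aside: Spotify’s actual pipeline is not public, but the “taste profile plus collaborative filtering” combination Robert describes can be sketched in miniature. All of the data, names and weights below are hypothetical – this is user-based collaborative filtering in its simplest textbook form, not Spotify’s implementation.]

```python
# Minimal user-based collaborative filtering: treat each user's play counts
# as a "taste profile", find similar users by cosine similarity, and score
# unheard tracks by similarity-weighted play counts of those other users.
from math import sqrt

# Hypothetical taste profiles: play counts per user per track.
profiles = {
    "ana":  {"track_a": 12, "track_b": 3, "track_c": 9},
    "ben":  {"track_a": 10, "track_c": 8, "track_d": 7},
    "cara": {"track_b": 5, "track_d": 2, "track_e": 11},
}

def cosine(u, v):
    """Cosine similarity between two sparse play-count vectors."""
    shared = set(u) & set(v)
    dot = sum(u[t] * v[t] for t in shared)
    if not dot:
        return 0.0
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user, profiles, n=2):
    """Rank tracks the user hasn't heard, weighted by similar users' plays."""
    own = profiles[user]
    scores = {}
    for other, plays in profiles.items():
        if other == user:
            continue
        sim = cosine(own, plays)
        for track, count in plays.items():
            if track not in own:
                scores[track] = scores.get(track, 0.0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(recommend("ana", profiles))  # ana's top picks come from ben and cara
```

The point of the sketch is the one Robert makes: the recommendation is built *from other people’s listening*, so “your” Discover Weekly identity is partly an aggregate of strangers.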

This treats identity as being in context, as being the sum of our contexts. Previously fixed categories, like gender, are not assigned at the beginning but emerge from behaviours and data. Pagano talks about this, whilst Cheney-Lippold (2011) talks about a “cybernetic relationship to the individual” and the idea of individuation (Simondon). For Simondon we are not individuals; individuals are an effect of individuation, not the cause. A focus on individuation transforms our relationship to recommendation systems… We shouldn’t be asking if they understand who we are, but the extent to which the person is an effect of personalisation. Personalisation is presented as being about you and your needs. From a Simondonian perspective there is no “you” or “want” outside of technology. In taking this perspective we have to acknowledge the political economy of music streaming systems…

And the reality is that streaming services are increasingly important to industry and advertisers, particularly as many users use the free variants. And a developer of Pandora talks about the importance of understanding profiles for advertisers. Pandora boasts that it has 700 audience segments to date: “Whether you want to reach fitness-driven moms in Atlanta or mobile Gen X-ers…”. The Echo Nest, now owned by Spotify, had created highly detailed consumer profiling before it was bought. That idea isn’t new, but the detail is. The range of segments here is highly granular… And this brings us to the point that we need to take seriously what Nick Seaver (2015) says: we need to think of “contextualisation as a practice in its own right”.

This matters as the categories that emerge online have profound impacts on how we discover and encounter our world.

Panel Q&A

Q1) I think it’s about music category but also has wider relevance… I had an introduction to the NLP process of Topic Modelling – where you label categories after the factor… The machine sorts without those labels and takes it from the data. Do you have a sense of whether the categorisation is top down, or is it emerging from the data? And if there is similar top down or bottom up categorisation in the other presentations, that would be interesting.

A1 – Robert) I think that’s an interesting question. Many segments are impacted by advertisers, and by identifying groups they want to reach… But they may also…

Michael) You talked about the Ashley Madison bots – did they have categorisation, A/B testing, etc. to find successful bots?

Tero) I don’t know, but I think it’s worth looking at machine learning and machine learning history…

Michael) The idea of content filtering from the bottom to the top was part of the thinking behind Everything…

Q2) I wanted to ask about the feedback loop between the platforms and the users, who are implicated here, in formation of categories and shaping platforms.

A2 – Taina) Not so much in the work I showed, but I have had some in-depth Skype interviews with school children, and they all had awareness of some of these (Facebook algorithm) issues, press coverage and particularly the review-of-the-year type videos… People pick up on this, and the power of the algorithm. One of the participants has emailed me since the study noting how much she sees writing about the algorithm, and about algorithms in other spaces. Awareness of algorithms shaping spaces is growing; it is more prominent than it was.

Q3) I wanted to ask Michael about that idea of positioning Everything2 in relation to other sites… And also the idea of the individual being transformed by platforms like Spotify…

A3 – Michael) I guess the Bourdieusian vision is that anyone who wants to position themselves on the spectrum can. With Everything you had this moment during the Internet Bubble, a form of utopianism… You see it come together somewhat… And the gap between Wired – traditional mass media – and smaller players, but then also a coming together around shared interests and common enemies.

A3 – Robert) There were segments that did come from media, from radio and from advertisers, and that’s where the idea of genre came in… That has real effects… When I was at high school there were common groups around particular genres… But right now the move to streaming and online music means there is far more mixed listening and people self-organise in different ways. There has been debunking of Bourdieu, but his work was from a really different time.

Q4) I wanted to ask about interactions between humans and non-human. Taina, did people feel positive impacts of understanding Facebook algorithms… Or did you see frustrations with the Twitter algorithms. And Tero, I was wondering how those bots had been shaped by humans.

A4 – Taina) On the human and non-human, and whether people felt more or less frustrated by understanding the algorithm: even if they felt they knew, it changes all the time; their strategies might help but then become obsolete… And practices of concealment and misinformation were tactics here. But just knowing what is taking place, and trying to figure it out, is something that I get a sense is helpful… But maybe that isn’t the right answer to it. And that notion of a human and a non-human is interesting, particularly for when we see something as human, and when we see things as non-human. In terms of some of the controversies… when is an algorithm blamed versus a human? Well, there is no necessary link/consistency there… So when do we assign humanness and non-humanness to the system, and does it make a difference?

A4 – Tero) I think that’s a really interesting question… Looking at social media now from this perspective helps us to understand that, and the idea of how we understand what is human and what is non-human agency… and what it is to be a human.

Q5) I’m afraid I couldn’t hear this question.

A5 – Robert) Spotify supports what Deleuze wrote about in terms of the individual and how aspects of our personality are highlighted at the points that are convenient. And how does that effect help us regulate? Maybe the individual isn’t the most appropriate unit any more.

A5 – Taina) For users, the exposure that they are being manipulated or can be summed up by the algorithm is what can upset or disconcert them… They don’t like to feel summed up by that…

Q6) I really like the idea of the imagined… And perceptions of non-human actors… In the Ashley Madison case we assume that men thought bots were real… But maybe not everyone did that. I think that moment of how and when people imagine and ascribe human or non-human status here. In one way we aren’t concerned by the imaginary… And in another way we might need to consider different imaginaries – the imaginary of the platform creators vs. users for instance.

A6 – Tero) Right now I’m thinking about two imaginaries here… Ashley Madison’s imaginary around the bots, and the users encountering them and how they imagine those bots…

A6 – Taina) A good question… How many imaginaries do you think?! It is about understanding more who you encounter, who you engage with. Imaginaries are tied to how people conceive of their practice in their context, which varies widely, in terms of practices and what you might post…

And with that session finished – and much to think about in terms of algorithmic roles in identity – it’s off to lunch… 

PS-09: Privacy (Chair: Michael Zimmer)

Unconnected: How Privacy Concerns Impact Internet Adoption – Eszter Hargittai, Ashley Walker, University of Zurich

The literature in this area seems to target the usual suspects – age, socio-economic status… But the literature does not tend to talk about privacy. I think one of the reasons may be the idea that you can’t compare users and non-users of the internet on privacy. But we have located a data set that does address this issue.

The U.S. Federal Communications Commission ran a National Consumer Broadband Service Capability Survey in 2009 – when about 24% of Americans were still not yet online. This work is some years old, but our interest is in the comparison rather than the numbers/percentages. And it questioned both internet users and non-users.

One of the questions was: “It is too easy for my personal information to be stolen online”, and participants were asked if they strongly agreed, somewhat agreed, somewhat disagreed, or disagreed. We dichotomised that – strongly agreed or not. And analysing that we found that among internet users 63.3% strongly agreed, versus 81% of non-internet users. Now we did analyse this demographically… It is what you expect generally – more older people are not online (though interestingly more female respondents are online). But even then the internet non-users again more strongly agreed on that privacy/concern question.
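[Liveblogger’s note: the analysis described is a simple comparison of two proportions after dichotomising a Likert item. For readers who want to see the mechanics, here is a sketch of that comparison as a two-proportion z-test. The cell counts below are purely illustrative – the speakers didn’t give the survey’s actual group sizes, only the percentages.]

```python
# Dichotomise a Likert item ("strongly agree" vs anything else) and compare
# the proportion across two groups with a two-sided two-proportion z-test.
from math import sqrt, erf

def two_proportion_z(agree_a, n_a, agree_b, n_b):
    """Return (z statistic, two-sided p-value) for the difference in
    proportions agree_a/n_a vs agree_b/n_b, using the pooled estimate."""
    p_a, p_b = agree_a / n_a, agree_b / n_b
    pooled = (agree_a + agree_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative counts: 63.3% of 3000 users vs 81% of 1000 non-users
# strongly agreeing that personal information is too easy to steal.
z, p = two_proportion_z(1899, 3000, 810, 1000)
print(round(z, 2), p < 0.001)
```

With group sizes anywhere near these, a gap of 63.3% vs 81% is far too large to be sampling noise.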

So, what does that mean? Well, getting people online requires addressing their concerns about privacy. There is also a methodological takeaway – there is value in asking non-users internet-related questions, as they may explain their reasons for staying offline.


Q1) Was it asked whether they had previously been online?

A1) There is data on drop outs, but I don’t know if that was captured here.

Q2) Is there a differentiation in how internet use is done – frequently or not?

A2) No, I think it was use or non-use. But we have a paper coming out on those with disabilities and detailed questions on internet skills and other factors – that is a strength of the dataset.

Q3) Are there security or privacy questions in the dataset?

A3) I don’t think there are, or we would have used them. It’s a big national dataset… There is a lot on type of internet connection and quality of access in there, if that is of interest.

Note, there is more on some of the issues around access, motivations and skills in the Royal Society of Edinburgh Spreading the Benefits of Digital Participation in Scotland Inquiry report (Fourman et al 2014). I was a member of this inquiry so if anyone at AoIR2016 is interested in finding out more, let me know. 

Enhancing online privacy at the user level: the role of internet skills and policy implications – Moritz Büchi, Natascha Just, Michael Latzer, U of Zurich, Switzerland

Natascha: This presentation is connected with a paper we just published and where you can read more if you are interested.

So, why do we care about privacy protection? Well there is increased interest in/availability of personal data. We see big data as a new asset class, we see new methods of value extraction, we see growth potential of data-driven management, and we see platformisation of internet-based markets. Users have to continually balance the benefits with the risks of disclosure. And we see issues of online privacy and digital inequality – those with fewer digital skills are more vulnerable to privacy risks.

We see governance becoming increasingly important, and there is an issue of understanding appropriate measures. Market solutions via industry self-regulation are problematic because of a lack of incentives, as industry benefits from the data. At the same time, states are not well placed to regulate because of limited knowledge and the dynamic nature of the tech sector. There is also a route through users’ self-help. Users’ self-help can be an effective method to protect privacy – whether opting out, or using privacy-enhancing technology. But although we are increasingly concerned, we still share our data and engage in behaviour that could threaten our privacy online. Understanding that is crucial to understanding what can trigger users towards self-help behaviour. To do that we need evidence, and we have been collecting that through a world internet study.

Moritz: We can empirically address issues of attitudes, concerns and skills. The literature finds these all important, but usually covers at most two of the factors. Our research design and contributions look at general population data, nationally representative so that it can feed into policy. The data was collected in the World Internet Project, though many questions were only asked in Switzerland. Participants were approached on landline and mobile phones. About 88% of our participants were internet users – that maps to the approximate proportion of the population using the internet in Switzerland.

We found a positive effect of privacy attitudes on behaviours – but a small effect. There was a strong effect of privacy breaches and engaging in privacy protection behaviours. And general internet skills also had an effect on privacy protection. Privacy breaches – learning the hard way – do predict privacy self-protection. Caring is not enough – that pro-privacy attitudes do not really predict privacy protection behaviours. But skills are central – and that can mean that digital inequalities may be exacerbated because users with low general internet skills do not tend to engage in privacy protection behaviour.


Q1) What do you mean by internet skills?

A1 – Moritz): In this case participants were asked questions, following a model developed by Alexander van Deursen and colleagues, that ask for agreement or disagreement…

Navigating between privacy settings and visibility rules: online self-disclosure in the social web – Manuela Farinosi1,Sakari Taipale2, 1: University of Udine; 2: University of Jyväskylä

Our work is focused on self-disclosure online, and particularly whether young people are concerned about privacy in relation to other internet users, privacy to Facebook, or privacy to others.

Facebook offers complex privacy settings allowing users to adopt a range of strategies in managing their information and sharing online. Waters and Ackerman (2011) talk about the practice of managing privacy settings and the factors that play a role, including culture, motivation, risk-taking ratio, etc. And other factors are at play here. Fuchs (2012) talks about Facebook as a commercial organisation and concerns around that. But only some users are aware of the platform’s access to their data, and many may believe their content is (relatively) private. And for many users privacy from other people is more crucial than privacy from Facebook.

And there are differences in privacy management… Women are less likely to share their phone number, sexual orientation or book preferences. Men are more likely to share corporate information and political views. Several scholars have found that women are more cautious about sharing their information online. Nosko et al (2010) found no significant difference in information disclosure except for political information (which men still do more of).

Sakari: Manuela conducted an online survey in 2012 in Italy with single and multiple choice questions. It was issued to university students – 1125 responses were collected. We focused on 18-38 year old respondents, and only those using Facebook. We have slightly more female than male participants, mainly 18-25 years old. Mostly single (but not all). And most use Facebook every day.

So, a quick reminder of Facebook’s privacy settings… (a screenshot reminder, you’ve seen these if you’ve edited yours).

To the results… We found that the data most often kept private and not shared are mobile phone number, postal address or residence, and usernames of instant messaging services. The only data they do share is email address. But disclosure is high for other types of data – birth date for instance. And they were not using friends lists to manage data. Our research also confirmed that women are more cautious about sharing their data, and men are more likely to share political views. The only non gender-related issues were disclosure of email and date of birth.

Concerns were mainly about other users, rather than Facebook, and this was not substantially different in Italy. We found very consistent gender effects across our study. We also checked factors related to concerns, but age, marital status, education, and perceived level of expertise as a Facebook user did not have a significant impact. The more time you spend on Facebook, the less likely you are to care about privacy issues. Respondents’ privacy concerns were also related to disclosures by others on their wall.

So, conclusions: women are more aware of online privacy protection than men, and of protection of the private sphere. They take more active self-protection there. And we speculate on the reasons… There are differences around sense of security/insecurity and risk perception between men and women, and the more sociological understanding of women as maintainers of social labour – used to taking more care of their material… Future research is needed though.


Q1) When you asked users about privacy settings on Facebook how did you ask that?

A1) They could go and check, or they could remember.


My focus is related to political science… And my topic is lobbying for the free flow of European personal data – how the General Data Protection Regulation came into being and which lobbyists influenced the legislators. This is a new piece of regulation coming into force next year. It was the subject of a great deal of lobbying – it became visible when the regulation was in parliament, but the lobbying started much earlier than that.

So, a quick description of EU law making. The European Commission proposes legislation, and that goes to both the Council of the European Union and the Parliament. Both draw up positions based on the proposal and then that becomes the final regulation. In this particular case there was public consultation before the final regulation, so I looked at a wide range of publicly available position papers. Looking across these I could see 10 types of stakeholders submitting replies – far more in 2011 than to the first version in 2009. Companies in the US participated to a very high degree – almost as much as those in the UK and France. That’s interesting… And that’s partly to do with the extended scope of this new regulation, which covers the EU but also service providers in the US and other locations. This extraterritorial reach is not exclusive to this regulation, and is known as “the Brussels effect”.

In terms of sector I have categorised the stakeholders – dividing, for instance, IP and Node communications – to understand their interests. But I am interested in what they are saying, so I draw on Klüver (2013) and the “preference attainment model” to compare policy preferences of interest groups with the Commission’s preliminary draft proposal, the Commission’s final proposal, and the final legislative act adopted by the Council. So, what interests did the Council take into account? Well, almost every article changed – which makes those changes hard to pin down. But…

There is an EU power struggle. The Commission draft contained 26 different cases where it was empowered to adopt delegated acts. All but one of these were removed from the Council’s draft. And there were 48 exceptions for member states, most of them “in the public interest”… But that could mean anything! And thus the role of nation states comes into question. The idea of European law is to have consistent policy – that amount of variance undermines it.

We also see a degree of user disempowerment. Here we see responses from Digital Europe – a group of organisations doing any sort of surveillance – but we also see the American Chamber of Commerce submitting responses. In these responses both are lobbying for “implicit consent” – the original draft required explicit consent. And the Commission sort of bought into this, using a concept of unambiguous consent… which is itself very ambiguous. I then compared the Council against the Free Data Advocates, and the Council against the Privacy Advocates. The Free Data Advocates are pro free movement of data, and pro privacy – as that’s useful to them too – but they are not keen on greater Commission powers. Privacy Advocates are pro privacy and more supportive of Commission powers.

In Search of Safe Harbors – Privacy and Surveillance of Refugees in Europe – Paula Kift, New York University, United States of America

Over 2015 a million refugees and migrants arrived at the borders of Europe. One of the ways in which the EU attempted to manage this influx was to gather information on these people – in particular satellite surveillance and data collection on individuals on arrival.

The EU does acknowledge that biometric data raises privacy issues, but holds that satellite and drone data is not personally identifiable and so not an issue here. I will argue that the right to privacy does not require the presence of Personally Identifiable Information.

As background there are two pieces of legislation. The first is Eurosur – a regulation to gather and share satellite and drone data across Member States. Although the EU justifies this on the basis of helping refugees in distress, that isn’t written into the regulation. Refugee and human rights organisations say that this surveillance is likely to enable the turning back of migrants before they enter EU waters.

If they do reach the EU, according to Eurodac (2000) refugees must give fingerprints (if over 14 years old) and can only apply for asylum status in one country. But in 2013 this regulation was updated so that fingerprints can be used in law enforcement – which goes against EU human rights law and Data Protection law. It is also demeaning, and suggests that migrants are more likely to be criminal, something not backed up by evidence. It has also been proposed that photography and fingerprinting be extended to everyone over 6 years old. There are legitimate reasons for this… Refugees arrive in Southern Europe, where opportunities are not as good, so some have burned off their fingerprints to avoid registration there; these proposals are partly attempts to register migrants, and to avoid losing track of children once in the EU.

The EU does not dispute that biometric data is private data. But with Eurodac and Eurosur the right to data protection does not apply – they monitor boats, not individuals. But I argue that the Right to Private Life is jeopardised here, through prejudice, reachability and classifiability… The bigger issue may actually be the lack of personal data being collected… The EU should approach boats and identify those with asylum claims, and manage others differently, but that is not what is done.

So, how is big data relevant? Well, big data can turn non personally identifiable information into PII through aggregation and combination. And classifying individuals also has implications for the design of Data Protection Laws. Data protection is a procedural right, but privacy is a substantive right, less dependent on personally identifiable information. Ultimately the right to privacy protects the person, rather than the integrity of the data.
Q1) In your research have you encountered any examples of when policy makers have engaged with research here?

A1 – Paula) I have not conducted any on the ground interviews or ethnographic work with policy makers, but I would suggest that the increasing focus on national security is driving this activity, whereas data protection is shrinking in priority.

A1 – Jockum) It’s fairly clear that the Council of the European Union engaged with digital rights groups, and that the Commission did too. But then for every one of those groups, there are 10 lobby groups. So you have Privacy International and European Digital Rights who have some traction at European level, but little traction at national level. My understanding is that researchers weren’t significantly consulted, but there was a position paper submitted by a research group at Oxford, submitted by lawyers, though their interest was more aligned with national rather than digital rights issues.

Q2) You talked about the ? being embedded in the new legislation… You talk about information and big data… But is there any hope? We’ve negotiated for 4 years, it won’t be in force until 2018…

A2 – Paula) I totally agree… You spend years trying to come up with a framework, but it all rests on PII… And so how do we create a Data Protection Act that respects personal privacy without being dependent on PII? Maybe the question is not about privacy but about profiles and discrimination.

A2 – Jockum) I looked at all the different sectors to look at surveillance logic, to understand why surveillance is related to regulation. Data Protection regulation is inherently problematic as it has opposing goals – to protect individuals and to enable the sharing of data… So, in that sense, surveillance logic is informing this here.

Q3) Could you outline again the threats here beyond PII?

A3 – Paula) Refugees who are aware of these issues don’t take their phones – that reduces the chance of identification but also stops potential help calls and rescues. But the risk is also about profiling… High ranking job offers are more likely to be made to women than men… Google thinks I am between 60 and 80 years old and Jewish – I’m neither – they think they detect who I am… And that’s where the risk is here… profiling… e.g. transactions being blocked through profiling.

Q4) Interesting mixture of papers here… Many people are concerned about the social side of privacy… But know little of institutional privacy concerns. Some become more cynical… But how can we improve literacy? How can we influence people here about Data Protection laws, and privacy measures…

A4 – Esther) It varies by context. In the US the concern is with government surveillance, in the EU it’s more about corporate surveillance… You may need to target differently. Myself and a colleague wrote a paper on privacy apathy… There are issues of trust, but also work on skills. There are bigger conversations, not just with users, to be had. There are conversations to have generally with the population… Where do you infuse that, I don’t know… How do you reach adults, I don’t know.

A4 – Natascha) It is not enough to strengthen awareness and rights… Skills are important here too… You really need to ensure that skills are developed to adapt to policies and changes. Skills are key.

Q5) You talked about exclusion and registration… And I was wondering about exclusion from registration and exclusion of registration (e.g. the dead are not registered).

A5 – Paula) They collect figures on how many are registered… But that can lead to threat inflation and very flawed data. In terms of data that is excluded there is a capacity issue… That may be the issue with deaths. The EU isn’t responsible for saving lives, but doesn’t want to be seen as responsible for those deaths either.

Q6) I wanted to come back to what you see as the problematic implications of the boat surveillance.

A6 – Paula) For many, data collection is fine until something happens to you… But if you know it takes place it can have an impact on your behaviours… So there is work to be done to understand if refugees are aware of that surveillance. But the other issue here is that if drone surveillance is used to turn people back, then that has a clear impact on private lives, particularly as EU states have bilateral agreements with nations that have not all ratified refugee law – meaning turned back boats may result in significantly different rights and opportunities.
RT-07: IR (Chair: Victoria Nash)

The Politics of Internet Research: Reflecting on the challenges and responsibilities of policy engagement

Victoria Nash (University of Oxford, United Kingdom), Wolfgang Schulz (Hans-Bredow-Institut für Medienforschung, Germany), Juan-Carlos De Martin (Politecnico di Torino, Italy), Ivan Klimov (New Economic School, Russia – not attending), Bianca C. Reisdorf (representing Bill Dutton, Quello Center, Michigan State University), Kate Coyer (Central European University, Hungary – not attending)

Victoria: I am Vicky Nash and I have convened a round table of members of the international network of internet research centres.

Juan-Carlos: I am director of the Nexa Center for Internet and Society in Italy and we are mainly computer scientists like myself, and lawyers. We are ten years old.

Wolfgang: I am associated with two centres, in Humboldt primarily, and our interest there is in governance and surveillance primarily. We are celebrating our fifth birthday this year. I also work with the Hans-Bredow-Institut, a traditional, multidisciplinary media institute, and we increasingly focus on the internet and internet studies as part of our work.

Bianca: I am representing Bill Dutton. I am Assistant Director of the Quello Center at Michigan State University. We were more focused on traditional media but have moved towards internet policy in the last few years as Bill moved to join us. There are three of us right now, but we are currently recruiting for a policy post-doc.

Victoria: Thanks for that, I should talk about the department I am representing… We are in a very traditional institution but our focus has explicitly always been involvement in policy and real world impact.

Victoria: So, over the last five or so years, it does feel like there are particular challenges arising now, especially working with politicians. And I was wondering if other types of researchers are facing those same challenges – is it about politics, or is it specific to internet studies. So, can I kick off and ask you to give me an example of a policy your centre has engaged in, how you were involved, and the experience of that.

Juan-Carlos: There are several examples. One was with the regional government in our region of Italy. We were aware of data and participatory information issues in Europe. We reached out and asked if they were aware. We wanted to make them aware of opportunities to open up data, and to build on OECD work, but we were also doing some research ourselves. Everybody agreed on the technical infrastructure and at the political level… We assisted them in creating the first open data portal in Italy, and one of the first in Europe. And that was great, it was satisfying at the time. Nothing was controversial, we were following a path in Europe… But with a change of regional government that portal has been somewhat neglected, so that is frustrating…

Victoria: What motivated that approach you made?

JC: We had a chance to do something new and exciting. We had the know-how and the way it could be, at least in Italy, and that seemed like a great opportunity.

Wolfgang: Speaking for my centres – I’m kind of an outsider in political governance as I’m concerned with media. But in internet governance it feels like this is our space and we are invested in how it is governed – more so than in other areas. The example I have is from more traditional media work, from the Hans-Bredow-Institut. We were asked to investigate, for a report, how changes in usage patterns and technology put strain on governance structures in Germany, and where there is a need for solutions to make federal and state law in Germany more convergent and able to cope with those changes. But you have to be careful when providing options, because of course you can make some options more appealing than others… So you have to be clear about whether you will be neutral and present it as such, or whether you prefer an option and present it differently. And that’s interesting and challenging as an academic, and for the role of an academic and institution.

Victoria: So did you consciously present options you did not support?

Wolfgang: Yes, we did. And there were two reasons for this… They were convinced we would come up with a suggestion and a basis to start working from… And they accepted that we would not be specifically taking a side – for the federal or the local government. They were also confident we wouldn’t attempt to mess up the system… We didn’t present the ideal, but we understood other dependencies and factors, and they trusted us to only put in suggestions that would enhance the system and work in practice, not replace the whole thing…

Victoria: And did they use your options?

Wolfgang: They ignored some suggestions, but where they acted they did take our options.

Bianca: I’ll talk about a semi-successful project. We were looking at detailed postcode level data on internet access and quality, and the reasons for that. We submitted to the National Science Foundation; it was rejected, then two weeks later we were invited to an event on just that topic by the NPIA. So we are now collectively drafting suggestions with the NPIA and a wide range of research centres. It was nice to be invited by policy makers… and interesting to see that idea picked up through that process in some way…

Victoria: That’s maybe an unintended consequences aspect there… And that suggestion to work with others was right for you?

Bianca: We were already keen to work with other research centres but actually we also now have policy makers and other stakeholders around the table and that’s really useful.

Victoria: Those were all very positive… Maybe you could reflect on more problematic examples…

JC: Ministers often want to show that they are consulting on policy, but often that is a gesture – a political move to listen, with policy then made in an entirely different way… After a while you get used to that. And then you have to calculate whether you participate or not – there is a time aspect there.

Victoria: And for conflict of interest reasons you pay those costs of participating…

JC: Absolutely, the costs are on you.

Wolfgang: We have had contact from ministries in Germany but then discovered they were interested in the process as a public relations tool rather than having a genuine interest in the outcome. So now we assess that interest and engage – or don’t – accordingly. We try to say at the beginning “no, please speak to someone else” when needed. Humboldt is reluctant to engage in policy making – that’s a historical thing – but people expect us to get involved. We are one of the few places that can deliver monitoring on the internet, and there is an expectation to do that… And when ministries design new programmes, we are often asked to be engaged, and we have learned to be cautious about when we engage. Experience helps, and you see different ways to approach academia – it can be PR, sometimes they want support for their position or political support, or they can actually be engaged in research to learn and gain expertise and information. If you can see what type of approach it is, you can handle it appropriately.

Victoria: I think as a general piece of advice – to always question “why am I being approached” in the framing of “what are their motivations?”, that is very useful.

Wolfgang: I think starting from the research questions and programmes that you are concerned with gives you a counterpoint in your own thinking when dealing with requests. Then when good opportunities come up you can take them and make use of them… But the academic value of some approaches can be limited, so you need a good reason to engage in those projects and they have to align with your own priorities.

Bianca: My bad example is related to that. The Net Neutrality debate is a big part of our work… There are a lot of partisan opinions on that, and not a lot of neutral research there. We wanted to do a big project there, but when we tried to get funding for it we were steered to stay away. We’ve been told that talking about policy with policy makers is very negative, it is taken poorly. This debate has been bouncing around for 10 years; we want to see whether, where Net Neutrality is imposed, we see changes in investment… But we need funding to do that… And funders don’t want to do it and are usually very cosy with policy makers…

Victoria: This is absolutely an issue, these concerns are in the minds of policy makers as well and that’s important.

Wolfgang: When we talk about research in our field and policy makers, it’s not just about when policy makers approach you to do something… Having a term like Net Neutrality at the centre requires you to be either neutral or not neutral, and that really shapes how you handle it as an academic… You can become, without wanting it, someone promoting one side. On a protection of minors issue we did some work on co-regulation with Australia that seemed to solve a problem… But then, when this debate reached Germany and drafting of the inter-state treaty on media regulation started, the policy makers were interested… And then we felt that we should support it… and I entered the stage, but it’s not my question anymore… So you end up having an opinion about how you want something done…

JC: As coordinator of a European project there was a call that included the topic of “Net Neutrality” – we made a proposal, but what happened afterwards clearly proved how sensitive that whole area was as a topic. It was in the call… But we should have framed it differently. Again at European level you see the Commission funds research, you see the outcomes, and then they put out a call that entirely contradicts the work that they funded, for political reasons. There is such a drive for evidence-based policy making that it is important that they frame it that way… It is evidence-based when it fits their agenda, not when it doesn’t.

Victoria: I did some work with the Department for Culture, Media and Sport last year, again on the protection of minors, and we were told at the outset to assume porn caused harm to minors. And the frame of reference was shaped to be technical – about access etc. They did bring in a range of academic expertise, but the terms of reference really constrained the contribution that was possible. So, there are real bear traps out there!

Wolfgang: A few years back the European Commission asked researchers to look at broadcasters, interruptions to broadcasts, and the role of advertising. Even though we need money, we did not do that – it wasn’t answering interesting research questions for us.

Victoria: I raised a question earlier about the specific stakes that academia has in the internet – it isn’t just what we study. Do you want to say more about that?

Wolfgang: Yes. At the pre-conference we had an STS stream… People said “of course we engage with policy” and I was wondering why that is the main position… But the internet comes from academia and there is a long standing tradition of engagement in policy making. Academics do engage with media policy, but they wouldn’t class it as “our domain”, because they were not there at the beginning – whereas academia was part of the beginning of the internet.


Q1) I wonder if you are mistaking the “of-ness” with the fact that the internet is still being formed, still in the making. Broadcast is established, the internet is in constant construction.

A1 – Wolfgang) I see that

Q1) I don’t know about Europe but in the US since the 1970s there have been deliberate efforts to reduce the power of decision makers and policy makers to work with researchers…

A1 – Bianca) The Federal Communications Commission is mainly made of economists…

Q1) Requirements and roles constrain activities. The assumption of evidence-based decisions is no longer there.

Q2) I think that there is also the issue of shifting governance. Internet governance is changing and so many academics are researching the governance of the internet, we reflect greatly on that. The internet and also the governance structure are still in the making.

Victoria: Do you feel like if you were sick of the process tomorrow, you’d still want to engage with policy making?

A2 – Phoebe) We are a publicly funded university and we are focused on digital inequalities… We feel a real responsibility to get involved, to offer advice and opinions based on our research. On other topics we’d feel less responsible, depending on the impact it would have. It is a public interest thing.

A2 – Wolfgang) When we look at our mission at the Hans-Bredow-Institut we have a vague and normative mission – we think a functioning public sphere is important for democracy… Our tradition is research into public spheres… We have a responsibility there. But we also see that the evaluation of academic research is becoming more and more important, while there is no mechanism to ensure researchers answer the problems that society has… We have a completely divided set of research councils, and their yardsticks are academic excellence. State broadcasters do research but with no peer review at all… There are some calls from the Ministry of Science that are problem-orientated, but on the whole there isn’t that focus on social issues and relevance in the reward process, in the understanding of prestige.

Victoria: In the UK we have a bizarre dichotomy where research is measured against two measures: impact – where policy impact has real value, and that applies in all fields – but there is also regulation that you cannot use project funds to “lobby” government, which means you potentially cannot communicate research to politicians who disagree. This happened because a research organisation (not a university) opposed government policy with research funded by them… The implications for universities are currently unclear.

JC: Italy is implementing a similar system to the UK. Often there is no actual mandate on a topic, so individuals come up with ideas without numbers and plans… We think there is a gap – but it is the work of government and ministries. We are funded to work in the national interest… But we need resources to help there. We are filling gaps in a way that is not sustainable in the long term, really – you are evaluated on other criteria.

Q3) I wanted to ask about policy research… I was wondering if there is policy research we do not want to engage in. In Europe, and elsewhere, there is increasing pressure to attract research funding… What are the guidelines or principles around what we do or do not go for, funding wise?

A3 – Bianca) We are small so we go for what interests us… But we have an advisory board that guides us.

A3 – Wolfgang) I’m not sure that there are overarching guidelines – there may be for other types of special centres – but it’s an interesting thing to have a more formalised exchange like we have right now…

A3 – JC) No, no blockers for us.

A3 – Victoria) Academic freedom is vigorously held up at Oxford but that can mean we have radically different research agendas in the same centre.

Q4) With that lack of guidance, isn’t there a need for academics to show that they can be trusted, especially in the public sphere, and especially when getting funding from, say, Google or Microsoft? And how can you embed that trust?

A4 – Wolfgang) I think peer review as a system functions to support that trust. But we have to think about other institutional settings, and whether there is enough oversight… Many associations, like Leibniz, require an institutional review board to look over the research agenda and ensure some outside scrutiny. I wouldn’t say every organisation or research centre needs that – it can be helpful but costly, in terms of time in particular. And you cannot rely on the general public to do that, you need it to be peers. An interesting question though, especially as Humboldt has national funding from Google… In this network academics play a role, and organisations play a role, and you have to understand the networks and relationships of the partners you work with, and their interests.

A4 – Bianca) That’s a question that we’ve faced recently… There is a concern that corporate funding may sway results, and the best way to face that is to publish methodology, questionnaires, process… to ensure the work is understood in a context that enables trust in it.
A4 – JC) We spent years trying to deal with the issue of independence and it is very important, as academia has a responsibility to provide research that is independent and unbiased by funding etc. And it’s not just about the work itself, but also perceptions of the work… It is quite a local/contextual issue. So, getting money from Google is perceived differently in different countries, and at different times…

Victoria: This is something we have to have more conversations about. In medicine there is far more conversation about codes of conduct around funding. I am also concerned that PhD funding is now requiring something like a third of PhDs to be co-funded by industry, without any understanding from the UK Government about what that means, including for peer review… That’s something we need to think about far more stringently.
Q5) For companies there are requirements to review outputs before publication, to check for proprietary information and ensure it is not released. That makes industry the final arbiter here. In Canada our funding is also increasingly coming from industry, and there that means that proprietary data gives them the final say…

A5 – Bianca) Sometimes it has to be about negotiating contracts and being clear what is and is not acceptable.

Victoria) That’s my concern with new PhD funding models, and also with the use of industry data. It will be non-negotiable that the research is not compromised, but how you make that process clear is important.
Q6) What are your funding models here – are you academic or outside academia?

A6 – JC) Academic – and policy work is part of the work we are funded to do.

A6 – Bianca) We are 99% endowment funded, hence having a lot of freedom, but also advisory board guidance.

A6 – Wolfgang) Our success is assessed by academic publication. The Humboldt institute is funded largely by private companies – but a range of them – and also from grants. The Hans-Bredow-Institut is mainly directly funded by the Hamburg Ministry of Science, but we’d like to be funded by other funders across Germany.

A6 – Victoria) Our income is research income, and teaching income from masters degrees… We are a department of the university. Our projects are usually policy related, but not always government related.
Q7) I was wondering if others in the room have been funded for policy work – my experience has been that policy makers had expectations and an idea of how much control they wanted… By contrast, money from Google comes with a “research something on the internet” type of freedom. This is not what I would have expected, so I just wondered how others’ experiences compared.

Comment) I was asked to do work across Europe with public sector broadcasters… I don’t know how well my report was seen by policy makers, but it was well received by the public sector broadcaster organisations.

Comment) I’ve had public sector funding, foundation funding… But I’ve never had corporate money… My cynical take is that corporations maybe are doing this as PR, hence not minding what you work on!

Comment) I receive money from funding agencies. I did a joint project that I proposed to a think tank… which was orientated to government… But there was a real push for impact… Numbers needed to be in the title. I had to be an objective researcher but present it the right way… And that worked with impact… And then the government offered me a contract to continue the research – working for them, not against them. The funding was coming from a position close to my own ideas… I felt it was a bit instrumentalised in this way…

A7 – Wolfgang) I think that it is hard to generalise… Companies as funders do sometimes make demands and expect control over the publishing of results… And over whether it is published or not. We don’t do that – our work is always in the public domain. It’s case by case… But there is one aspect we haven’t talked about, and that is the relationship between the individual researcher and their political engagement (or not), and how that impacts upon the neutrality of the organisation. As a lawyer I’m very aware of that… For instance if giving expert evidence in court, the importance of being an individual, not the organisation. Especially if partners/funders, past or future, are on the opposite side. I was an expert for Germany in a court case, with private broadcasters on the other side, and you have to be careful there…
A7 – JC) There is so little money for research in Italy… Regarding corporations… We got some money from Google to write an open source library, it’s out there, it’s public… There was no conflict there. But money from companies for policy work is really difficult. But lots of case by case issues in-between.
Q8) But companies often fund social science work that isn’t about policy but has impact on policy.
A8 – JC) We don’t do social science research so we don’t face that issue.
A8 – Victoria) Finding ways to make that work that guarantees independence is often the best way forward – you cannot and often do not want to say no… But you work with codes of conduct, with advisory board, with processes to ensure appropriate freedoms.
JC: A question to the audience… A controversial topic arises, one side owns the debate and a private company approaches to support your voice… Do you take their funding?
Comment) I was asked to do that and I kind of stalled so that I didn’t have to refuse or take part, but in that case I didn’t feel…
Comment) If having your voice in the public triggers the conversation, you do make it visible and participate, to progress the issue…
Comment) Maybe this comes down to personal versus institutional points of view. And I would need to talk to colleagues to help me make that decision, to decide if this would be important or not… Then I would say yes… Better solution is to say “no, I’m talking in a private capacity”.
JC) I think that the point of separating individual and centres here is important. Generally centres like ours do not take a position… And there is an added element that if a corporation wants to be involved, a track record of past behaviour makes it less troublesome. Saying something for 10 years gives you credibility in a way that suddenly engaging does not.
Wolfgang) In Germany it is general practice that if your arguments are not being heard, then you engage as an expert – it is standard practice in German legal academia. It is OK I think.
Comment) In the Bundestag they bring in experts… But of course the choice of expert reflects values and opinions made in articles. So you have a range of academics supporting politics… If I am invited to talk to parliament, I say what I always say “this is not a problem”.
Victoria: And I think that nicely reminds us why this is the politics of internet research! Thank you.
Plenary Panel: Who Rules the Internet? Kate Crawford (Microsoft Research NYC), Fieke Jansen (Tactical Tech), Carolin Gerlitz (University of Siegen) – Chair: Cornelius Puschmann
Jennifer Stromer-Galley, President of the Association of Internet Researchers: For those of you who are new to the AoIR, this is our 17th conference and we are an international organisation that looks at issues around the internet – now including those things that have come out of the internet, including mobile apps. And in our panel today we will be focusing on governance issues. Before that I would like to acknowledge this marvellous city of Berlin, and to thank all of my colleagues in Germany who have taken such care, and Humboldt University for hosting us in this beautiful venue. And now, I’d like to hand over to Herr Matthias Graf von Kielmansegg, representing Professor Dr Elizabeth Wacker, Federal Minister of Labour and Social Affairs.
Matthias Graf von Kielmansegg is here representing Professor Wacker, who takes a great interest in internet and society, including the issues that you are looking at here this week. If you are not familiar with our digitisation policy, the German government published a digital agenda for the first time two years ago, covering all areas of government operation. In terms of activities it concentrates on the 2013-2017 term, it will need to be extended, and it reaches strategically far into the next decade. Additionally we have a regular summit bringing together the private sector, unions, government and the academic world to look at key issues.
You all know that digital is a fundamental gamechanger, in the way goods and services are used, the ways we communicate and collaborate, and digital loosens our ties to time and place… And we aren’t at the end but in the middle of this process. Wikipedia was founded 16 years ago, the iPhone launched 9 years ago, and now we talk about Blockchain… So we do not know where we will be in 10 or 20 years time. And good education and research are key to that. And we need to engage proactively. In Germany we are incorporating the Internet of Things into our industries. In Germany we used to have a technology-driven view of these things, but now we look at economic and cultural contexts or ecosystems to understand digital systems.
Research is one driver; the other is that science, education, and research are users in their own right. Let me focus first on education… Here we must answer some major issues – what will drive change here, technology or pedagogy? Who will be the change agents? And what of the role of teachers and schools? They must take the lead in change and secure the primacy of pedagogy, using digital tools to support our key education goals – and not vice versa. And that means digital education must offer more opportunities, flexibility, and better preparation for tomorrow’s world of work. With this in mind we plan to launch a digital education campaign to help young people find their place in an ever changing digital world, and to be ready to adapt to the changes that arise. We must also consider how education can support our economic model and higher education. And we will need to address issues of technical infrastructure and governance – and for us how this plays out with our 16 federal states. Closer to your world is the world of science. Digital tools create huge amounts of new data and big data. The challenge organisations face is not just infrastructure but how to access and use this data. We call our approach Securing the Life Cycle of Data, concerned with access, use, reuse, and interoperability. And how will we decide what we save, and what we delete? And who will decide how third parties use this data? And big data goes alongside other aspects such as high powered computing. We plan to launch an initiative of action in this area next year. To oversee this we have a Scientific Oversight Body with stakeholders. We are also keen to embrace Open Data and the resources to support that. We have added new conditions to our own funding conditions – any publication based on research funded by us must be published open access.
More needs to be known about the internet and society, and there is research to be done. So, the federal government has decided to establish a German Internet Institute. It will address a number of areas of importance: access and use of the digital world; work and value creation; and our democracy. We want an interdisciplinary team of social scientists, economists, and information scientists. The competitive selection process is just underway, and we expect the winner to be announced next spring. There is readiness to spend up to €15M over the first five years. And this highlights the importance of the digital world in Germany.
Let me just make one comment. The overall title of this conference is Internet Rules! It is still up to us to be the fool or the wise… We need to understand what might happen if politics, economics and society do not find the answers to the challenges we face. And so hopefully we will find that it’s not the internet that rules, but that democracy rules!
Kate Crawford
When Cornelius asked me to look at the idea of “Who rules the internet?” I looked up at my bookshelf, and found lots of books written by people in this community, many of you in this room, looking at just this question. And we have moved from the ’90s utopianism to the world of infrastructure, socio-technical aspects, the Internet of Things layer – and zombie web cams being coopted by hackers. So many of you have enhanced my understanding of this issue.
Right now we see machine learning and AI being rapidly built into our world without the implications being fully understood… I am talking narrowly about AI here… Sometimes they have lovely feminine names: Siri, Alexa, etc… But these systems are embedded in our phones, and we have AI analysing images on Facebook. It will never be separate from humans, but it is distinct and significant, and we see AI beyond the internet and into other systems – deciding who gets released from jail, hospital stays, etc. I am sure all of us were surprised by the fact that Facebook, last month, censored a Pulitzer Prize winning image of a girl being napalmed in Vietnam… We don’t know the processes that triggered this, though an image of a nude girl likely triggers these processes… Once that had attention, the Government of Norway accused Facebook of erasing our shared history. The image was restored but this is the tip of the iceberg – most images and actions are not so apparent to us…
This lack of visibility is important but it isn’t new… There are many organisational and procedural aspects that are opaque… I think we are having a moment around AI where we don’t know what is taking place… So what do we do?
We could make them transparent… But this doesn’t seem likely to work. A colleague and I have written about the history of transparency, and how openness and the availability of code do not necessarily tell you exactly what is happening and how it is used. Y Combinator has installed a system, brilliantly called HAL 9000, and has boasted that they don’t know how it filters applications – only the system could do that. That’s fine until that system causes issues, denies you rights, gets in your way…
So we need to understand these algorithms from the outside… We have to poke them… And I think of Christian Salmand(?)’s work on algorithmic auditing. Christian couldn’t be here this evening and my thoughts are with him. But he is also part of a group who are trying to pursue legal rights to enable this type of research.
And there are people who say that AI can fix this system… This is something that the finance sector talks about. They have an environment of predatory machine learning systems hunting each other – Terry Cary has written about this. It’s tempting to create a “police AI” to watch these… I’ve been going back to the 1970s books on AI, and the work of Joseph Weizenbaum who created ELIZA. He suggested that if we continue to ascribe human capacities to AI systems it might be a slow acting poison. It is a reminder to not be seduced by these new forms of AI.
Carolin Gerlitz, University of Siegen
I think, after the last few days, the answer to the question of “who rules the internet?” is “platforms”!
Their rules about who users are and what they can do can seem very rigid. Before Facebook introduced the emotions, the Like button was used in a range of ways. With the introduction of emotions they have rigidly defined responses, creating discrete data points that are advertiser ready and available to be recombined.
There are also rules around programmability, that dictate what data can be extracted, how, by whom, in what ways… And platforms also like to keep the interpretation of data in control, and adjust the rules of APIs. Some of you have been working to extract data from platforms where things are changing rapidly – Twitter API changes, Facebook API and Research changes, Instagram API changes, all increasingly restricting access, all dictating who can participate. And limiting the opportunity to hold platforms to account, as my colleague Anne Helmond argues.
Increasingly platforms are accessed indirectly through intermediaries which create their own rules, a cascade of rules for users to engage with. Platform rules don’t just apply on the platforms themselves but also extend to apps… As many of you have been writing about in regard to platforms and apps… And Christian, if he were here today, would talk about the increasing role of platforms in this way…
And platforms reach out not only to users but also non-users. And these spaces are also contextual – with place, temporality and the role of commercial content all important here.
These rules can be characterised in different ways… There is a dichotomy of openness and closedness. Much of what takes place is hidden and dictated by cascading rule sets. And then there is the issue of evaluation – what counts, for whom, and in what way? Taylorism refers to the mass production of small tasks – and platforms work in this fine-grained, algorithmic way. But platforms don’t earn money from users’ repetitive actions… Or from use of platform data by third parties. They “put life to work” (Lazlo) by using data points, raising questions of who counts and what counts.
Fieke Jansen, Tactical Tech
I work at an NGO, on the ground in real world scenarios. And we are concerned with the Big Five: Apple, Amazon, Google, Microsoft and Facebook. How did we get like this? People we work with are uncomfortable with this. When we work with activists and ask them to draw the internet, they mostly draw a cloud. We asked at a session “what happens if the government bans Facebook” and they could not imagine it – and if Facebook is beyond government then where are we at here? And I work with an open source company who use Google Apps for Business – and that seems like an odd situation to me…
But I’ll leave the Big Five for now and turn to BitNik… They used the darknet shopper and bought random stuff for $50… And then placed it in a gallery… They did
ICWatch… After Wikileaks an activist in Berlin found all the NSA services spying on this and worked out who was working for the secret service… But that triggered a real debate… There was real discussion of it being anti-patriotic, and of it putting people at risk… But the data he used, from LinkedIn, is sold every day… He just used it in a way that raised debate. We allow that commercial use… But this coder’s work was not allowed… Isn’t that debate needed?
So, back to the Big Five. In 2014 Google (now Alphabet) was the second biggest company in the world – with an equivalent GDP bigger than Austria’s. We choose to use many of their services every day… But many of their services are less in our face. In the sensor world we have fewer choices about data… And with the big companies it is political too… In Brussels you have to register lobbyists – there are 9 for Google, 7 of whom used to work for the European Parliament… There is a revolving door here.
There is also an issue of skill… Google has wealth and power and knowledge that are very hard to counter. Facebook has around 400m active users a month, 300m likes a day, and is worth $190bn… And here we miss the political influence. They have an enormous drive to conquer the global south… They want to roll out Facebook Zero as “the internet”…
So, who rules the internet? It’s the 1% of the 1%… It is the Big Five, but also the venture capitalists who back them… Sequoia and Kleiner Perkins Caufield & Byers, and you have Peter Thiel… It is very few people behind many of the biggest companies including some of the Big Five…
People use these services because they work well, work easily… I only use open source… Yes, it is harder… Why are so few questioning and critiquing that? We feed the beast on an everyday basis… It is our universities – also moving to Big Five platforms in preference to their own, it is our governments… and if we are not critical what happens?
Panel Discussion
Cornelius: Many here study internet governance… So I want to ask, Kate, does AI rule the internet?
Kate: I think it is really hard to think about who rules the internet. Automated decision-making networks have been with us for a while… It’s less about ruling, and who… It’s more about the entanglements, fragmentation and governance. We talk about the Big Five… I would probably say there are seven companies here, deciding how we get into university, healthcare, housing – filtering far beyond the internet… And governments do have a role to play.
Cornelius: How do we govern what we don’t understand?
Kate: That’s a hard question… That keeps me up at night that question… Governments look to us academics, technology sectors, NGOs, trying to work out what to do. We need really strong research groups to look at this – we tried to do this with AI Now. Interdisciplinary is crucial – these issues cannot be solved by computer science alone or social science alone… This is the biggest challenge of the next 50 years.
Cornelius: What about how national governments can legislate for Facebook, say? (I’m simplifying a longer question that I didn’t catch in time here, correction welcome!)
Carolin: I’m not sure about Facebook but in our digital methods workshop we talked about how on Twitter content can be deleted, yet then be exposed in other locations via the API. And it is also the case that these services are specific and localised… We expect national governments to have some governance, when what you understand and how you access information varies by location… increasing that uncanny notion. I also wanted to comment on something you asked Kate – thinking about the actors here, they all require the engagement of users – something Fieke pointed to. Those actors involved in ruling are dependent on the actions of other actors.
Cornelius: So how else could we be running these things? The Chinese option, the Russian option – are there better options?
Carolin: I think I cannot answer – I’d want to put it to these 570 smart people for the next two days. My answer would be to acknowledge the distributedness to which we have to respond and react… We cannot understand algorithms and AI without understanding context…
Carolin: Fieke, what you talked about… Being extreme… Are we whining because, as Europeans, we are being colonised by other areas of the world, even as we use and are obsessed by our devices and tools – complaining and then checking our iPhones? I’m serious… If we did care that much, maybe actions would change… You said people have the power here; maybe it’s not a big enough issue…
Fieke: Is it Europeans concerned about Americans from a libertarian point of view? Yes. I work mainly in non-European parts of the world, and particularly in North America… For many the internet is seen as magical and neutral – but those of us who research it know it is not. But when you ask why people use tools, it’s their friends or community. If you ask them who owns it, that raises questions that are framed in a relevant way. The framing has to fit people’s reality. If in South America you talk of Facebook Zero as the new colonialism, you will have a political conversation… But we also don’t always know why we are uncomfortable… It can feel abstract, distant, and the concern is momentary. Outside of this field, people don’t think about it.
Kate: Your provocation that we could just step away, and move to open source. The reality includes opportunity costs to employment, to friends and family… But even if you do none of those things then you walk down the streets and you are tracked by sensors, by other devices…
Fieke: I absolutely agree. All the data collected beyond our control is the concern… But we can’t just roll over and die, we have to try and provoke and find mechanisms to play…
Kate: I think that idea of what the political levers may be… Those conversation of legal, ethical, technical parameters seem crucial, more than consumer choice. But I don’t think we have sufficient collective models of changing information ecologies… and they are changing so rapidly.
Q1) Thank you for this wonderful talk and the perspectives here. You talked about the infrastructure layer… What about that question? You say this 1% of the 1% own the internet, but do they own the infrastructure? Facebook is trying to balloon in the internet so that they cannot be cut off… It also – second question – used to be said that YOU owned the internet, and that changed the dominance of big companies… This happens in history quite often… So what about that?
A1 – Fieke) I think that Kate talked about the many levels of ownership… Facebook piggybacks on other infrastructures, Google does the balloons. It used to be that governments owned the infrastructure. There are new cables rolling out… EU funding, governments, private companies, rich people… The infrastructure is mainly owned by companies now.
A1 – Kate) I think infrastructure studies has been extraordinarily rich – work of Nicole Serafichi for instance – but also we have art responses. Infrastructure is very of the moment… But what happens next… It is not just about infrastructures and their ownerships, but also surveillance access to these. There are things like MESH networks… And there are people working here in Berlin to flag up faux police networks during protests to help protestors protect themselves.
A1 – Carolin) I think that platforms would have argued differently ten years ago about who owned the internet – but “you” probably wouldn’t have been the answer…
Q2) I wonder if the real issue is that we are running on very vague ideas of government that have been established for a very different world. People are responding to elections and referenda in very irrational ways that suggest that model is not fit for purpose. Is there a better form of governance or democracy that we should move towards? Can AI help us there?
A2 – Kate) What a beautiful and impossible to answer question! Obviously I cannot answer that properly but part of the reason I do AI research is to try to inform and shape that… Hence my passion for building research in this space. We don’t have much data to go on but the imaginative space here has been dominated by those with narrow ideas. I want to think about how communities can develop and contribute to AI, and what potential there is.
Q3) Do we need to rethink what we mean by democratic control and regulation… Regulation is closely associated with nation states, but that’s not the context in which most of the internet operates. Do we need to re-engage with the question of globalisation again?
A3) As Carolin said, who is the “you” in web 2.0, and whose narrative is there? Globalisation is similar. I pay taxes to a nation state that has rule of law and governance… By denying that, they buy into the narrative of mainly internet companies and huge multinational organisations.
Cornelius: I have the Declaration of the Independence of Cyberspace by John Perry Barlow, which I was tempted to quote at you… But it is interesting to reflect on how we have moved from utopian positions to where we are today.
Q4 – participant from Google!) There is an interesting question here… If this question points to a deeper truth… A single clear ruler, of a single internet, would allow this question of who rules to be answered. I would ask instead how we have agency over the proliferation of internet technologies and how we benefit from them…?
A4 – Kate) A great title, but long for the programme! But your phrasing is so interesting – if it is so diverse and complex then how we engage is crucial. I think that is important but, the optimistic part, I think we can do this.
A4 – Carolin) One way to engage is through dissent… and negotiating on a level that ensures platforms work beyond economic values…
Q5) The last time I was forced to give away my data was by the Australian state (where I live) in completing the census… I had to complete it or I would be fined over $1000 AUS – Facebook, Twitter, etc. never did that… I rule this kind of internet, I am still free in my choices. But on the other hand why is it that states that are best at governing platforms are the ones I want to live in the least. Maybe without the platforms no-one would use the internet so we’d have one problem less… If we as academics think about platforms in these mythic ways, maybe we end up governing in a way that is more controlled and has undesirable effects.
A5 – Kate) Many questions there, I’ll address two of those. On the census I’d refer you to articles
A University of Cambridge study showed huge accuracy in determining marital status, sexuality and whether someone is a drug or alcohol user based on Facebook likes… You may feel free but those data patterns are being built. But we have to move beyond thinking that only by active participation do you contribute to these platforms…
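[Blogger’s aside: a toy sketch of how like-based trait prediction can work. The data and scoring method here are invented for illustration – the actual study used dimensionality reduction and regression over tens of thousands of likes – but the core idea of turning binary “likes” into predictive scores is the same.]

```python
# Toy sketch: predicting a binary trait from Facebook-style "likes".
# Purely illustrative data and method, not the Cambridge team's model.

def train(users):
    """users: list of (set_of_liked_pages, has_trait).
    Returns a per-page score: how much more often trait-holders like it."""
    counts = {}
    pos = sum(1 for _, trait in users if trait)
    neg = len(users) - pos
    for likes, trait in users:
        for page in likes:
            a, b = counts.get(page, (0, 0))
            counts[page] = (a + 1, b) if trait else (a, b + 1)
    # Laplace-smoothed difference in like-rates between the two groups.
    return {p: (a + 1) / (pos + 2) - (b + 1) / (neg + 2)
            for p, (a, b) in counts.items()}

def predict(scores, likes):
    """Predict the trait by summing the scores of everything the user likes."""
    return sum(scores.get(p, 0.0) for p in likes) > 0

# Hypothetical training data: (pages liked, has_trait).
training = [
    ({"curly fries", "science"}, True),
    ({"curly fries", "thunderstorms"}, True),
    ({"harley davidson"}, False),
    ({"harley davidson", "thunderstorms"}, False),
]
scores = train(training)
print(predict(scores, {"curly fries"}))  # → True: likes shared with trait-holders
```

Even this crude scoring shows the point Kate is making: the user never states the trait, yet the pattern of likes reveals it.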
A5 – Fieke) The Census issue you brought up is interesting… In the UK, US and Australia the Census is conducted by a contractor that is one of the world’s biggest arms manufacturers… You don’t give data to the Big Five… But… So, we do need to question the politics behind our actions… There is also a perception that having technical skills makes you superior to those without, and if we act on that we create a whole new class system, and that raises whole new questions.
Q6) The question of internet raises issues of boundaries, and how we do governance and work of governance and rule-making. Ideally when we do that governance and rule-making there are values behind that… So what are the values that you think need to underlie those structures and systems…
A6 – Carolin) I think values that do not discriminate against people through algorithmic processing, AI, etc. Those tools should allow people not to be discriminated against on the basis of things they have done in the past… But that requires understanding of how that discrimination is taking place now…
A6 – Kate) I love that question… All of these layers of control come with values baked in, we just don’t know what they are… I would be interested to see what values drop out of those systems, that don’t fit the easy metricisation of our world. Some great things could fall out of feminist and race theory and the values from those…
A6 – Fieke) I would add that values should not just be about the individual, and should ensure that the collective is also considered…
Cornelius: Thank you for offering a glimmer of hope! Thank you all!
Oct 05 2016

If you’ve been following my blog today you will know that I’m in Berlin for the Association of Internet Researchers AoIR 2016 (#aoir2016) Conference, at Humboldt University. As this first day has mainly been about workshops – and I’ve been in a full day long Digital Methods workshop – we do have our first conference keynote this evening. And as it looks a bit different to my workshop blog, I thought a new post was in order.

As usual, this is a live blog post so corrections, comments, etc. are all welcomed. This session is also being videoed so you will probably want to refer to that once it becomes available as the authoritative record of the session. 

Keynote: The Platform Society – José van Dijck (University of Amsterdam) with Session Chair: Jennifer Stromer-Galley

We are having an introduction from Wolfgang (?) from Humboldt University, welcoming us and noting that AoIR 2016 has made the front page of a Berlin newspaper today! He also notes the hunger for internet governance information, understanding, etc. from German government and from Europe.

Wolfgang: The theme of “Internet Rules!” provides lots of opportunities for keynotes, discussions, etc. and it allows us to connect the ideas of internet and society without deterministic structures. I will now hand over to the session chair Cornelius Puschmann.

Cornelius: It falls to me to do the logistical stuff… But first, we have 570 people registered for AoIR 2016, so we have a really big conference. And now the boring details… which I won’t blog in detail here, other than to note the hashtag list:

  • Official: #aoir2016
  • Rebel: #aoir16
  • Retro: #ir17
  • Tim Highfield: #itisthesevebeenthassociationofinternetresearchersconferenceanditishappeningin2016

And with that came a reminder of some of the more experimental parts of the programme to come.

Jennifer: Huge thanks to all of my colleagues here for turning this crazy idea into this huge event with a record number of attendees! Thank you to Cornelius, our programme chair.

Now to introduce our speaker… José van Dijck, professor at the University of Amsterdam, who also holds visiting positions across the world. She is the first woman to hold the Presidency of the Royal Netherlands Academy of Arts and Sciences. Her most recent book is The Culture of Connectivity: A Critical History of Social Media. It takes a critical look back at social media and social networking, not only as social spaces but as business spaces. And her lecture tonight will give a preview of her forthcoming work on Public Values in a Platform Society.

José: It is lovely to be here, particularly on this rather strange day… I became President of the Royal Academy this year and today my colleague won the Nobel Prize in Chemistry – so instead of preparing for my keynote I was dealing with press inquiries, and it is nice to focus back on my real job…

So a few years ago Thomas Poell wrote an article on the politics of social platforms. His work on platforms inspired my work on networked platforms being interwoven into an ecology economically and socially. Since I wrote that book, the last chapter is on platforms, many of which have now become the main players… I talked about Google (now Alphabet), Facebook, Amazon, Microsoft, LinkedIn (now owned by Microsoft), Apple… And since then we’ve seen other players coming in and creating change – like Uber, AirBnB, Coursera. These platforms have become the gateways to our social life… And they have consolidated and expanded…

So a Platform is an online site that deploys automated technologies and business models to organise data streams, economic interactions, and social exchanges between users of the internet. That’s the core of the social theory I am using. Platforms ARE NOT simple facilitators, and they are not stand alone systems – they are interconnected.

And a Platform Ecosystem is an assemblage of networked platforms, governed by its own dynamics and operating on a set of mechanisms…

Now a couple of years ago Thomas and I wrote about platform mechanisms and the very important idea of “datafication”. Alongside it sits commodification – a platform’s business model and governance define the way in which datafied information is transformed into (economic, societal) value. There are many business models and many governance models – they vary, but governance models are maybe more important than business models, and they can be hard to pin down. Selection is about data flows filtered by algorithms and bots, allowing for automated selection such as personalisation, rankings, and reputation. Those mechanisms are not visible right now, and we need to make them explicit so that we can talk about them and their implications. Can we hold Facebook accountable for the Newsfeed in the way that traditional media are accountable? That’s an important question for us to consider…
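[Blogger’s aside: the “selection” mechanism described here can be pictured as a scoring function over candidate items. The sketch below is purely illustrative – the signals, weights and function names are invented, not any platform’s real ranking model – but it shows why such rankings are hard to see from the outside: the weights live inside the platform.]

```python
# Toy sketch of a "selection" mechanism: rank candidate posts for a user
# by combining an engagement signal with personal affinity. All signals
# and weights here are hypothetical, for illustration only.

def rank_feed(posts, affinities, w_engagement=1.0, w_affinity=2.0):
    """posts: list of dicts with 'id', 'author', 'likes'.
    affinities: map of author -> how close this user is to them."""
    def score(post):
        return (w_engagement * post["likes"]
                + w_affinity * affinities.get(post["author"], 0.0))
    # Higher score first: the user never sees this ordering logic.
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "author": "news_page", "likes": 120},
    {"id": 2, "author": "close_friend", "likes": 3},
]
# A strong affinity outweighs raw popularity under these weights.
feed = rank_feed(posts, {"close_friend": 100.0})
print([p["id"] for p in feed])  # → [2, 1]
```

Changing the (invisible) weights reorders the feed entirely – which is exactly the accountability problem van Dijck raises about the Newsfeed.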

The platform ecosystem is not a level playing field. Platforms gain traction not through money but through the number of users, and network effects mean that user numbers are the way we understand the size of a network. There is platformisation (thanks Anna?) across sectors… And that power is gained through cross ownership and cross platform activity, but also through their architecture and shared platforms. In our book we’ll cover both private and public sectors and how they are penetrated by platform ecosystems. We used to have big oil companies, or big manufacturing companies… But now big companies operate across sectors.

So transport for instance… Uber is huge, partly financed by Google and also in competition with Google. If we look at News as a sector we have Huffington Post, Buzzfeed, etc., and they also act as content distributors and aggregators for Google, Facebook, etc.

In health – a sector where platformisation is proliferating fastest – we see fitness and health apps, with Google and Apple major players here. And in your neighbourhood there are apps available – some of these are global apps localised to your neighbourhood – sitting alongside massive players.

In Education we’ve seen the rise of Massive Online Open Courses, with Microsoft and Google investing heavily alongside players like EdX, Coursera, Udacity, FutureLearn, etc.

All of these sectors are undergoing platformisation… And if you look across them all – all areas of private and public life – the activity revolves around the big five: Google, Facebook, Apple, Amazon, Microsoft, with LinkedIn and Twitter also important. And take, for example, AirBnB…

The platform society is a society in which social, economic and interpersonal traffic is largely channelled by an (overwhelmingly corporate) global online platform ecosystem that is driven by algorithms and fuelled by data. That’s not a revolution, it’s something we are part of and see every day.

Now we have had promises of “participatory culture” and the euphoria of the idea of web 2.0, of individuals contributing. More recently that idea has shifted to the idea of the “sharing economy”… But sharing has shifted in its meaning too. It is about sharing resources or services for some sort of fee – a transaction-based idea. And from 2015 we see awareness of the negative sides of the sharing economy. So a Feb 2015 Time cover read: “Strangers crashed my car, ate my food and wore my pants. Tales from the sharing economy” – about the personal discomfort of the downsides. And we see Technology Quarterly writing about “When it’s not so good to share” – from the perspective of securing the property we share. But there is more at stake than personal discomfort…

We have started to see disruptive protest against private platforms, like posters against AirBnB. City councils have to hire more inspectors to regulate AirBnB hosts for safety reasons – a huge debate in Amsterdam now – and public values are changing as a consequence of so many AirBnB hosts in the city. And there are more protests about changing values… saying that people are citizens not entrepreneurs, that the city is not for sale…

In another sector we see Uber protests, by various stakeholders. We see these from licensed taxi drivers, accusing Uber of undermining safety and social values; but also protests by drivers themselves. Uber do not call themselves a “transportation” company, instead calling themselves a connectivity company. Now Uber drivers have complained that Uber don’t pay insurance or pensions…

So, AirBnB and Uber are changing public values; they haven’t anchored existing values in their own design and development. There are platform promises and paradoxes here… They promise personalised services whilst contributing to the public good – the idea being that they are better at providing services than existing players. They promote community and connectedness whilst bypassing cumbersome institutions – based on the idea that we can do without big government or institutions, and without those values. These platforms also emphasise public values whilst obscuring private gain. These are promises claiming to be in the public interest… But that’s a paradox with hidden private gains.

And so how do we anchor collective, public values in a platform society, and how do we govern this? ? has the idea of governance of platforms as opposed to governance by platforms. Our governments are mainly concerned with governing platforms – regulation, privacy, etc. – and that is appropriate, but there are public values like fairness, accuracy, safety, privacy, transparency, democracy… Those values are increasingly being governed by platforms, and that governance is hidden from us in the algorithms and design decisions…

Who rules the platform society? Who are the stakeholders here? There are many platform societies of course, but who can be held accountable? Well, it is an intense ideological battleground… with private stakeholders like (global) corporations, businesses, (micro-)entrepreneurs, consumer groups and consumers; and public stakeholders like citizens, co-ops and collectives, NGOs, public institutions, governments and supra-national bodies… And matching those needs up is never going to happen really…

Who uses health apps here? (Many do.) In 2015 there were 165,000 health apps in the Google Play store. Most of them promise personalised health and, whilst that is in the future, they track data… They take data straight from the individual to companies, bypassing other actors and health providers… They manage a wide variety of data flows (patients, doctors, companies). There is a variety of business models, many of them particularly unclear. There is a site called “Patients like me” which says that it is “not just for profit” – so it is for profit, but not just for profit… Data has become currency in our health economy. And that private gain is hiding behind the public good argument. A few months ago in Holland we started to have insurance discounts (5%) if you send in your FitBit scores… But I think the next step will be paying more if you do not send your scores… That’s how public values change…

Finally we have regulation – government should be regulating security, safety, accuracy, and privacy. It takes the Dutch FDA six months to check the safety and accuracy of one app – and if it is updated, you have to start again! In the US, the Dept of Health and Human Services, the Office of the National Coordinator for Health Information Technology (ONC), the Office for Civil Rights (OCR) and the Food and Drug Administration (FDA) released a guide called “Developing a mobile health app?” providing guidance on which federal laws need to be followed. And we see not just insurers using apps, but insurers and healthcare providers having to buy data services from providers, and that changing the impact of these apps. You have things like 23 and Me, and those are global – which raises global regulation issues, so it is hard to govern around that. Our platform ecosystem is transnational, but governments are national. We also see platforms coming from technology companies – Philips was building physical kit, MRI machines, but it now models itself as a data company. What you see here is that the big five internet and technology players are also big players in this field – Google Health and 23 and Me (financed by Sergei Brin, run by his ex-wife), Apple HealthKit, etc. And even then you have small independent apps like mPower, but they are distributed via the app stores, led by the big players, and again, hard to govern.


We used to build trust in society through institutions and institutional norms and codes, which were subject to democratic controls. But these are increasingly bypassed… And that may be subtle, but it is going uncontrolled. So, how can we build trust in a platformed world? Well, we have to understand who rules the platform ecosystem, and understand how it is governed. And when you look at this globally you see competing ideological hemispheres… You see the US model of commercial values, and those are literally imposed on others. And you have Yandex and the Chinese model, and that’s an interesting model…

I think coming back to my main question: what do we do here to help? We can make visible how this platformised society works… So I did a presentation a few weeks ago and shared recommendations there for users:

  • Require transparency in platforms
  • Do not trade convenience for public values
  • Be vigilant, be informed

But can you expect individuals to understand how each app works and what its implications are? I think governments have a key role in protecting citizens’ rights here.

In terms of owners and developers my recommendations are:

  • Put long-term trust over short-term gain
  • Be transparent about data flows, business models, and governance structure
  • Help encode public values in platform architecture (e.g. privacy by design)

A few weeks back the New York Times ran an article on holding algorithms accountable, and I think that that is a useful idea.

I think my biggest recommendations are for governments, and they are:

  • Defend public values and the common good; negotiate public interests with platforms. Governments could also, for instance, legislate to manage demands and needs in how platforms work.
  • Upgrade regulatory institutions to deal with the digital constellations we are facing.
  • Develop (inter)national blueprints for a democratic platform society.

And we, as researchers, can help expose and explain the platform society so that it is understood and engaged with in a more knowledgeable way. Governments have a special responsibility to govern the networked society – right now it is a Wild West. We are struggling to resolve these issues, so how can we help govern the platforms that shape society, when the platforms themselves are so enormous and powerful? In Europe we see platforms that are mainly US-based private sector spaces, and they are threatening public sector organisations… It is important to think about how we build trust in that platform society…


Q1) You talked about private interests being concealed by public values, but you didn’t talk about private interests of incumbents…

A1) That is important of course. Those protests that I mentioned do raise some of those issues – undercutting prices by not paying for taxi drivers’ insurance, pensions, etc. In Europe those costs can be up to 50% of total costs, so what do we do with those public values, how do we pay for this? We’ll pay for it one way or the other. The incumbents do have their own vested interests… But there are also social values there… If we want to retain those values, though, we need to find a model for that… European economic models have had collective values inscribed in them… If that is outmoded, then fine, but how do we build those in in other ways…

Q2) I think in my context in Australia at least the Government is in cahoots with private companies, with public-private partnerships and security arms of government heavily benefitting from data collection and surveillance… I think that governments regulating these platforms is possible; I’m not sure that they will.

A2) A lot of governments are heavily invested in private industries… I am not anti-company or anti-government… My first goal is to make them aware of how this works… I am always surprised how little governments are aware of what runs underneath the promises and paradoxes… There is reluctance among regulators to work with companies, but there is also exhaustion and a lack of understanding about how to update regulations and processes. How can you update health regulations with 165k health apps out there? I probably am an optimist… But I want to ensure governments are aware of and understand how this is transforming society. There is so much ignorance in the field, and there is naivety about how this will play out. Yes, I’m an optimist. But there is something we can do to shape the direction in which the platform society will develop.

Q3) You have great faith in regulation, but there are real challenges and issues… There are many cases where governments have colluded with industry to inflate the costs of delivery. There is the idea of regulatory capture. Why should we expect regulators to act in the public interest when historically they act in the interest of private companies?

A3) It’s not that I put all my trust there… But I’m looking for a dialogue with whoever is involved in this space, in the contested play of where we start… Regulation is one of many actors in this whole contested battlefield. I don’t think we have the answers, but it is our job to explain the underlying mechanisms… And I’m pretty shocked by how little regulators know about the platforms and the underlying mechanisms there. Sometimes it’s hard to know where to start… But you have to make a start somewhere…