Oct 05 2016
 

If you’ve been following my blog today you will know that I’m in Berlin for the Association of Internet Researchers AoIR 2016 (#aoir2016) Conference, at Humboldt University. As this first day has mainly been about workshops – and I’ve been in a full-day Digital Methods workshop – we do have our first conference keynote this evening. And as it looks a bit different to my workshop blog, I thought a new post was in order.

As usual, this is a live blog post so corrections, comments, etc. are all welcomed. This session is also being videoed so you will probably want to refer to that once it becomes available as the authoritative record of the session. 

Keynote: The Platform Society – José van Dijck (University of Amsterdam) with Session Chair: Jennifer Stromer-Galley

We are having an introduction from Wolfgang (?) from Humboldt University, welcoming us and noting that AoIR 2016 has made the front page of a Berlin newspaper today! He also notes the hunger for internet governance information, understanding, etc. from German government and from Europe.

Wolfgang: The theme of “Internet Rules!” provides lots of opportunities for keynotes, discussions, etc. and it allows us to connect the ideas of internet and society without deterministic structures. I will now hand over to the session chair Cornelius Puschmann.

Cornelius: It falls to me to do the logistical stuff… But first we have 570 people registered for AoIR 2016  so we have a really big conference. And now the boring details… which I won’t blog in detail here, other than to note the hashtag list:

  • Official: #aoir2016
  • Rebel: #aoir16
  • Retro: #ir17
  • Tim Highfield: #itisthesevebeenthassociationofinternetresearchersconferenceanditishappeningin2016

And with that comes a reminder of some of the more experimental parts of the programme to come.

Jennifer: Huge thanks to all of my colleagues here for turning this crazy idea into this huge event with a record number of attendees! Thank you to Cornelius, our programme chair.

Now to introduce our speaker… José van Dijck, professor at the University of Amsterdam, who has also held visiting positions across the world. She is the first woman to hold the Presidency of the Royal Netherlands Academy of Arts and Sciences. Her most recent book is The Culture of Connectivity: A Critical History of Social Media. It takes a critical look back at social media and social networking, not only as social spaces but as business spaces. And her lecture tonight will give a preview of her forthcoming work on Public Values in a Platform Society.

José: It is lovely to be here, particularly on this rather strange day… I became President of the Royal Academy this year and today my colleague won the Nobel Prize in Chemistry – so instead of preparing for my keynote I have been dealing with press inquiries. It is nice to focus back on my real job…

So a few years ago Thomas Poell wrote an article on the politics of social platforms. His work on platforms inspired my own work on how networked platforms are interwoven into an ecology, economically and socially. When I wrote that book, the last chapter was on platforms, many of which have since become the main players… I talked about Google (now Alphabet), Facebook, Amazon, Microsoft, LinkedIn (now owned by Microsoft), Apple… And since then we’ve seen other players coming in and creating change – like Uber, AirBnB, Coursera. These platforms have become the gateways to our social life… And they have consolidated and expanded…

So a Platform is an online site that deploys automated technologies and business models to organise data streams, economic interactions, and social exchanges between users of the internet. That’s the core of the social theory I am using. Platforms ARE NOT simple facilitators, and they are not stand alone systems – they are interconnected.

And a Platform Ecosystem is an assemblage of networked platforms, governed by its own dynamics and operating on a set of mechanisms…

Now a couple of years ago Thomas and I wrote about platform mechanisms and the very important idea of “Datafication”. Commodification is about how a platform’s business model and governance define the way in which datafied information is transformed into (economic, societal) value. There are many business models and many governance models – they vary, but governance models are maybe more important than business models, and they can be hard to pin down. Selection is about data flows filtered by algorithms and bots, allowing for automated selection such as personalisation, rankings, reputation. Those mechanisms are not visible right now, and we need to make them explicit so that we can talk about them and their implications. Can we hold Facebook accountable for the News Feed in the way that traditional media are accountable? That’s an important question for us to consider…

The platform ecosystem is not a level playing field. Platforms are gaining traction not through money but through the number of users, and network effects mean that user numbers are the way we understand the size of a network. There is Platformisation (thanks Anna?) across sectors… And that power is gained through cross-ownership and cross-platform ties, but also through their architecture and shared platforms. In our book we’ll look at both private and public sectors and how they are penetrated by platform ecosystems. We used to have big oil companies, or big manufacturing companies… But now big companies operate across sectors.

So transport, for instance… Uber is huge, partly financed by Google and also in competition with Google. If we look at news as a sector we have Huffington Post, Buzzfeed, etc., and they also act as content distributors and aggregators for Google, Facebook, etc.

In health – a sector where platformisation is spreading fast – we see fitness and health apps, with Google and Apple major players here. And in your neighbourhood there are apps available – some of them global apps localised to your neighbourhood – sitting alongside massive players.

In Education we’ve seen the rise of Massive Online Open Courses, with Microsoft and Google investing heavily alongside players like EdX, Coursera, Udacity, FutureLearn, etc.

All of these sectors are undergoing platformisation… And if you look across them all, across all areas of private and public life, the activity revolves around the big five: Google, Facebook, Apple, Amazon and Microsoft, with LinkedIn and Twitter also important. And take, for example, AirBnB…

The platform society is a society in which social, economic and interpersonal traffic is largely channeled by an (overwhelmingly corporate) global online platform ecosystem that is driven by algorithms and fuelled by data. That’s not a revolution, it’s something we are part of and see every day.

Now we have promises of “participatory culture” and the euphoria of the idea of web 2.0, of individuals contributing. More recently that idea has shifted to the idea of the “sharing economy”… But sharing has shifted in its meaning too. It is about sharing resources or services for some sort of fee – a transaction-based idea. And from 2015 we see awareness of the negative sides of the sharing economy. So a Feb 2015 Time cover read: “Strangers crashed my car, ate my food and wore my pants. Tales from the sharing economy” – about the personal discomfort of the downsides. And we see Technology Quarterly writing about “When it’s not so good to share” – from the perspective of securing the property we share. But there is more at stake than personal discomfort…

We have started to see disruptive protest against private platforms, like posters against AirBnB. City councils have to hire more inspectors to regulate AirBnB hosts for safety reasons – a huge debate in Amsterdam now, where public values are changing as a consequence of so many AirBnB hosts in the city. And there are more protests about changing values… saying people are citizens not entrepreneurs, that the city is not for sale…

In another sector we see Uber protests, by various stakeholders. We see protests from licensed taxi drivers, raising safety issues and social values; but also protests by Uber’s own drivers. Uber do not call themselves a “transportation” company, instead calling themselves a connectivity company. And Uber drivers have complained that Uber doesn’t pay insurance or pensions…

So, AirBnB and Uber are changing public values, they haven’t anchored existing values in their own design and development. There are platform promises and paradoxes here… They offer personalised services whilst contributing to the public good… The idea is that they are better at providing services than existing players. They promote community and connectedness whilst bypassing cumbersome institutions – based on the idea that we can do without big government or institutions, and without those values. These platforms also emphasize public values, whilst obscuring private gain. These are promises claiming that they are in the public interest… But that’s a paradox with hidden private gains.

And so how do we anchor collective, public values in a platform society, and how do we govern this? ? has the idea of governance of platforms as opposed to governance by platforms. Our governments are mainly concerned with governing platforms – regulations, privacy, etc. – and that is appropriate, but there are public values like fairness, accuracy, safety, privacy, transparency, democracy… Those values are increasingly being governed by platforms, and that governance is hidden from us in algorithms and design decisions…

Who rules the platform society? Who are the stakeholders here? There are many platform societies of course, but who can be held accountable? Well it is an intense ideological battleground… With private stakeholders like (global) corporations, businesses, (micro-)entrepreneurs; consumer groups; consumers. And public stakeholders like citizens; co-ops and collectives, NGOs, public institutions, governments, supra-national bodies… And matching those needs up is never going to happen really…

Who uses health apps here? (Many do.) In 2015 there were 165,000 health apps in the Google Play store. Most of them promise personalised health and, whilst that is in the future, they track data… They take data straight from individuals to companies, bypassing other actors and health providers… They manage a wide variety of data flows (patients, doctors, companies). There is a variety of business models, many of them unclear. There is a site called “Patients like me” which says that it is “not just for profit” – so it is for profit, but not just for profit… Data has become currency in our health economy. And that private gain is hiding behind the public good argument. A few months ago in Holland we started to have insurance discounts (5%) if you send in your FitBit scores… But I think the next step will be paying more if you do not send your scores… That’s how public values change…

Finally we have regulation – government should be regulating security, safety, accuracy, and privacy. It takes the Dutch FDA 6 months to check the safety and accuracy of one app – and if it is updated, you have to start again! In the US the Dept of Health and Human Services, the Office of the National Coordinator for Health Information Technology (ONC), the Office for Civil Rights (OCR) and the Food and Drug Administration (FDA) released a guide called “Developing a mobile health app?” providing guidance on which federal laws need to be followed. And we see not just insurers using apps, but insurers and healthcare providers having to buy data services from providers, which changes the impact of these apps. You have things like 23andMe, and those are global – which raises global regulation issues – so it is hard to govern around that. Our platform ecosystem is transnational, but governments are national. We also see platforms coming from technology companies – Philips was building physical kit, MRI machines, but it now models itself as a data company. What you see here is that the big five internet and technology players are also big players in this field – Google Health and 23andMe (financed by Sergey Brin, run by his ex-wife), Apple HealthKit, etc. And even then you have small independent apps like mPower, but they are distributed via the app stores, led by big players and, again, hard to govern.

 

We used to build trust in society through institutions and institutional norms and codes, which were subject to democratic controls. But these are increasingly bypassed… And that may be subtle but it is going uncontrolled. So, how can we build trust in a platformed world? Well, we have to understand who rules the platform ecosystem, and understand how it is governed. And when you look at this globally you see competing ideological hemispheres… You see the US model of commercial values, and those are literally imposed on others. And you have Yandex and the Chinese model, and that’s an interesting model…

I think coming back to my main question: what do we do here to help? We can make visible how this platformised society works… So I did a presentation a few weeks ago and shared recommendations there for users:

  • Require transparency in platforms
  • Do not trade convenience for public values
  • Be vigilant, be informed

But can you expect individuals to understand how each app works and what its implications are? I think governments have a key role in protecting citizens’ rights here.

In terms of owners and developers my recommendations are:

  • Put long-term trust over short-term gain
  • Be transparent about data flows, business models, and governance structure
  • Help encode public values in platform architecture (e.g. privacy by design)

A few weeks back the New York Times ran an article on holding algorithms accountable, and I think that that is a useful idea.

I think my biggest recommendations are for governments, and they are:

  • Defend public values and the common good; negotiate public interests with platforms. What governments could also do is, for instance, legislate to manage demands and needs in how platforms work.
  • Upgrade regulatory institutions to deal with the digital constellations we are facing.
  • Develop (inter)national blueprint for a democratic platform society.

And we, as researchers, can help expose and explain the platform society so that it is understood and engaged with in a more knowledgeable way. Governments have a special responsibility to govern the networked society – right now it is a Wild West. We are struggling to resolve these issues, so how can we help govern the platforms that shape society, when the platforms themselves are so enormous and powerful? In Europe we see platforms that are mainly US-based private sector spaces, and they are threatening public sector organisations… It is important to think about how we build trust in that platform society…

Q&A

Q1) You talked about private interests being concealed by public values, but you didn’t talk about private interests of incumbents…

A1) That is important of course. Those protests that I mentioned do raise some of those issues – undercutting prices by not paying for taxi drivers’ insurance, pensions etc. In Europe those costs can be up to 50% of total costs, so what do we do with those public values, how do we pay for this? We’ll pay for it one way or the other. The incumbents do have their own vested interests… But there are also social values there… If we want to retain those values we need to find a model for that… European economic models have had collective values inscribed in them… If that is outmoded, then fine, but how do we build those values in in other ways…

Q2) I think in my context in Australia at least the Government is in cahoots with private companies, with public-private partnerships and security arms of government heavily benefitting from data collection and surveillance… I think that government regulating these platforms is possible, I’m not sure that they will.

A2) A lot of governments are heavily invested in private industries… I am not anti-companies or anti-government… My first goal is to make them aware of how this works… I am always surprised how little governments are aware of what runs underneath the promises and paradoxes… There is reluctance from regulators to work with companies, but there is also exhaustion and a lack of understanding about how to update regulations and processes. How can you update health regulations with 165k health apps out there? I probably am an optimist… But I want to ensure governments are aware and understand how this is transforming society. There is so much ignorance in the field, and there is naivety about how this will play out. Yes, I’m an optimist. But there is something we can do to shape the direction in which the platform society will develop.

Q3) You have great faith in regulation, but there are real challenges and issues… There are many cases where governments have colluded with industry to inflate the costs of delivery. There is the idea of regulatory capture. Why should we expect regulators to act in the public interest when historically they have acted in the interest of private companies?

A3) It’s not that I put all my trust there… But I’m looking for a dialogue with whoever is involved in this space, in the contested play of where we start… It is one of many actors in this whole contested battlefield. I don’t think we have the answers, but it is our job to explain the underlying mechanisms… And I’m pretty shocked by how little they know about the platforms and the underlying mechanisms there. Sometimes it’s hard to know where to start… But you have to make a start somewhere…

Oct 05 2016
 

After a few weeks of leave I’m now back and spending most of this week at the Association of Internet Researchers (AoIR) Conference 2016. I’m hugely excited to be here as the programme looks excellent with a really wide range of internet research being presented and discussed. I’ll be liveblogging throughout the week starting with today’s workshops.

This is a liveblog so all corrections, updates, links, etc. are very much welcomed – just leave me a comment, drop me an email or similar to flag them up!

I am booked into the Digital Methods in Internet Research: A Sampling Menu workshop, although I may be switching session at lunchtime to attend the Internet rules… for Higher Education workshop this afternoon.

The Digital Methods workshop is being chaired by Patrik Wikstrom (Digital Media Research Centre, Queensland University of Technology, Australia) and the speakers are:

  • Erik Borra (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Axel Bruns (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Jean Burgess (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Carolin Gerlitz (University of Siegen, Germany),
  • Anne Helmond (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Ariadna Matamoros Fernandez (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Peta Mitchell (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Richard Rogers (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Fernando N. van der Vlist (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Esther Weltevrede (Digital Methods Initiative, University of Amsterdam, the Netherlands).

I’ll be taking notes throughout but the session materials are also available here: http://tinyurl.com/aoir2016-digmethods/.

Patrik: We are in for a long and exciting day! I won’t introduce all the speakers as we won’t have time!

Conceptual Introduction: Situating Digital Methods (Richard Rogers)

My name is Richard Rogers, I’m professor of new media and digital culture at the University of Amsterdam and I have the pleasure of introducing today’s session. So I’m going to do two things, I’ll be situating digital methods in internet-related research, and then taking you through some digital methods.

I would like to situate digital methods as a third era of internet research… I think all of these eras thrive and overlap but they are differentiated.

  1. Web of Cyberspace (1994-2000): Cyberstudies was an effort to see difference in the internet, the virtual as distinct from the real. I’d situate this largely in the 90’s and the work of Steve Jones and Steve (?).
  2. Web as Virtual Society? (2000-2007) saw virtual as part of the real. Offline as baseline and “virtual methods” with work around the digital economy, the digital divide…
  3. Web as societal data (2007-) is about “virtual as indication of the real”. Online as baseline.

Right now we use online data about society and culture to make “grounded” claims.

So, if we look at Allrecipes.com Thanksgiving recipe searches on a map we get some idea of regional preference, or we look at Google data in more depth, we get this idea of internet data as grounding for understanding culture, society, tastes.

So, we had this turn in around 2008 to “web as data” as a concept. When this idea was first introduced not all were comfortable with the concept. Mike Thelwall et al (2005) talked about the importance of grounding the data from the internet. So, for instance, Google’s flu trends can be compared to Wikipedia traffic etc. And with these trends we also get the idea of “the internet knows first”, with the web predicting other sources of data.

Now I do want to talk about digital methods in the context of digital humanities data and methods. Lev Manovich talks about Cultural Analytics. It is concerned with digitised cultural materials with materials clusterable in a sort of art historical way – by hue, style, etc. And so this is a sort of big data approach that substitutes “continuous change” for periodisation and categorisation for continuation. So, this approach can, for instance, be applied to Instagram (Selfiexploration), looking at mood, aesthetics, etc. And then we have Culturenomics, mainly through the Google Ngram Viewer. A lot of linguists use this to understand subtle differences as part of distant reading of large corpora.

And I also want to talk about e-social sciences data and method. Here we have Webometrics (Thelwall et al) with links as reputational markers. The other tradition here is Altmetrics (Priem et al), which uses online data to do citation analysis, with social media data.

So, at least initially, the idea behind digital methods was to be in a different space. The study of online digital objects, and also natively online method – methods developed for the medium. And natively digital is meant in a computing sense here. In computing software has a native mode when it is written for a specific processor, so these are methods specifically created for the digital medium. We also have digitized methods, those which have been imported and migrated methods adapted slightly to the online.

Generally speaking there is a sort of protocol for digital methods: Which objects and data are available? (links, tags, timestamps); how do dominant devices handle them? etc.

I will talk about some methods here:

1. Hyperlink

For the hyperlink analysis there are several methods. The Issue Crawler software, still running and working, enables you to see links between pages, the direction of linking, aspirational linking… For example a visualisation of an Armenian NGO shows the dynamics of an issue network, showing the politics of association.

The other method that can be used here takes a list of sensitive sites, gathered using Issue Crawler, then parses it through an internet censorship service. And there are variations on this that indicate how successful attempts at internet censorship are. We do work on Iran and China and I should say that we are always quite thoughtful about how we publish these results because of their sensitivity.

2. The website as archived object

We have the Internet Archive and we have individual archived web sites. Both are useful but researcher use is not terribly significant, so we have been doing work on this. See also a YouTube video called “Google and the politics of tabs” – a technique to create a movie of the evolution of a webpage in the style of timelapse photography. I will be publishing soon about this technique.

But we have also been looking at historical hyperlink analysis – giving you that context that you won’t see represented in archives directly. This shows the connections between sites at a previous point in time. We also discovered that the “Ghostery” plugin can also be used with archived websites – for trackers and for code. So you can see the evolution and use of trackers on any website/set of websites.

6. Wikipedia as cultural reference

Note: the numbering is from a headline list of 10, hence the odd numbering… 

We have been looking at the evolution of Wikipedia pages, understanding how they change. It seems that pages shift from neutral to national points of view… So we looked at Srebrenica and how that is represented. The pages here have different names, indicating differences in the politics of memory and reconciliation. We have developed a triangulation tool that grabs links and references and compares them across different pages. We also developed comparative image analysis that lets you see which images are shared across articles.

7. Facebook and other social networking sites

Facebook is, as you probably well know, a social media platform that is relatively difficult to pin down at a moment in time. If you try to pin down the history of Facebook you find that very hard – it hasn’t been in the Internet Archive for four years, and the site changes all the time. We have developed two approaches: one uses social media profiles and interest data as a means of studying cultural taste and political preference, or “Postdemographics”; and “Networked content analysis” uses social media activity data as a means of studying the “most engaged with content” – which helps with the fact that profiles are no longer available via the API. To some extent the API drives the research, but taking a digital methods approach we need to work with the medium, and find which possibilities are there for research.

So, one of the projects undertaken in this space was elFriendo, a MySpace-based project which looked at the cultural tastes of “friends” of Obama and McCain during their presidential race. For instance Obama’s friends best liked Lost and The Daily Show on TV; McCain’s liked Desperate Housewives, America’s Next Top Model, etc. Very different cultures and interests.

Now the Networked Content Analysis approach, where you quantify and then analyse, works well with Facebook. You can look at pages and use data from the API to understand the pages and groups that liked each other, to compare memberships of groups etc. (at the time you were able to do this). In this process you could see specific administrator names, and we did this with right wing data working with a group called Hope not Hate, who recognised many of the names that emerged here. Looking at most liked content from groups you also see the shared values, cultural issues, etc.

So, you could see two eras of Facebook studies: Facebook I (2006-2011), about presentation of self – profiles and interests studies (with ethics); and Facebook II (2011-), which is more about social movements. I think many social media platforms are following this shift – or would like to. So in Instagram studies, Instagram I (2010-2014) was about selfie culture, but it has shifted to Instagram II (2014-), concerned with antagonistic hashtag use for instance.

Twitter has done this and gone further… Twitter I (2006-2009) was about an urban lifestyle tool (its origins) and “banal” lunch tweets – the tagline was “what are you doing?”, a connectivist space; Twitter II (2009-2012) moved to elections, disasters and revolutions – the tagline became “what’s happening?” and we got metrics and “trending topics”; Twitter III (2012-) sees Twitter as a generic resource tool with commodification of data, stock market predictions, elections, etc.

So, I want to finish by talking about work on Twitter as a storytelling machine for remote event analysis. This is an approach we developed some years ago around the Iran election crisis. We made a tweet collection around a single Twitter hashtag – which is no longer done – then ordered it by most retweeted (top 3 for each day) and presented it in chronological (not reverse) order. And we then showed those in huge displays around the world…

To take you back to June 2009… Mousavi holds an emergency press conference. Voter turnout is 80%. SMS is down. Mousavi’s website and Facebook are blocked. Police use pepper spray… The first 20 days of most popular tweets make a good succinct summary of the events.

So, I’ve taken you on a whistle-stop tour of methods. I don’t know if we are coming to the end of this. I was having a conversation the other day about how the Web 2.0 days are over really – the idea that the web is readily accessible, that APIs and data are there to be scraped… That’s really changing. This is one of the reasons the app space is so hard to research. We are moving again to user studies to an extent. What the Chinese researchers are doing involves convoluted processes to get the data, for instance. But there are so many areas of research that can still be done. Issue Crawler is still out there and other tools are available at tools.digitalmethods.net.

Twitter studies with DMI-TCAT (Fernando van der Vlist and Emile den Tex)

Fernando: I’m going to be talking about how we can use the DMI-TCAT tool to do Twitter studies. I am here with Emile den Tex, one of the original developers of this tool, alongside Erik Borra.

So, what is DMI-TCAT? It is the Digital Methods Initiative Twitter Capture and Analysis Toolset, a server-side tool which aims to provide robust and reproducible data capture and analysis. The design is based on two ideas: that captured datasets can be refined in different ways; and that the datasets can be analysed in different ways. Although we developed this tool, it is also in use elsewhere, particularly in the US and Australia.

So, how do we actually capture Twitter data? Some of you will have some experience of trying to do this. As researchers we don’t just want the data, we also want to look at the platform in itself. If you are in industry you get Twitter data through a “data partner”, the biggest of which by far is GNIP – owned by Twitter as of the last two years – then you just pay for it. But it is pricey. If you are a researcher you can go to an academic data partner – DiscoverText or Hexagon – and they are also resellers but they are less costly. And then the third route is the publicly available data – REST APIs, Search API, Streaming APIs. These are, to an extent, the authentic user perspective as most people use these… We have built around these but the available data and APIs shape and constrain the design and the data.

For instance the “Search API” prioritises “relevance” over “completeness” – but as academics we don’t know how “relevance” is being defined here. If you want to do representative research then completeness may be most important. If you want to look at how Twitter prioritises data, then the Search API may be most relevant. You also have to understand rate limits… These can constrain research, as different data has different rate limits.
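Note: not from the talk, but as a rough illustration of the publicly available route, here is a minimal Python sketch of collecting tweets from the Search API with the tweepy library. It assumes 2016-era v1.1 credentials; the query, keys and filename are placeholders of mine, not TCAT’s.

    # Minimal sketch: collecting tweets via the public Search API with tweepy.
    # Assumes 2016-era Twitter API v1.1 and your own app credentials (placeholders below).
    import json
    import tweepy

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

    # wait_on_rate_limit makes tweepy sleep when a rate limit is hit,
    # rather than failing mid-collection.
    api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)

    with open("cop21_search_sample.jsonl", "w") as outfile:
        # The Search API returns a relevance-filtered sample of recent tweets,
        # not a complete record - exactly the limitation discussed above.
        for tweet in tweepy.Cursor(api.search, q="#COP21", count=100).items(1000):
            outfile.write(json.dumps(tweet._json) + "\n")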

So there are many layers of technical mediation here, across three big actors: Twitter platform – and the APIs and technical data interfaces; DMI-TCAT (extraction); Output types. And those APIs and technical data interfaces are significant mediators here, and important to understand their implications in our work as researchers.

So, onto the DMI-TCAT tool itself – more on this in Borra & Rieder (2014) (doi:10.1108/AJIM-09-2013-0094). They talk about “programmed method” and the methodological implications of the technical architecture.

What can one learn if one looks at Twitter through this “programmed method”? Well, (1) Twitter users can change their Twitter handle, but their ids will remain identical – that sounds basic but it’s important to understand when collecting data. (2) The length of a tweet may vary beyond the maximum of 140 characters (mentions and urls). (3) Native retweets may have their top-level text property shortened. (4) There are unexpected limitations – support for new emoji characters can be problematic, for example. (5) It is possible to retrieve a deleted tweet.

So, for example, a tweet can vary beyond 140 characters. The retweet of an original post may be abbreviated… Now we don’t want that, we want it to look as it would to a user. So, we capture the non-truncated version in our tool.
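Note: a small sketch (mine, not TCAT’s actual code) of what capturing the non-truncated version can look like when working with the raw v1.1 JSON:

    # Sketch: recovering the full text of a native retweet from raw v1.1 JSON.
    # The top-level "text" of a retweet is prefixed with "RT @user:" and may be
    # cut off; the full original lives in "retweeted_status".
    def full_text(tweet):
        if "retweeted_status" in tweet:
            original = tweet["retweeted_status"]
            return "RT @{}: {}".format(original["user"]["screen_name"],
                                       original["text"])
        return tweet["text"]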

And, on the issue of deletion and withholding: there are tweets deleted by users, and there are tweets which are withheld by the platform – and withholding is a country-by-country issue, so you can see tweets that are only available in some countries. A project that uses this information is “Politwoops” (http://politwoops.sunlightfoundation.com/) which captures tweets deleted by US politicians, and lets you filter to specific states, party, position. Now there is an ethical discussion to be had here… We don’t know why tweets are deleted… We could at least talk about it.

So, the tool captures Twitter data in two ways. Firstly there are the direct capture capabilities (via the web front-end) which allow tracking of users and capture of public tweets posted by those users; tracking particular terms or keywords, including hashtags; and getting a small random sample (approx. 1%) of all public statuses. Secondary capture capabilities (via scripts) allow further exploration, including user ids, deleted tweets etc.

Twitter as a platform has a very formalised idea of sociality, the types of connections, parameters, etc. When we use the term “user” we mean it in the platform defined object meaning of the word.

Secondary analytical capabilities, via script, also allows further work:

  1. Support for geographical polygons to delineate geographical regions for tracking particular terms or keywords, including hashtags.
  2. A built-in URL expander, following shortened URLs to their destination, allowing further analysis, including of which statuses are pointing to the same URLs (see the short sketch after this list).
  3. Downloading media (e.g. videos and images) attached to particular tweets.
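Note: the built-in URL expander in point 2 is TCAT’s own; as a rough illustration of the idea, here is a minimal Python sketch using the requests library to follow redirects (the example URL is made up).

    # Sketch of a URL expander: follow shortened links (t.co, bit.ly, ...) to their
    # destination so that different short URLs pointing at the same page can be
    # grouped together.
    import requests

    def expand_url(short_url, timeout=10):
        try:
            # A HEAD request with redirects enabled usually reaches the final
            # destination without downloading the page body.
            response = requests.head(short_url, allow_redirects=True, timeout=timeout)
            return response.url
        except requests.RequestException:
            return short_url  # fall back to the original if expansion fails

    print(expand_url("https://t.co/example"))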

So, we have this tool but what sort of studies might we do with Twitter? Some ideas to get you thinking (a small counting sketch for the first two follows the list):

  1. Hashtag analysis – users, devices etc. Why? They are often embedded in social issues.
  2. Mentions analysis – users mentioned in contexts, associations, etc. allowing you to e.g. identify expertise.
  3. Retweet analysis – most retweeted per day.
  4. URL analysis – the content that is most referenced.
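Note: as a rough illustration of the first two ideas, a small Python sketch counting hashtags and mentions over a file of captured tweets (one JSON object per line, as in the earlier Search API sketch). TCAT does this server-side; the filename is a placeholder.

    # Sketch: quick hashtag and mention counts over captured tweets.
    import json
    from collections import Counter

    hashtags, mentions = Counter(), Counter()

    with open("cop21_search_sample.jsonl") as infile:
        for line in infile:
            tweet = json.loads(line)
            entities = tweet.get("entities", {})
            hashtags.update(h["text"].lower() for h in entities.get("hashtags", []))
            mentions.update(m["screen_name"] for m in entities.get("user_mentions", []))

    print(hashtags.most_common(10))   # most used hashtags
    print(mentions.most_common(10))   # most mentioned accounts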

So Emile will now go through the tool and how you’d use it in this way…

Emile: I’m going to walk through some main features of the DMI TCAT tool. We are going to use a demo site (http://tcatdemo.emiledentex.nl/analysis/) and look at some Trump tweets…

Note: I won’t blog everything here as it is a walkthrough, but we are playing with timestamps (the tool uses UTC), search terms etc. We are exploring hashtag frequency… In that list you can see Benghazi, tpp, etc. Now, once you see a common hashtag, you can go back and query the dataset again for that hashtag/search term… And you can filter down… And look at “identical tweets” to find the most retweeted content.

Emile: Erik called this a list-making tool – it sounds dull but it is so useful… And you can then put the data through other tools. You can put tweets into Gephi. Or you can do exploration… We looked at the Getty Parks project, scraped images, reverse Google-image-searched those images to find the originals, checked the metadata for the camera used, and investigated whether the cost of a camera was related to success in distributing an image…

Richard: It was a critique of user generated content.

Analysing Social Media Data with TCAT and Tableau (Axel Bruns)

My talk should be a good follow on from the previous presentation as I’ll be looking at what you can do with TCAT data outside and beyond the tool. Before I start I should say that both Amsterdam and QUT are holding summer schools – and we have different summers! – so do have a look at those.

You’ve already heard about TCAT so I won’t talk more about that except to talk about the parts of TCAT I have been using.

TCAT Data Export allows you to export all tweets from a selection – containing all of the tweets and information about them. You can also export a table of hashtags – tweet ids from your selection and hashtags; and mentions – tweet ids from your selection with mentions and mention type. You can export other things as well – known users (politicians, celebrities, etc.); URLs; etc. And the structure that emerges is the main TCAT export file (“full export”) and the associated hashtags, mentions, and any other additional data. If you are familiar with SQL you are essentially joining databases here. If not then that’s fine, Tableau does this for you.

In terms of processing the data there are a number of tools here. Excel just isn’t good enough at scale – limited to 100,000 rows and that Trump dataset was 2.8 M already. So a tool that I and many others have been working with is Tableau. It’s a tool that copes with scale, it’s user-friendly, intuitive, all-purpose data analytics tool, but the downside is that it is not free (unless you are a student or are using it in teaching). Alongside that, for network visualisation, Gephi is the main tool at the moment. That’s open source and free and a new version came out in December.

So, into Tableau and an idea of what we can do with the data… Tableau enables you to work with data sources of any form – databases, spreadsheets, etc. So I have connected the full export I’ve got from TCAT… I have linked the main file to the hashtag and mention files. Then I have also generated an additional file that expands the URLs in that data source (you can now do this in TCAT too). This is a left join – one main table that the other tables are connected to, joined on the (tweet) id. And the dataset I’m showing here is from the Paris 2015 UN Climate Change conference. And all the steps I’m going through today are in a PDF guidebook that is available at that session resources link (http://tinyurl.com/aoir2016-digmethods/).
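Note: not part of the walkthrough, but for anyone scripting this instead of using Tableau, here is a rough pandas sketch of the same left join. The filenames and column names are assumptions based on a typical TCAT export and may differ from your files.

    # Sketch of the same left join in pandas rather than Tableau.
    import pandas as pd

    tweets = pd.read_csv("tcat_full_export.csv")      # one row per tweet
    hashtags = pd.read_csv("tcat_hashtags.csv")       # tweet id + hashtag
    mentions = pd.read_csv("tcat_mentions.csv")       # tweet id + mention + type

    # Left joins on the tweet id: every tweet is kept, and a row repeats for each
    # hashtag/mention it carries - the duplication issue discussed below.
    merged = (tweets
              .merge(hashtags, on="id", how="left")
              .merge(mentions, on="id", how="left"))

    print(merged["id"].nunique(), "unique tweets,", len(merged), "joined rows")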

Tableau then tries to make sense of the data… Dimensions are the datasets which have been brought in, clicking on those reveals columns in the data, and then you see Measures – countable features in the data. Tableau makes sense of the file itself, although it won’t always guess correctly.

Now, we’ve joined the data here so that can mean we get repetition… If a tweet has 6 hashtags, it might seem to be 6 tweets. So I’m going to use the unique tweet ids as a measure. And I’ll also right click to ensure this is a distinct count.

Having done that I can begin to visualise my data and see a count of tweets in my dataset… And I can see when they were created – using Created at but also then finessing that to Hour (rather than default of Year). Now when I look at that dataset I see a peak at 10pm… That seems unlikely… And it’s because TCAT is running on Brisbane time, so I need to shift to CET time as these tweets were concerned with events in Paris. So I create a new Formula called CET, and I’ll set it to be “DateAdd (‘hour’, -9, [Created at])” – which simply allows us to take 9 hours off the time to bring it to the correct timezone. Having done that the spike is 3.40pm, and that makes a lot more sense!
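Note: a rough pandas equivalent (not from the talk) of that timezone correction and the hourly counts; the -9 hour shift mirrors the DateAdd formula above, and the filename and column names are assumed.

    # Sketch: timezone correction and tweets-per-hour in pandas.
    # The -9 hour shift mirrors DateAdd('hour', -9, [Created at]) above
    # (Brisbane to CET); adjust for your own TCAT server's timezone.
    import pandas as pd

    tweets = pd.read_csv("tcat_full_export.csv", parse_dates=["created_at"])
    tweets["created_cet"] = tweets["created_at"] - pd.Timedelta(hours=9)

    per_hour = tweets.set_index("created_cet").resample("H")["id"].count()
    print(per_hour.idxmax(), per_hour.max())   # hour with peak activity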

Having generated that graph I can click on, say, the peak activity and see the number of tweets and the tweets that appeared. You can see some spam there – of course – but also widely retweeted tweet from the White House, tweets showing that Twitter has created a new emoji for the summit, a tweet from the Space Station. This gives you a first quick visual inspection of what is taking place… And you can also identify moments to drill down to in further depth.

I might want to compare Twitter activity with the number of participating users, comparing tweet counts with unique user counts (synchronising the axes for scale). Doing that we do see that there are more tweets when more users are active… But there is also a spike that is independent of that. And that spike seems to be generated by Twitter users tweeting more – around something significant perhaps – something that triggers attention and activity.

So, this tool enables quantitative data analysis as a starting point or related route into qualitative analysis, the approaches are really inter-related. Quickly assessing this data enables more investigation and exploration.

Now I’m going to look at hashtags, seeing their volume against activity. By default the hashtags are ordered alphabetically, but that isn’t that useful, so I’m going to reorder by use. When I do that you can see that COP21 – the official hashtag – is by far the most popular. These tweets were generated from that hashtag but also from several search terms for the conference – official abbreviations for the event. And indeed some tweets have “Null” hashtags – no hashtags, just the search terms. You also see variance in spelling and capitalisation. Unlike Twitter, Tableau is case-sensitive, so I need to use some sort of formula to resolve this – combining terms into one hashtag. A quick way to do that is to use “LOWER([Hashtag])”, which converts all data in the hashtag field to lower case. That clustering shows COP21 as an even bigger hashtag, but also identifies other popular terms. We do see spikes in a given hashtag – often very brief – and these are often related to one very popular and heavily retweeted tweet. So, e.g. when a prominent actor/figure tweets – in this data set Cara Delevingne (a British supermodel) – it triggers a short sharp spike in tweets/retweets.
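Note: the pandas equivalent of that LOWER() trick, as a two-line sketch (file and column names assumed):

    # Sketch: normalise case before counting, so "COP21", "cop21" and "Cop21"
    # collapse into one hashtag (pandas equivalent of LOWER in Tableau).
    import pandas as pd

    hashtags = pd.read_csv("tcat_hashtags.csv")
    print(hashtags["hashtag"].str.lower().value_counts().head(20))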

And we can see these hashtags here, their relative popularity. But remember that my dataset is just based on what I asked TCAT to collect… TCOT might be a really big hashtag but maybe they don’t usually mention my search terms, hence being smaller in my data set. So, don’t be fooled into assuming some of the hashtags are small/low use just because they may not be prominent in a collected dataset.

Turning now to Mentions… We can see several Mention Types: original/null (no mentions); mentions; retweet. You also see that mentions and retweets spikes at particular moments – tweets going viral, key figures getting involved in the event or the tweeting, it all gives you a sense of the choreography of the event…

So, we can now look at who is being mentioned. I’m going to take all Twitter users in my dataset… I’ll see how many tweets mention them. I have a huge Null group here – no mentions – so I’ll start by removing that. The most mentioned accounts we see COP21 being the biggest mentioned account, and others such as Narendra Modi (chair of event?), POTUS, UNFCCC, Francois Hollande, the UN, Mashi Rafael, COP21en – the English language event account; EPN – Justin Trudeau; StationCDRKelly; C Figueres; India4Climate; Barack Obama’s personal account, etc. And I can also see what kind of mention they get. And you see that POTUS gets mentions but no retweets, whilst Barack Obama has a few retweets but mainly mentions. That doesn’t mean he doesn’t get retweets, but not in this dataset/search terms. By contrast Station Commander Kelly gets almost exclusively retweets… The balance of mentions, how people are mentioned, what gets retweeting etc… That is all a starting point for closer reading and qualitative analysis.

And now I want to look at who tweets the most… And you’ll see that there is very little overlap between the people who tweet the most, and the people who are mentioned and retweeted. The one account there that appears in both is COP21 – the event itself. Now some of the most active users are spammers and bots… But others will be obsessive, super-active users… Further analysis lets you dig further. Having looked at this list, I can look at what sort of tweets these users are sending… And that may look a bit different… This uses the Mention type and it may be that one tweet mentions multiple users, so get counted multiple times… So, for instance, DiploMix puts out 372 tweets… But when re-looked at for mentions and retweets we see a count of 636. That’s an issue you have to get your head around a bit… And the same issue occurs with hashtags. Looking at the types of tweets put out show some who post only or mainly original tweets, some who do mention others, some only or mainly retweet – perhaps bots or automated accounts. For instance DiploMix retweets diplomats and politicians. RelaxinParis is a bot retweeting everything on Paris – not useful for analysis, but part of lived experience of Twitter of course.

So, I have lots of views of data, and sheets saved here. You can export tables and graphs for publications too, which is very helpful.

I’m going to finish by looking at URLs mentioned… I’ve expanded these myself, and I’ve got the domain/path as well as the domain captured. I remove the NULL group here. And the most popular linked to domain is Twitter – I’m going to combine http and https versions in Tableau – but Youtube, UN, Leader of Iran, etc. are most popular. If I dig further into the Twitter domains, looking at Path, I can see whose accounts/profiles etc. are most linked to. If I dig into Station Commander Kelly you see that the most shared of these URLs are images… And we can look at that… And that’s a tweet we had already seen all day – a very widely shared image of a view of earth.

My time is up but I’m hoping this has been useful… This is the sort of approach I would take – exploring the data, using this as an entry point for more qualitative data analysis.

Analysing Network Dynamics with Agent Based Models (Patrik Wikström)

I will be talking about network dynamics and how we can understand some of the theory of network dynamics. And before I start a reminder that you can access and download all these materials at the URL for the session.

So, what are network dynamics? Well we’ve already seen graphs and visualisations of things that change over time. Network dynamics are very much about things that change and develop over time… So when we look at a corpus of tweets they are not all simultaneous, there is a dimension of time… And we have people responding to each other, to what they see around them, etc. So, how can we understand what goes on? We are interested in human behaviour, social behaviour, the emergence of norms and institutions, information diffusion patterns across multiple networks, etc. And these are complex and related to time, we have to take time into account. We also have to understand how macro level patterns emerge from local interactions between heterogenous agents, and how macro level patterns influence and impact upon those interactions. But this is hard…

It is difficult to capture complexity of such dynamic phenomena with verbal or conceptual models (or with static statistical models). And we can be seduced by big data. So I will be talking about using particular models, agent-based models. But what is that? Well it’s essentially a computer program, or a computer program for each agent… That allows it to be heterogeneous, autonomous and to interact with the environment and with other agents; that means they can interact in a (physical) space or as nodes in a network; and we can allow them to have (limited) perception, memory and cognition, etc. That’s something it is very hard for us to do and imagine with our own human brains when we look at large data sets.

The fundamental goal of this model is to develop a model that represents theoretical constructs, logics and assumptions and we want to be able to replicate the observed real-world behaviour. This is the same kind of approach that we use in most of our work.

So, a simple example…

Let’s assume that we start with some inductive idea. So we want to explain the emergence of the different social media network structures we observe. We might want some macro-level observations of Structure – clusters, path lengths, degree distributions, size; Time – growth, decline, cyclic; Behaviours – contagion, diffusion. So we want to build some kind of model to transfer or take our assumptions of what is going on, and translate that into a computer model…

So, what are our assumptions?

Well, let’s say we think people use different strategies when they decide which accounts to follow, with factors such as familiarity, similarity, activity, popularity, randomness… These may all be different explanations of why I connect with one person rather than another… And let’s also assume that when a user joins Twitter they immediately start following a set of accounts, and once part of the network they add more. And let’s also assume that people are different – that’s really important! People are interested in different things – they have different passions, topics that interest them; some are more active, some are more passive. And that’s something we want to capture.

So, to do this I’m going to use something called NetLogo – which some of you may have already played with – a tool developed maybe 25 years back at Northwestern University. You can download it – or use a limited browser-based version – from: http://ccl.northwestern.edu/netlogo/.

In NetLogo we start with a 3 node network… I initialise the network and get three new nodes. Then I can add a new node… In this model I have a slider for “randomness” – if I set it to less random, it picks existing popular nodes, in the middle it combines popularity with randomness, and at most random it just adds nodes randomly…

So, I can run a simulation with about 200 nodes with randomness set to maximum… You can see how many nodes are present, how many friends the most popular node has, and how many nodes have very few friends (3, which is the minimum number of connections in this model). If I now change the formation strategy here and set randomness to zero… then we see the nodes connecting back to the same most popular nodes… A more broadcast-like network. This is a totally different kind of network.
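Note: the model shown is in NetLogo; purely as an illustration of the same formation logic, here is a rough Python/networkx sketch (not the actual NetLogo code) where each new node makes 3 connections, chosen at random or by popularity depending on a randomness parameter.

    # Rough sketch of the formation model described above. Illustration only.
    import random
    import networkx as nx

    def grow_network(n_nodes=200, links_per_node=3, randomness=1.0, seed=42):
        rng = random.Random(seed)
        graph = nx.complete_graph(3)                 # initialise with 3 connected nodes
        for new_node in range(3, n_nodes):
            targets = set()
            while len(targets) < links_per_node:
                if rng.random() < randomness:
                    # random attachment
                    targets.add(rng.choice(list(graph.nodes)))
                else:
                    # preferential attachment: pick existing nodes in proportion to degree
                    nodes, degrees = zip(*graph.degree())
                    targets.add(rng.choices(nodes, weights=degrees, k=1)[0])
            graph.add_edges_from((new_node, t) for t in targets)
        return graph

    random_net = grow_network(randomness=1.0)        # flat, random network
    broadcast_net = grow_network(randomness=0.0)     # hub-dominated, broadcast-like network
    print(max(dict(random_net.degree()).values()),
          max(dict(broadcast_net.degree()).values()))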

Now, another simulation here toggles the size of nodes to represent number of followers… Larger blobs represent really popular nodes… So if I run this in random mode again, you’ll see it looks very different…

So, why am I showing you this? Well, I wanted to show a really simple model. This is maybe 50 lines of code – you could build it in a few hours. The first message is that it is easy to build this kind of model. And even though we have a simple model we have at least 200 agents… We normally work with thousands or at much greater scale, but you can still learn something here. You can see how to replicate the structure of a network. Maybe it is a starting point that requires more data to be added, but it is a place to start and explore. Even though it is a simple model you can use it to build theory, to guide data collection and so forth.

So, having developed a model you can set up a simulation to run hundreds of times, to analyse with your data analytics tools… So I’ve run my 200-node network through 5000 simulations, comparing randomness against the maximum number of links to a node – helping us understand how different formation strategies create different structures. And that’s interesting but it doesn’t take us all the way. So I’d like to show you a different model that takes this a little bit further…

This model is an extension of the previous model – with all the previous assumptions – so you have the two formation strategies, but also the other assumptions we were talking about… that I am more inclined to connect to accounts with shared interests. With that we generate a simulation which is perhaps a better representation of the kinds of network we might see. And this accommodates the idea that the network has content, sharing, and other aspects that inform what is going on in its formation. This visualisation looks pretty, but the useful part is the output you can get at an aggregate level… We are looking at the population level, seeing how interactions at local levels influence macro-level patterns and behaviours… We can look at in-degree distribution, we can look at out-degree… We can look at local clustering coefficients, longest/shortest path, etc. And my assumptions might be plausible and reasonable…

So you can build models that give a much deeper understanding of real world dynamics… We are building an artificial network BUT you can combine this with real world data – load a real world network structure into the model and look at diffusion within that network, and understand what happens when one node posts something, what impact would that have, what information diffusion would that have…

So I’ve shown you NetLogo to play with these models. If you want to play around, that’s a great first step. It’s easy to get started with and it has been developed for use in educational settings. There is a big community and lots of models to use. And if you download NetLogo you can download that library of models. Pretty soon, however, I think you’ll find it too limited. There are many other tools you can use… But in general you can use any programming language that you want… Repast and Mason are very common tools. And they are based on Java or C++. You can also use an ABM Python module.

In the folder for this session there are some papers that give a good introduction to agent-based modelling… If we think about agent-based modelling and network theory there are some books I would recommend: Namatame & Chen, Agent-Based Modeling and Network Dynamics. For ABM look at Miller & Scott; Gilbert & Troitzsch; Epstein. For network theory look at Jackson; Watts (& Strogatz); Barabasi.

So, three things:

Simplify! – You don’t need millions of agents. A simple model can be more powerful than a realistic one

Iterate! – Start simple and, as needed, build up complexity, add more features, but only if necessary.

Validate? – You can build models in a speculative way to guide research, to inform data collection… You don’t always have to validate that model as it may be a tool for your thinking. But validation is important if you want to be able to replicate and ensure relevance in the real world.

We started talking about data collection, analysis, and how we build theory based on the data we collect. After lunch we will continue with Carolin, Anne and Fernando on Tracking the Trackers. At the end of the day we’ll have a full panel Q&A for any questions.

And we are back after lunch and a little exposure to the Berlin rain!

Tracking the Trackers (Anne Helmond, Carolin Gerlitz, Esther Weltevrede and Fernando van der Vlist)

Carolin: Having talked about tracking users and behaviours this morning, we are going to talk about studying the media themselves, and of tracking the trackers across these platforms. So what are we tracking? Berry (2011) says:

“For every explicit action of a user, there are probably 100+ implicit data points from usage; whether that is a page visit, a scroll etc.”

Whenever a user takes an action on the web, a series of tracking features are enabled – things like cookies, widgets, advertising trackers, analytics, beacons etc. Cookies are small pieces of text that are placed on the user’s computer indicating that they have visited a site before. These are 1st party trackers and can be accessed by the platforms and webmasters. There are now many third party trackers such as Facebook, Twitter, Google, and many websites now place third party cookies on the devices of users. And there are widgets that enable this functionality with third party trackers – e.g. Disqus.

So we have first party tracker files – text files that remember, e.g. what you put in a shopping cart; third party tracker files used by marketers and data-gathering companies to track your actions across the web; you have beacons; and you have flash cookies.

The purpose of tracking varies, from functionality that is useful (e.g. the shopping basket example) to the increasingly prevalent use in profiling users and behaviours. The increasing use of trackers has resulted in them becoming more visible. There is lots of research looking at the prevalence of tracking across the web, from the Continuum project and the Guardian’s Tracking the Trackers project. One of the most famous plugins that allows you to see the trackers in your own browser is Ghostery – a browser plugin that you can install which immediately detects different kinds of trackers, widgets, cookies and analytics tracking on the sites that you browse to… It shows these in a pop-up. It allows you to see the trackers and to block them, or selectively block them. You may want to selectively block trackers, as whole parts of websites disappear when you switch off trackers.

Ghostery detects via tracker library/code snippets (regular expressions). It currently detects around 2295 trackers – across many different varieties. The tool is not uncontroversial. It started as an NGO but was bought by analytics company Evidon in 2010, using the data for marketing and advertising.

So, we thought that if we, as researchers, want to look at trackers and there are existing tools, let’s repurpose existing tools. So we did that, creating a Tracker Tracker tool based on Ghostery. It takes up a logic of Digital Methods, working with lists of websites. The Tracker Tracker tool was created by the Digital Methods Initiative (2012). It allows us to detect which trackers are present on lists of websites and create a network view. And we are “repurposing analytical capabilities”. So, what sort of project can we use this with?
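[Aside from me, not the speakers: as a rough illustration of the underlying logic – a sketch only, not the actual Tracker Tracker or Ghostery code – detection amounts to fetching each page in a list and matching its source against a library of known tracker signatures expressed as regular expressions. The mini pattern library and site list below are made up, and this only sees trackers present in the initial HTML, not those injected later by JavaScript, which echoes the timing caveat the speakers raise later.]

```python
import re
import requests

# Hypothetical mini-library of tracker signatures (Ghostery's real library has thousands)
TRACKER_PATTERNS = {
    "Google Analytics": re.compile(r"google-analytics\.com/(ga|analytics)\.js"),
    "Facebook Connect": re.compile(r"connect\.facebook\.net"),
    "DoubleClick": re.compile(r"doubleclick\.net"),
}

def detect_trackers(url):
    """Fetch a page and return the names of any known trackers found in its source."""
    html = requests.get(url, timeout=10).text
    return [name for name, pattern in TRACKER_PATTERNS.items() if pattern.search(html)]

sites = ["https://example.com", "https://example.org"]  # your list of websites
for site in sites:
    print(site, detect_trackers(site))
```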

One of our first projects was on the Like Economy. Our starting point was the fact that social media widgets place cookies (Gerlitz and Helmond 2013), and we wanted to see where they are present. These cookies track both platform users and website users. We wanted to see how pervasive these cookies were on the web, and on the most used sites on the web.

We started by using Alexa to identify a collection of the 1000 most-visited websites. We inputted it into the Tracker Tracker tool (it’s only one button so options are limited!). Then we visualised the results with Gephi. And what did we get? Well, in 2012 only 18% of top websites had Facebook trackers – if we did it again today it would probably be different. This data may be connected to personal user profiles – when a user has been previously logged in and has a profile – but it is also collected for non-users of Facebook: anonymous profiles are created, and if those users subsequently join Facebook that tracking data can be fed into their account/profile.

Since we did this work we have used this method on other projects. Now I’ll hand over to Anne to do a methods walkthrough.

Anne: Now you’ve had a sense of the method I’m going to do a dangerous walkthrough thing… And then we’ll look at some other projects here.

So, a quick methodological summary:

  1. Research question: type of tracker and sites
  2. Website (URL) collection making: existing expert list.
  3. Input list for Tracker Tracker
  4. Run Tracker Tracker
  5. Analyse in Gephi

So we always start with a research question… Perhaps we start with websites we wouldn’t want to find trackers on – where privacy issues are heightened, e.g. children’s websites, porn websites, etc. So, homework here – work through some research question ideas.

Today we’ll walk through what we will call “adult sites”. So, we will go to Alexa – which is great for locating top sites in categories, in specific countries, etc. We take that list, we put it into Tracker Tracker – choosing whether or not to look at the first level of subpages – and press the button. The tool then scans those websites against the Ghostery database, which now contains around 2,600 possible trackers.

Carolin: Maybe some of you are wondering if it’s ok to do this with Ghostery? Well, yes, we developed Tracker Tracker in collaboration with Ghostery when it was an NGO, with one of their developers visiting us in Amsterdam. One other note here: if you use Ghostery on your machine, it may show different trackers to your neighbour’s. Trackers vary by machine, by location, by context. That’s something we have to take into account when requesting data. So for news websites you may, for instance, have more and more trackers generated the longer the site is open – this tool only captures a short window of time so may not gather all of the trackers.

Anne: Also in Europe you may encounter so-called cookie walls. You have to press OK to accept cookies… And the tool can’t emulate the user experience of clicking beyond the cookie wall… So zero trackers may indicate that issue, rather than no trackers.

Q: Is it server side or client side?

A: It is server side.

Q: And do you cache the tracker data?

A: Once you run the tool you can save the CSV and Gephi files, but we don’t otherwise cache.

Anne: Ghostery updates very frequently, so it is most useful to always check against the most up to date list of trackers.

So, once we’ve run the Tracker Tracker tool you get outputs that can be used in a variety of flexible formats. We will download the “exhaustive” CSV – which has all of the data we’ve found here.

If I open that CSV (in Excel) we can see the site, the scheme, the pattern that was used to find the tracker, the name of the tracker… This is very detailed information. So for these adult sites we see things like Google Analytics, the Porn Ad network, Facebook Connect. So, already, there is analysis you could do with this data. But you could also do further analysis using Gephi.
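[Aside from me: a first pass over that “exhaustive” CSV might simply count how often each tracker appears across the site list before moving to Gephi. A minimal sketch with pandas, assuming columns named "site" and "tracker_name" – the real export’s column names may well differ.]

```python
import pandas as pd

df = pd.read_csv("tracker_tracker_exhaustive.csv")  # the exported CSV

# How many distinct sites does each tracker appear on?
tracker_reach = df.groupby("tracker_name")["site"].nunique().sort_values(ascending=False)
print(tracker_reach.head(10))

# How many trackers does each site carry?
trackers_per_site = df.groupby("site")["tracker_name"].nunique().sort_values(ascending=False)
print(trackers_per_site.describe())
```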

Now, we have steps of this procedure in the tutorial that goes with today’s session. So here we’ve coloured the sites in grey, and we’ve highlighted the trackers in different colours. The purple lines/nodes are advertising trackers for instance.

If you want to recreate this at home, you have all the steps here. And doing this work we’ve found trackers we’d never seen before – for instance the porn industry ad network DoublePimp (a play on DoubleClick) – and we’ve been able to see regional and geographic differences between trackers, which of course has interesting implications.

So, some more examples… We have taken this approach looking at Jihadi websites, working with e.g. governments to identify the trackers. And we found that they are financially dependent on advertising, including SkimLinks, DoubleClick, Google AdSense.

Carolin: And in almost all networks we encounter DoubleClick, AdSense, etc. And it’s important to know that webmasters enable these trackers, they have picked these services. But there is an issue of who selects you as a client – something journalists collaborating on this work raised with Google.

Anne: The other usage of these trackers has been in historical tracking analysis using the Internet Archive. This enables you to see a website in the context of its techno-commercial configuration, and to analyse it in that context. So for instance looking at New York Times trackers and the website as an ecosystem embedded in the wider context – in this case trackers decreased over time, but that reflected commercial concentration, with companies buying each other and therefore reducing the range of trackers.
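[Aside from me: one hedged sketch of how historical snapshots could be fetched for this kind of longitudinal analysis uses the Internet Archive’s public Wayback “availability” API; each returned snapshot URL could then be scanned with the earlier tracker-detection sketch. The site and dates below are purely illustrative.]

```python
import requests

def archived_snapshot(url, timestamp):
    """Ask the Wayback Machine for the snapshot of `url` closest to `timestamp` (YYYYMMDD)."""
    resp = requests.get(
        "http://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=10,
    )
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

# e.g. yearly snapshots of a news site, each of which could be scanned for trackers
for year in range(2006, 2016):
    print(year, archived_snapshot("nytimes.com", f"{year}0101"))
```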

Carolin: We did some work called the Trackers Guide. We wanted to look not only at trackers, but also at Content Delivery Networks, to visualise how websites are not single items, but collections of data with inflows and outflows. The result became part artwork, part biological field guide. We imagined content and trackers as little biological cell-like clumps on the site, creating a whole booklet of this guide. So the image here shows the content from other spaces, content flowing in and connected…

Anne: We were also interested in what kind of data is being collected by these trackers. And also who owns these trackers. And also the countries these trackers are located in. So, we used this method with Ghostery. And then we dug further into those trackers. For Ghostery you can click on a tracker and see what kind of data it collects. We then looked at privacy policies of trackers to see what it claims to collect… And then we manually looked up ownership – and nationality – of the trackers to understand rules, regulations, etc. – and seeing where your data actually ends up.

Carolin: Working with Ghostery, and repurposing their technology, was helpful but their database is not complete. And it is biased to the English-speaking world – so it is particularly lacking in Chinese contexts for instance. So there are limits here. It is not always clear what data is actually being collected. BUT this work allows us to study invisible participation in data flows – which cannot be found in other ways; to study media concentration and the emergence of specific tracking ecologies. And in doing so it allows us to imagine alternative spatialities of the web – tracker origins and national ecologies. And it provides insights into the invisible infrastructures of the web.

Slides for this presentation: http://www.slideshare.net/cgrltz/aoir-2016-digital-methods-workshop-tracking-the-trackers-66765013

Multiplatform Issue Mapping (Jean Burgess & Ariadna Matamoros Fernandez)

Jean: I’m Jean Burgess and I’m Professor of Digital Media and Director of the DMRC at QUT. Ariadna is one of our excellent PhD students at QUT but she was previously at DMI so she’s a bridge to both organisations. And I wanted to say how lovely it is to have the DMRC and DMI connected like this today.

So we are going to talk about issue mapping, and the idea of using issue mapping to teach digital research methods, particularly with people who may not be interested in social media outside of their specific research area. And about issue mapping as an approach outside the “influencers” narrative that dominates the marketing side of social media.

We are in the room with people who have been working in this space for a long time but I just want to raise that we are making connections to ANT (actor-network theory) and cultural and social studies. So, a few ontological things… Our approach combines digital methods and controversy analysis. We understand controversies to be discrete, acute, often temporary sites of intersectionality, bringing together different issues in new combinations. And drawing on Latour, Callon etc. we see controversies as generative. They can reveal the dynamics of issues, bring them together in new combinations, transform them and move them forward. And we undertake network and content analysis to understand relations among stakeholders, arguments and objects.

There are both very practical applications and more critical-reflexive possibilities of issue mapping. And we bring our own media studies viewpoint to that, with an interest in the vernacular of the space.

So, issue mapping with social media frequently starts with topical Twitter hashtags/hashtag communities. We then have iterative “issue inventories” – actors, hashtags, media objects from one dataset used as seeds in their own right. We then undertake some hybrid network/thematic analysis – e.g. associations among hashtags; thematic network clusters. And we inevitably meet the issue of multi-platform/cross-platform engagement. And we’ll talk more about that.

One project we undertook, on #agchatoz – a community in Australia around weekly Twitter chats, but connected to a global community – explored the hashtag as a hybrid community. So here we looked at, for instance, the network of followers/followees in this network. And within that we were able to identify clusters of actors (across: left-leaning Twitterati (30%); Australian ag, farmers (29%); media orgs, politicians (13%); international ag, farmers (12%); foodies (10%); right-wing Australian politics and others), and this reveals some unexpected alliances or crossovers – e.g. between animal rights campaigners and dairy farmers. That suggests opportunities to bridge communities, to raise challenges, etc.

We have linked, in the files for this session, to various papers. One of these, Burgess and Matamoros-Fernandez (2016), looks at Gamergate and I’m going to show a visualisation of the YouTube video network (Rieder 2015; Gephi), which shows videos mentioned in tweets around that controversy, showing those that were closely related to each other.

Ariadna: My PhD is looking at another controversy, this one concerning Adam Goodes, an Australian Rules footballer who was a high profile player until he retired last year. He has been a high profile campaigner against racism, and has called out racism on the field. He has been criticised for that by one part of society. And in 2014 he performed an Indigenous war dance on the pitch, which again received booing from the crowd and backlash. So, I start with Twitter, follow the links, and then move to those linked platforms and onwards…

Now I’m focusing on visual material, because the controversy was visual, it was about a gesture. So visual content (images, videos, gifs) acts as a mediator of race and racism on social media. I have identified key media objects through qualitative analysis – important gestures, different image genres. And the next step has been to reflect on the differences between platform traces – YouTube related videos, the Facebook like network, Twitter filters, automatic notice-and-take-down messages. That gives a sense of the community, the discourse, the context, exploring their specificities and how they contribute to the cultural dynamics of race and racism online.

Jean: And if you want to learn more, there’s a paper later this week!

So, we usually do training on this at the DMRC #CCISS16 workshops. We usually ask participants to think about YouTube and related videos – as a way to encourage people to think about networks other than social networks, and also to get to grips with Gephi.

Ariadna: Usually we split people into small groups, and actually it is difficult to identify a current controversy that is visible and active in digital media – we look at YouTube and Tumblr (Twitter really requires prior collection of data). So, we go to YouTube to look for a key term, and we can then filter and see the results change… Usually you don’t reflect that much. So, if you look at “Black Lives Matter”, you get a range of content… And we ask participants to pick out relevant results – and what is relevant will depend on the research question you are asking. That first choice of what to select is important. Once this is done we get participants to use the YouTube Data Tools: https://tools.digitalmethods.net/netvizz/youtube/. This tool enables you to explore the network… You can use a video as a “seed”, or you can use a crawler that finds related videos… And that can be interesting… So if you see an anti-Islamic video, does YouTube recommend more, or other videos related in other ways?

That seed leads you to related videos, and, depending on the depth you are interested in, videos related to the related videos… You can make selections of what to crawl, what the relevance should be. The crawler runs and outputs a Gephi file. So, this is an undirected network. Here nodes are videos, edges are relationships between videos. We generally use the layout: Force Atlas 2. And we run the Modularity Report to colour code the relationships on thematic or similar basis. Gephi can be confusing at first, but you can configure and use options to explore and better understand your network. You can look at the Data Table – and begin to understand the reasons for connection…
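[Aside from me: if you prefer to explore the same network in code rather than in the Gephi interface, something like the following works – a minimal sketch assuming the network has been saved or exported as a GEXF file (the crawler’s native output format may differ), using networkx’s greedy modularity routine as a rough stand-in for Gephi’s Modularity Report. Filenames are illustrative.]

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.read_gexf("youtube_video_network.gexf")  # undirected: nodes are videos, edges relate them

# Which videos sit at the centre of the related-video network?
top = sorted(G.degree, key=lambda pair: pair[1], reverse=True)[:10]
print("Most connected videos:", top)

# A rough equivalent of Gephi's Modularity Report: clusters of closely related videos
communities = greedy_modularity_communities(G)
for i, community in enumerate(communities[:5]):
    print(f"Cluster {i}: {len(community)} videos")
```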

So, I have done this for Adam Goodes videos, to understand the clusters and connections.

So, we have looked at YouTube. Normally we move to Tumblr. But sometimes a controversy does not resonate on a different social media platform… So maybe a controversy on Twitter doesn’t translate to Facebook; or one on YouTube doesn’t resonate on Tumblr… Or keywords will vary greatly. It can be a good way to start to understand the cultures of the platforms. And the role of main actors etc. in the response on a given platform.

With Tumblr we start with the interface – e.g. looking at BlackLivesMatter. We look at the interface, functionality, etc. And then, again, we have a tool that can be used: https://tools.digitalmethods.net/netvizz/tumblr/. We usually encourage use of the same timeline across Tumblr and YouTube so that they can be compared.

So we can again go to Gephi, visualise the network. And in this case the nodes and edges can look different. So in this example we see 20 posts that connect 141 nodes, reflecting the particular reposting nature of that space.

Jean: The very specific cultural nature of the different online spaces can make for very interesting stuff when looking at controversies. And those are really useful starting points into further exploration.

And finally, a reminder, we run our summer schools in DMRC in February. When it is summer! And sunny! Apply now at: http://dmrcss.org/!

Analysing and visualising geospatial data (Peta Mitchell)

Normally when I would do this as a workshop I’d give some theoretical and historical background on the emergence of geospatial data, and then move onto the practical workshop on Carto (formerly CartoDB). Today though I’m going to talk about a case study, around the G20 meeting in Brisbane, and then talk about using Carto to create a social media map.

My own background is a field increasingly known as the geo humanities or the spatial humanities. And I did a close reading project of novels and films to create a Cultural Atlas of Australia. And how locations relate to narrative. For instance almost all films are made in South Australia, regardless of where they are set, mapping patterns of representation. We also created a CultureMap – an app that went with a map to alert you to literary or filmic places nearby that related back to that atlas.

I’ll talk about that G20 stuff. I now work on rapid spatial analytics; participatory geovisualisation and crowdsourced data; VGI – Volunteered Geographic Information; placemaking etc. But today I’ll be talking about emerging forms of spatial information/geodata, neogeographical tools etc.

So Gordon and de Souza e Silva (2011) talk about us witnessing the increasing proliferation of geospatial data. And this is sitting alongside a geospatial revolution – GPS enabled devices, geospatial data permeating social media, etc. So GPS-enabled devices emerged in the late ’90s/early 00s with a slight social friend-finder function. But the geospatial web really begins around 2000, the beginning of the end of the idea of the web as a “placeless space”. To an extent this came from a legal case brought by a French individual against Yahoo!, who were allowing Nazi memorabilia to be sold. That was illegal in France, and Yahoo! claimed that the internet is global and that filtering by location wasn’t possible. A French judge found in favour of the individual, Yahoo! were told it was both doable and easy, and Yahoo! went on to financially benefit from IP-based location information. As Richard Rogers puts it, that case was the “revenge of geography against the idea of cyberspace”.

Then in 2005 Google Maps was described by Jon Udell as a platform with the potential to be a “service factory for the geospatial web”. So in 2005 the “geospatial web” really is there as a term. By 2006 the concept of “neogeography” was defined by Andrew Turner to describe the kind of non-professional, user-orientated, web 2.0-enabled mapping. There are critiques in cultural geography, and in the geospatial literature, about this term and the use of the “neo” part of it. But there are multiple applications here, from humanities to humanitarianism; from cultural mapping to crisis mapping. An example here is Ushahidi maps, where individuals can send in data and contribute to mapping of a crisis. Now Ushahidi is more of a platform for crisis mapping, and other tools have emerged.

So there are lots of visualisation tools and platforms. There are traditional desktop GIS – ArcGIS, QGIS. There is basic web-mapping (e.g. Google Maps); Online services (E.g. CARTO, Mapbox); Custom map design applications (e.g. MapMill); and there are many more…

Spatial data is not new, but there is a growth in ambient and algorithmic spatial data. So for instance ABC (TV channel in Australia) did some investigation, inviting audiences to find out as much as they could based on their reporter Will Ockenden’s metadata. So, his phone records, for instance, revealed locations, a sensitive data point. And geospatial data is growing too.

We now have a geospatial substratum underpinning all social media networks. So this includes check-in/recommendation platforms: Foursquare, Swarm, Gowalla (now defunct), Yelp; meetup/hookup apps: Tinder, Grindr, Meetup; Yik Yak; Facebook; Twitter; Instagram; and geospatial gaming: Ingress; Pokemon Go (from which Google has been harvesting improvements for its pedestrian routes).

Geospatial media data is generated from sources ranging from VGI (volunteered geographic information) to AGI (ambient geographic information), where users are not always aware that they are sharing data. That type of data doesn’t feel like crowdsourced data or VGI, hence the particular challenges and ethical complexity of AGI.

So, the promises of geosocial analysis include a focus on real-time dynamics – people working with geospatial data aren’t used to this… And we also see social media as a “sensor network” for crisis events. There is also potential to provide new insights into spatio-temporal spread of ideas and actions; human mobilities and human behaviours.

People do often start with Twitter – because it is easier to gather data from it – but only between 1% and 3% of tweets are located. But when we work at festivals we see around 10% carrying location data – partly the nature of the event, partly because tweets are often coming through Instagram… On Instagram we see between 20% and 30% of images georeferenced, but based on upload location, not where the image was taken.
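[Aside from me: those percentages are easy to check against your own collected data. A minimal sketch, assuming tweets are stored one standard Twitter API JSON object per line; the filename is made up.]

```python
import json

with_coords, with_place, total = 0, 0, 0
with open("collected_tweets.jsonl") as f:  # one tweet (Twitter API JSON) per line
    for line in f:
        tweet = json.loads(line)
        total += 1
        if tweet.get("coordinates"):   # an exact point attached to the tweet
            with_coords += 1
        elif tweet.get("place"):       # a place polygon (city, neighbourhood, venue)
            with_place += 1

if total:
    print(f"{with_coords / total:.1%} with exact coordinates, {with_place / total:.1%} with a place tag")
```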

There is also the challenge of geospatial granularity. On a tweet with Lat Long, that’s fairly clear. When we have a post tagged with a place we essentially have a polygon. And then when you geoparse, what is the granularity – street, city? Then there are issues of privacy and the extent to which people are happy to share that data.

So, in 2014 Brisbane hosted the G20, at a cost of $140 AUS for one highly disruptive weekend. In preceding G20 meetings there had been large scale protests. At the time the premier was ex-military and he put the whole central business district in lockdown, designated a “declared area” – under new laws made for this event. And hotels for G20 world leaders were inside the zone. So, Twitter mapping is usually done during crisis events – but you don’t know where these will happen, where to track them, etc. In this case we knew in advance where to look. So, a Safety and Security Act (2013) was put in place for this event, requiring prior approval for protests; allowing arrests for the duration of the event and on-the-spot strip searches; and banning eggs in the Central Business District, no manure, no kayaks or flotation devices, no remote control cars or reptiles!

So we had these fears of violent protests, given all of these draconian measures. We had elevated terror levels. And we had war threatened after Abbott said he would “shirtfront” Vladimir Putin over MH17. But all that concern made city leaders worried that the city might become a ghost town, when they wanted it marketed as a new world city. They were offering free parking etc. to incentivise people to come in. And tweets reinforced the ghost town trope. So, what geosocial mapping enabled was a close to real-time sensor network of what might be happening during the G20.

So, the map we did was the first close to real time social media map that was public facing, using CartoDB, and it was never more than an hour behind reality. We had few false matches. But we had clear locations and clear keywords – e.g. G20 – to focus on. A very few “the meeting will now be held in G20” type tweets, but otherwise no false matches. We tracked the data through the meeting… which ran over a weekend and bank holiday. This map parses around 17,000(?) tweets, most of which were not geotagged but geoparsed. Only 10% represent where someone was when they tweeted; the remaining 90% are locations mentioned in posts, identified by geoparsing the tweets.

Now, even though that declared area isn’t huge, there are over 300 streets there. I had to build a manually constructed gazetteer, using Open Street Map (OSM) data, and then new data. Picking a bounding box that included that area generated a whole range of features – but I wasn’t that excited about fountains, benches etc. I was looking for places people might mention. And I wanted to know about features people might actually mention in their tweets. So, I had a bounding box, and the declared area before… It would have been ideal if the G20 had given me their bounding polygon but we didn’t especially want to draw attention to what we were doing.

So, at the end we had lat, long, amenity (using OSM terms), name (e.g. Obama was at the Marriott so tweets about that), associated search terms – including local/vernacular versions of names of amenities; status (declared or restricted); and confidence (of location/coordinates – a score of 1 for geospatially tagged tweets, 0.8 for buildings, etc.). We could also create category maps of different data sets. On our map we showed geospatial and parsed tweets inside the area, but we only used geotweets outside the declared area. One of my colleagues created a Python script to “read” and parse tweets, and that generated a CSV. That CSV could then be fed into CartoDB. CartoDB has a time dimension, could update directly every half hour, and could use a Dropbox source to do that.
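[Aside from me: a simplified sketch of that geoparsing step, assuming a hand-built gazetteer CSV with columns like name, search_terms, lat, long and confidence, as described above. Column names, scores and the example tweet are all illustrative, not the project’s actual script.]

```python
import csv

def load_gazetteer(path):
    """Read the manually built gazetteer: one row per feature, with search terms and a confidence score."""
    entries = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            terms = [t.strip().lower() for t in row["search_terms"].split(";")]
            entries.append((terms, float(row["lat"]), float(row["long"]), float(row["confidence"]), row["name"]))
    return entries

def geoparse(tweet_text, gazetteer):
    """Return (name, lat, long, confidence) for the first gazetteer feature mentioned in the tweet, if any."""
    text = tweet_text.lower()
    for terms, lat, lon, confidence, name in gazetteer:
        if any(term in text for term in terms):
            return name, lat, lon, confidence
    return None

gazetteer = load_gazetteer("g20_gazetteer.csv")
print(geoparse("Motorcade just went past the Marriott #G20", gazetteer))
```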

So, did we see much disruption? Well, no… It was mostly celebrity spotting – the two most tweeted images were Obama with a koala and Putin with a koala. It was very hot and very heavily secured, so little disruption happened. We did see selfies with Angela Merkel, and images of the phallic motorcade. And after the G20 there was a complaint filed to the corruption board about the chilling effect of security on participation, particularly in environmental protests. There was still engagement on social media, but not in person. Disruption, protest and criticism were replaced by spectacle and distant viewing of the event.

And, with that, we turn to an 11 person panel session to wrap up, answer questions, etc.

Panel Session

Q1) Each of you presented different tools and approaches… Can you comment on how they are connected and how we can take advantage of that?

A1 – Jean) Implicitly or explicitly we’ve talked about possibilities of combining tools together in bigger projects. And tools that Peta and I have been working on are based on DMI tools for instance… It’s sharing tools, shared fundamental techniques for analytics for e.g. a Twitter dataset…

A1 – Richard) We’ve never done this sort of thing together… The fact that so much has been shared has been remarkable. We share quite similar outlooks on digital methods, and also on “to what end” – largely for the study of social issues and mapping social issues. But also other social research opportunities available when looking at a variety of online data, including geodata. It’s online web data analysis using digital methods for issue mapping and also other forms of social research.

A1 – Carolin) All of these projects are using data that hasn’t been generated by research, but which has been created for other purposes… And that’s pushing the analysis in its own way… And the tools that we combine bring in levels, encryptions… Digital methods use these, but there is also a need to step back and reflect – present in all of the presentations.

Q2) A question especially for Carolin and Anne: what do you think about the study of proprietary algorithms? You talked a bit about the limitations of proprietary algorithms – for mobile applications etc.? I’m having trouble doing that…

A2 – Anne) I think in the case of the tracker tool, it doesn’t try to engage with the algorithm, it looks at the presence of trackers. But here we have encountered proprietary issues… So for Ghostery, if you download the Firefox plugin you can access the content. We took the library of trackers from that to use as a database, we took that apart. We did talk to Ghostery, to make them aware… The question of algorithms… of how you get at the blackboxed things… We are developing methods to do this… One way in is to look at the outputs, and compare them. Also Christian Sandvig is doing the auditing algorithms work.

A2 – Carolin) There was just a discussion on Twitter about the currency of algorithms and research on them… We’ve tried to ride along with them, to implement that… Otherwise it is difficult. One element was on studying mobile applications. We are giving a presentation on this on Friday. It’s a similar approach, using the infrastructures of app distribution and description etc. to look into this… Using existing infrastructures in which apps are built or encountered…

A2 – Anne) We can’t screenscrape and we are moving to this more closed world.

A2 – Richard) One of the best ways to understand algorithms is to save the outputs – e.g. we’ve been saving Google search outputs for years. Trying to save newsfeeds on Facebook, or other sorts of web apps can be quite difficult… You can use the API but you don’t necessarily get what the user has seen. The interface outputs are very different from developer outputs. So people think about recording rather than saving data – an older method in a way… But then you have the problem of only capturing a small sample of data – like analysing TV News. The new digital methods can mean resorting to older media methods… Data outputs aren’t as friendly or obtainable…

A2 – Carolin) So one strand is accessing algorithms via transparency; you can also think of them as situated and in context, seeing them in operation and in action in relation to the data, associated with outputs. I’d recommend Solon Barocas (?) on the impact of big data, which sits in legal studies.

A2 – Jean) One of the ways we approach this is the “App Walkthrough”, a method Ben Light and I have worked on and which will shortly be published in New Media & Society; it is about bringing in those older media approaches, with user studies part of that…

Q3) What is your position as researchers on opening up data, and on doing ethically acceptable data collection on the other side? Do you take a stance, even a public stance, on these issues?

A3 – Anne) For many of these tools, like the YouTube tool and the Facebook tools, our developer took the conscious decision to anonymise the data.

A3 – Jean) I do have public positions. I’ve published on the political economy of Twitter… One interesting thing is that privacy discourses were used by Twitter to shut down TwapperKeeper at a time when it was seeking to monetise… But you can’t just publish an archive of tweets with usernames, I don’t think anyone would find that acceptable…

A3 – Richard) I think it is important to respect or understand contextual privacy. People posting, on Twitter say, don’t have an expectation of its use for commercial or research purposes. Awareness of that is important for a researcher, no matter what terms of service the user has signed/consented to, or even if you have paid for that data. You should be aware of and concerned about contextual privacy… which leads to a number of different steps. And that’s why, for instance, in Netvizz – the Facebook tool – usernames are not available for comments made, even though FacePager does show that. Tools vary in that understanding. Those issues need to be thought about, but not necessarily uniformly thought about by our field.

A3 – Carolin) But that becomes more difficult in spaces that require you to take part in order to research them – WhatsApp(?) for instance – researchers start pretending to be regular users… to generate insights.

Comment (me): on native vs web apps and approaches and potential for applying Ghostery/Tracker Tracker methods to web apps which are essentially pointing to URLs.

Q4) Given that we are beholden to commercial companies, changes to algorithms, APIs etc, and you’ve all spoken about that to an extent, how do you feel about commercial limitations?

A4 – Richard) Part of my idea of digital methods is to deal with ephemerality… And my ideal is to follow the medium… rather than to follow good data prescripts… If you follow that methodology, then you won’t be able to use web data or social media data… unless you either work with the corporation or a corporate data scientist – many issues there of course. We did work with Yahoo! on political insights… categorising search queries around a US election, which was hard to do from outside. But the point is that even on the inside, you don’t have all the insight or full access to all the data… The question arises of what we can still do… what web data work we can still do… We constantly ask ourselves that; I think digital methods is in part an answer to that question, otherwise we wouldn’t be able to do any of this.

A4 – Jean) All research has limitations, and describing that is part of the role here… But also when Axel and I started doing this work we got criticism for not having a “representative sample”… And we have people from across the humanities and social sciences who seem to be using the same approaches and techniques but actually we are doing really different things…

Q5) Digital methods in social sciences looks different from anthropology where this is a classical “informant” problem… This is where digital ethnography is there and understood in a way that it isn’t in the social sciences…

Resources from this workshop:

Aug 092016
 
Notes from the Unleashing Data session at Repository Fringe 2016

After 6 years of being Repository Fringe‘s resident live blogger this was the first year that I haven’t been part of the organisation or amplification in any official capacity. From what I’ve seen though my colleagues from EDINA, University of Edinburgh Library, and the DCC did an awesome job of putting together a really interesting programme for the 2016 edition of RepoFringe, attracting a big and diverse audience.

Whilst I was mainly participating through reading the tweets to #rfringe16, I couldn’t quite keep away!

Pauline Ward at Repository Fringe 2016

This year’s chair, Pauline Ward, asked me to be part of the Unleashing Data session on Tuesday 2nd August. The session was a “World Cafe” format and I was asked to help facilitate discussion around the question: “How can the repository community use crowd-sourcing (e.g. Citizen Science) to engage the public in reuse of data?” – so I was along wearing my COBWEB: Citizen Observatory Web and social media hats. My session also benefited from what I gather was an excellent talk on “The Social Life of Data” earlier in the event from Erinma Ochu (who, although I missed her this time, is always involved in really interesting projects including several fab citizen science initiatives).

I won’t attempt to reflect on all of the discussions during the Unleashing Data Session here – I know that Pauline will be reporting back from the session to Repository Fringe 2016 participants shortly – but I thought I would share a few pictures of our notes, capturing some of the ideas and discussions that came out of the various groups visiting this question throughout the session. Click the image to view a larger version. Questions or clarifications are welcome – just leave me a comment here on the blog.

Notes from the Unleashing Data session at Repository Fringe 2016

If you are interested in finding out more about crowd sourcing and citizen science in general then there are a couple of resources that may be helpful (plus many more resources and articles if you leave a comment/drop me an email with your particular interests).

This June I chaired the “Crowd-Sourcing Data and Citizen Science” breakout session for the Flooding and Coastal Erosion Risk Management Network (FCERM.NET) Annual Assembly in Newcastle. The short slide set created for that workshop gives a brief overview of some of the challenges and considerations in setting up and running citizen science projects:

Last October the CSCS Network interviewed me on developing and running Citizen Science projects for their website – the interview brings together some general thoughts as well as specific comment on the COBWEB experience:

After the Unleashing Data session I was also able to stick around for Stuart Lewis’ closing keynote. Stuart has been working at Edinburgh University since 2012 but is moving on soon to the National Library of Scotland so this was a lovely chance to get some of his reflections and predictions as he prepares to make that move. And to include quite a lot of fun references to The Secret Diary of Adrian Mole aged 13 ¾. (Before his talk Stuart had also snuck some boxes of sweets under some of the tables around the room – a popularity tactic I’m noting for future talks!)

So, my liveblog notes from Stuart’s talk (slightly tidied up but corrections are, of course, welcomed) follow. Because old Repofringe live blogging habits are hard to kick!

The Secret Diary of a Repository aged 13 ¾ – Stuart Lewis

I’m going to talk about our bread and butter – the institutional repository… Now my inspiration is Adrian Mole… Why? Well we have a bunch of teenage repositories… EPrints is 15 1/2; Fedora is 13 ½; DSpace is 13 ¾.

Now Adrian Mole is a teenager – you can read about him on Wikipedia [note to fellow Wikipedia contributors: this, and most of the other Adrian Mole-related pages could use some major work!]. You see him quoted in two conferences to my amazement! And there are also some Scotland and Edinburgh entries in there too… Brought a haggis… Goes to Glasgow at 11am… and says he encounters 27 drunks in one hour…

Stuart Lewis at Repository Fringe 2016

Stuart Lewis illustrates the teenage birth dates of three of the major repository softwares as captured in (perhaps less well-aged) pop hits of the day.

So, I have four points to make about how repositories are like/unlike teenagers…

The thing about teenagers… People complain about them… They can be expensive, they can be awkward, they aren’t always self aware… Eventually though they usually become useful members of society. So, is that true of repositories? Well ERA, one of our repositories, has gotten bigger and bigger – over 18k items… and over 10k paper theses currently being digitised…

Now teenagers also start to look around… Pandora!

I’m going to call Pandora the CRIS… And we’ve all kind of overlooked their commercial background because we are in love with them…!

Stuart Lewis at Repository Fringe 2016

Stuart Lewis captures the eternal optimism – both around Mole’s love of Pandora, and our love of the (commercial) CRIS.

Now, we have PURE at Edinburgh which also powers Edinburgh Research Explorer. When you looked at repositories a few years ago, it was a bit like Freshers Week… The three questions were: where are you from; what repository platform do you use; how many items do you have? But that’s moved on. We now have around 80% of our outputs in the repository within REF compliance (3 months of acceptance)… And that’s a huge change – volumes of materials are open access very promptly.

So,

1. We need to celebrate our success

But are our successes as positive as they could be?

Repositories continue to develop. We’ve heard good things about new developments. But how do repositories demonstrate value – and how do we compare to other areas of librarianship?

Other library domains use different numbers. We can use these to give comparative figures. How do we compare to publishers for cost? What’s our CPU (Cost Per Use)? And what is a good CPU? £10, £5, £0.46… But how easy is it to calculate – are repositories expensive? That’s a “to do” – to take the cost of running the repository against IRUS usage figures. I would expect it to be lower than publishers’, but I’d like to do that calculation.
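[Aside from me: the arithmetic itself is simple once you settle on the inputs – a hedged worked example with entirely invented figures, purely to show the shape of the calculation.]

```python
# Illustrative only: annual cost of running the repository divided by annual downloads (e.g. from IRUS-UK)
annual_running_cost = 50_000   # hypothetical figure in GBP
annual_downloads = 250_000     # hypothetical download count

cost_per_use = annual_running_cost / annual_downloads
print(f"CPU = £{cost_per_use:.2f} per download")  # £0.20 in this made-up case
```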

The other side of this is to become more self-aware… Can we gather new numbers? We only tend to look at deposit and use from our own repositories… What about our own local consumption of OA (the reverse)?

Working within new e-resource infrastructure – http://doai.io/ – lets us see where open versions are available. And we can integrate with OpenURL resolvers to see how much of our usage can be fulfilled.
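[Aside from me: a small sketch of that kind of check, assuming – as DOAI advertises – that http://doai.io/&lt;DOI&gt; redirects to an open copy when one is known, so that where the request ends up hints at whether an open version was found. Behaviour may differ in practice, and the DOI below is purely illustrative.]

```python
import requests

def open_version(doi):
    """Follow doai.io's redirect for a DOI and return the final URL it resolves to."""
    resp = requests.get(f"http://doai.io/{doi}", allow_redirects=True, timeout=10)
    return resp.url

# Hypothetical DOI purely for illustration
print(open_version("10.1000/example.doi"))
```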

2. Our repositories must continue to grow up

Do we have double standards?

Hopefully you are all aware of the UK Text and Data Mining Copyright Exception that came into force on 1st June 2014. We have massive access to electronic resources as universities, and can text and data mine those.

Some do a good job here – Gale Cengage Historic British Newspapers: additional payment to buy all the data (images + XML text) on hard drives for local use. Working with local informatics LTG staff to (geo)parse the data.

Some are not so good – basic APIs allow only simple searches… but not complex queries (e.g. you could use a search term, but not e.g. sentiment).

And many publishers do nothing at all….

So we are working with publishers to encourage and highlight the potential.

But what about our content? Our repositories are open, with extracted full-text, and data can be harvested… Sufficient, but is it ideal? Why not allow bulk download in one click… You can – for example – download all of Wikipedia (if you want to). We should be able to do that with our repositories.
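[Aside from me: machine access is already partly there via the standard OAI-PMH interface most repositories expose – a minimal harvesting sketch against a hypothetical endpoint. Real repositories publish their own base URLs, this only fetches one page of metadata records (full harvesting needs resumption tokens), and bulk full-text download would still need more than this.]

```python
import requests
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

# Hypothetical OAI-PMH base URL; substitute a real repository endpoint
base_url = "https://repository.example.ac.uk/oai/request"

resp = requests.get(base_url, params={"verb": "ListRecords", "metadataPrefix": "oai_dc"}, timeout=30)
root = ET.fromstring(resp.content)

for record in root.iter(f"{OAI}record"):
    titles = [t.text for t in record.iter(f"{DC}title")]
    identifiers = [i.text for i in record.iter(f"{DC}identifier")]
    print(titles, identifiers)
```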

3. We need to get our house in order for Text and Data Mining

When will we be finished though? It depends on what we do with open access. What should we be doing with OA? Where do we want to get to? Right now we have mandates so it’s easy – green and gold. With gold there is pure gold or hybrid… Mixed views on hybrid. You can also publish locally for free. Then for green there are local or disciplinary repositories… For gold – pure, hybrid, local – we pay APCs (some local options are free)… In hybrid we can do offsetting, discounted subscriptions, voucher schemes too. And for green we have the UK Scholarly Communications License (Harvard-style)…

But which of these forms of OA are best?! Is choice always a great thing?

We still have outstanding OA issues. Is a mixed-modal approach OK, or should we choose a single route? Which one? What role will repositories play? What is the ultimate aim of Open Access? Is it “just” access?

How and where do we have these conversations? We need academics, repository managers, librarians, publishers to all come together to do this.

4. Do we know what a grown-up repository looks like? What part does it play?

Please remember to celebrate your repositories – we are in a fantastic place, making a real difference. But they need to continue to grow up. There is work to do with text and data mining… And we have more to do… To be a grown up, to be in the right sort of environment, etc.

Q&A

Q1) I can remember giving my first talk on repositories in 2010… When it comes to OA I think we need to think about what is cost effective, what is sustainable, why are we doing it and what’s the cost?

A1) I think in some ways that’s about what repositories are versus publishers… Right now we are essentially replicating them… And maybe that isn’t the way to approach this.

And with that Repository Fringe 2016 drew to a close. I am sure others will have already blogged their experiences and comments on the event. Do have a look at the Repository Fringe website and at #rfringe16 for more comments, shared blog posts, and resources from the sessions. 

Jul 122016
 

This week I am at the European Conference on Social Media 2016. I’m presenting later today, and have a poster tomorrow, but will also be liveblogging here. As usual the blog is live so there may be small errors or typos – all corrections and additions are very much welcomed!

We are starting with an introduction to EM Normandie, which has 4 campuses and 3000 students.

Introduction from Sue Nugus, ACPI, welcoming us to the event and noting the various important indexing arrangements etc.

Christine Bernadas, ECSM co-chair from EM Normandie, is introducing our opening keynote, Ali Ouni, co-founder and CEO of Spectrum Groupe. [http://www.spectrumgroupe.fr/]

Keynote Address: Ali Ouni, Spectrum Groupe, France – Researchers in Social Media, Businesses Need You!!!

My talk today is about why businesses need social media. And that, although we have been using social media for the last 10-15 years, we still need some approaches and frameworks to make better use of it.

My own personal background is in Knowledge Management, with a PhD from the Ecole Centrale Paris and Renault. I then moved to KAP IT as Head of Enterprise 2.0, helping companies to integrate new technologies, social media, in their businesses. I believe this is a hard question – the issue of how we integrate social media in our businesses. And then in 2011 I co-founded Spectrum Groupe, a consulting firm of 25 people who work closely with researchers to define new approaches to content management and knowledge management. And our approach is to design end to end approaches, from diagnostics, to strategy development, through to technologies, knowledge management, etc.

When Christine asked me to speak today I said “OK, but I am no longer a researcher”, I did that 12-15 years ago, I am now a practitioner. So I have insights but we need you to define the good research questions based on them.

I looked back at what has been said about social media in the last 10-15 years: “Organisations cannot afford not to be listening to what is being said about them or interacting with their customers in the space where they are spending their time and, increasingly, their money too” (Malcolm Alder, KPMG, 2011).

And I agree with that. This space has high potential for enterprises… So, lets start with two slides with some statistics. So, these statistics are from We Are Social’s work on digital trends. They find internet activity increasing by 10% every year; 10% growth in social media users; and growth of 4% in social media users accessing via mobile; which takes us to 17% of the total population actively engaging in social media on mobile.

So, in terms of organisations going to social media, it is clearly important. But it is also a confusing question. We can see that in 2010 70%+ of big international organisations were actively using social media, but of these 80% have not achieved the intended business benefits. So, businesses are expending time and energy on social media but they are not accruing all of the benefits that they have targeted.

So, for me social media are new ways of working, new business models, new opportunities, but also bringing new risks and challenges. And there are questions to be answered that we face every day in an organisational context.

The social media landscape today is very very diverse, there is a high density… There are many platforms, sites, medias… Organisations are confused by this landscape and they require help to navigate this space. The choice they have is usually to go to the biggest social media in terms of total users – but is that a good strategy? They need to choose sites with good business value. There are some challenges when considering external sites versus internal sites – should they replicate functionality themselves? And where are the values and risks of integrating social media platforms with enterprise IT systems? For instance listening to social media and making connections back to CRMs (Customer Relationship Management systems).

What about using social media for communications? You can experiment, and learn from those experiments… But that makes more sense when these tools are new, and they are not anymore. Is experimenting always the best approach? How can we move faster? Clients often ask if they can copy/adopt the digital strategies of their competitors but I think generally not – these approaches have to be specific to the context and audience.

Social media has a fast evolution speed, so agility is required… Organisations can struggle with that in terms of their own speed of organisational change. A lot of agility is required to address new technologies, new use cases, new skills. And decisions over skills and whether to own the digital transformation process, or to delegate it to others.

The issue of Return on Investment (ROI) is long standing but still important. Existing models do not work well with social media – we are in a new space, new technology, a new domain. There is a need to justify the value of these kinds of projects, but I think a good approach is to work on new social constructs, such as engagement, sentiment, retention, “ROR” – Return on Relationship, collective intelligence… But how does one measure these?

And organisations face challenges of governance… Understanding rules and policies of engagement on social media, on understanding issues of privacy and data protection. And thought around who can engage on social media.

So, I have presented some key challenges… Just a few. There are many more on culture, change, etc. that need to be addressed. I think that it is important that businesses and researchers work together on social media.

Q&A

Q1) Could you tell me something on Return on Relationships… ?

A1) This is a new approach. Sometimes the measure of Return on Investment is to measure every conversation and all time spent… ROR is about long term relationships with customers, partners, suppliers… and it is about having benefits after a longer period of time, rather than immediate Return on Investment. So some examples include turning some customers into advocates –so they become your best salespeople. That isn’t easy, but organisations are really very aware about these social constructs.

Q1) And how would you calculate that?

Comment) That is surely ROI still?

Comment) So, if I have a LinkedIn contact, and they buy my software, then that is a return on investment, and value from social capital… There is a time and quality gain too – you identify key contact and context here. Qualitative but eventually quantitative.

A1) There absolutely is a relationship between ROR and ROI.

Q2) It was interesting to hear your take on research. What you said reminded me of 20 years ago when we talked about “Quality Management” and there was a tension between whether that should be its own role, or part of everyone’s role.

A2) Yes, so we have clients that do want “community management” and ask us to do that for them – but they are the experts in their own work and relationships. The quality of content is key, and they have that expertise. Our expertise is around how to use social media as part of that. The good approach is to think about new ways to work with customers, and to define with our consulting customers what they need to do that. We have a coaching role, helping them to design a good approach.

Q3) Thank you for your presentation. I would like to ask you if you could think of a competency framework for good community management, and how you would implement that.

A3) I couldn’t define that framework, but from what I see some key skills in community management are about expertise – people from the business who understand their own structure, needs, knowledge. I think that communication skills need to be good – writing skills, identifying good questions, an ability to spot and transform key questions. From our experience, knowing the enterprise, communication skills and coordinating skills are all key.

Q3) What about emotional engagement?

A3) I think emotional engagement is both good and dangerous. It is good to be invested in the role, but if people are too invested there is a clear line to draw between professional engagement and personal engagement. And that can make it dangerous.

Stream B – Mini Track on Empowering Women Through Social Media (Chair – Danilo Piaggesi)

Danilo: I proposed this mini track as I saw that the issues facing women in social media were different, but that women were self-organising and addressing these issues, so that is the genesis of this strand. My own background is in ICT in development and developing countries – which is why I am interested in this area of social media… The UN Sustainable Development Goals (SDG), which include ICT, have been defined as needing to apply to developing and developed countries. And there is a specific goal dedicated to Women and ICT, which has a deadline of 2030 to achieve this SDG.

Sexting & Intimate Relations Online: Identifying How People Construct Emotional Relationships Online & Intimacies Offline
– Esme Spurling, Coventry University, West Midlands, UK

Sexting and intimate relations online have accelerated with the use of phones and smartphones, particularly platforms such as SnapChat and WhatsApp… Sexting, for the purpose of this paper, is about the sharing of intimate texts through digital media. But this raises complexity for real life relationships, how the online experience relates to that, and how heterosexual relationships are mediated. My work is based on interviews.

I will be talking about “sex selfies”, which are distributed to a global audience online. These selfies (Ellie is showing examples from the “sexselfie” hashtag) purport to be intimate, despite their global sharing and nature. The hashtags here (established around 2014) show heterosexual couples… There is (by comparison to non-heterosexual selfies) a real focus on women’s bodies, which is somewhat at odds with expectations around girls and women showing an interest in sex. Are we losing our memory of what is intimate? Are sex selfies a way to share and retain that memory?

I spoke to women in the UK and US for my research. All men approached refused to be interviewed. We have adapted the way we communicate face to face through the way we connect online. My participants reflect social media trends already reported in the media, of the blurring of different spheres of public and private. And that is feeding into our intimate lives too. Prensky (2001) refers to this generation as “Digital Natives” (I insert my usual disclaimer that this is the speaker not me!), and it seems that this group are unable to engage in that intimacy without sharing the experience. And my work focuses on sharing online, and how intimacy is formed offline. I took an ethnographic approach, and my participants are very much of a similar age to me, which helped me to connect as I spoke to them about their intimate relationships.

There becomes a dependency on mobile technologies, of demand and expectation… And that is leading to a “leisure for pleasure” mentality (Cruise?)… You need that reward and return for sharing, and that applies to sexting. Amy Hasinoff notes that sexting can be considered a broadcast medium. And mainstream media has also been scrutinising sexting and technology, and giving coverage to issues such as “revenge porn” – which was made a criminal offence in 2014. This made sexting more taboo and changed public perceptions – with judgement online of images of bodies shared on Twitter. When men participate they sidestep a label, being treated with the highly gendered “boys will be boys” casualness. By contrast women showing their own agency may be subject to “slut shaming” (2014 onwards), but sexting continues. And I was curious to find out why this continues, and how the women in my studies relate to comments that may be made about them. Although there is a feeling of safety (and facelessness) about posting online, versus real world practices.

An expert interview with Amy Hasinoff raised the issue of expectations of privacy – that most of those sexting expect their image to be private to the recipient. Intimate information shared through technology becomes tangled with the surveillance culture that is bound up with mobile technologies. Smartphones have cameras, microphones… This contributes to a way of imagining the self that is formed only by how we present ourselves online.

The ability to sext online continues, with Butler noting both the freedom of expression online and the way in which others’ comments make a real impact on the lives of those sharing.

In conclusion it is not clear the extent to which digital natives are sharing deliberately – perceptions seemed to change as a result of the experience encountered. One of my participants felt less in control after reflective interviews about her practice, than she had before. We demand communication instantly… But this form of sharing enables emotional reliving of the experience.

Q&A

Q1) Really interesting research. Do you have any insights in why no men wanted to take part?

A1) The first thing is that I didn’t want to interview anyone that I knew. When I did the research I was a student; I managed to find fellow student participants but the male participants cancelled… But I have learned a lot about research since I undertook my evidence gathering. Women were happy to talk – perhaps because they felt judged online. There is a lot I’d do differently in terms of the methodology now.

Q2) What is the psychological rationale for sharing details like the sex selfies… Or even what they are eating. Why is that relevant for these people?

A2) I think that the reason for posting such explicit sexual images was to reinforce their heterosexual relationships and that they are part of the norm, as part of their identity online. They want others to know what they are doing… As their identity online. But we don’t know if they have that identity offline. When I interviewed Amy Hassenhof she suggested it’s a “faceless identity” – that we adopt a mask online, and feel able to say something really explicit…

A Social Network Game for Encouraging Girls to Engage in ICT and Entrepreneurship: Findings of the Project MIT-MUT
–  Natalie Denk, Alexander Pfeiffer and Thomas Wernbacher, Donau Universität Krems, Ulli Rohsner, MAKAM Research GmbH, Wien, Austria and Bernhard Ertl, Universität der Bundeswehr, Munich, Germany

This work is based on a mixture of literature review, qualitative analysis of interviews with students and teachers, and the development of the MIT-MUT game, with input and reflection from students and teachers. We are testing the game, and will be sharing it with schools in Austria later this year.

Our intent was to broaden the career perspectives of girls aged 12-14 – this is younger than is usually targeted but it is the age at which they have to start making decisions and steps in their academic life that will impact on their career. Their decisions are impacted by family, school, peer groups. But the issue is that a lot of girls don’t even see a career in ICT as an option. We want to show them that it is a possibility, to show them the skills they already have, and that this offers a wide range of opportunities and possible career pathways. We also want to provide a route to mentors who are role models, as this is still a male dominated field especially when it comes to entrepreneurship.

Children and young people today grow up as “digital natives” (Prensky 2001) (again, my usual critical caveat), they have a strong affinity towards digital media, they frequently use the internet, and they use social media networks – primarily WhatsApp, but also Facebook and Instagram. Girls also play games – it’s not just boys that enjoy online gaming – and they do that on their phones. So we wanted to bring this all together.

The MIT-MUT game takes the form of a 7 week long live challenge. We piloted this in Oct/Nov 2015 with 6 schools and 65 active players in 17 teams. The main tasks in the game essentially role play ICT entrepreneurship… founding small start up companies, creating a company logo, and finding an idea for an app for the target group of youth. They then needed to turn their idea into a paper prototype – drawing screens and ideas on paper to demonstrate basic functionality and ideas. The girls had to make a video of this paper prototype, and also present their company on video. We deliberately put few technological barriers in place, but the focus was on technology, and the creative aspects of ICT. We wanted the girls to use their skills, to try different roles, to have the opportunity to experiment and be creative.

To bring the schools and the project team together we needed a central connecting point… We set up a SeN (Social Enterprise ?? Network), and we did that with Gemma – a Microsoft social networking tool for use within companies, which is closed to outside organisations. This was very important for us, given the young age and need for safety in our target user group. They had many of the risks and opportunities of any social network but in this safe bounded space. And, to make this more interesting for the girls, we created a fictional mentor character, “Rachel Lovelace” (named for Ada Lovelace), who is a Silicon Valley entrepreneur coming to Austria to invest. The students see a video introduction – we had an actress record about 15 video messages. So everything from the team was delivered through the character of Rachel, whether video or in her network.

A social network like Gemma is perfect for gamification aspects – we did have winners and prizes – but we also had achievements throughout the challenge for finishing a phase, making a key contribution, etc. And of course there is a “like” button, the ability to share or praise someone in the space, etc. We also created some mini games, based on favourite genres of the girls – the main goal of these was to act as starting points for discussing competencies in ICT and Entrepreneurship contexts. With the idea that if you play this game you have these competencies, so why not consider doing more with that.

So, within Gemma, the interface looks a lot like Facebook… And I’ll show you one of these paper prototypes in action (it’s very nicely done!), see all of the winning videos: http://www.mitmut.at/?page_id=940.

To evaluate this work we had a quantitative approach – part of the game presented by Rachel – as well as a qualitative approach based on feedback from teachers and some parents. We had 65 girls, 17 teams, and 78% completed the challenge at least to phase 4 (the video presentation – all the main tasks completed). 26% participated in the voting phase (phase 5). Of our participants 30 girls would recommend the game to others, 10 were uncertain, and 4 would not recommend the game. They did enjoy the creativity, design, the paper prototyping. They didn’t like the information/the way the game was structured. The communication within the game was rated in quite a mixed way – some didn’t like it, some liked it. The girls interested in ICT rated the structure and communication more highly than others. The girls stayed motivated but didn’t like the long time line of the game. And we saw a significant increase in knowledgeability of ICT professions, they reported an increase in feeling talented, and they had a higher estimation of their own presentation skills.

In the qualitative approach students commented on the teamwork, the independence, the organisational skills, the presentation capabilities. They liked having a steady contact person (the Rachel Lovelace character), the chance of winning, and the feeling of being part of a specialist project.

So now we have a beta version, and we have added a scoring system for contributions with points and stars. We had a voting process but didn’t punish girls for not delivering on time – we wanted to be very open… But the girls thought that we should have done this and given more objective, more strict feedback. And they wanted more honest and less enthusiastic feedback from “Rachel” – they felt she was too enthusiastic. We also restructured the information a bit…

For future development we’d like to make a parallel programme for boys. The girls appreciated the single sex nature of the network. And I would personally really like to develop a custom made social media network for better gamification integration, etc. And I’d like…

Q&A

Q1) I was interested that you didn’t bring in direct technical skills – coding, e.g. on Raspberry PIs etc. Why was that?

A1) We intentionally skipped the programming part… They have lessons and work on programming already… But there is a lack of that idea of creative ways to use ICT, the logical and strategic skills you would need… They already do informatics as part of their teaching.

Q2) You set this up because girls and women are less attracted to ICT careers… But what is the reason?

A2) I think they can’t imagine having a career in ICT… I think that is mainly about gender stereotypes. They don’t really know women in ICT… And they can’t imagine what that is as a career, what it means, what that career looks like… And how to act out their interests…

And with that I’ve switched to the Education track for the final part of this session… 

Social Media and Theatre Pedagogy for the 21C: Arts-Based Inquiry in Drama Education – Amy Roberts and Wendy Barber, University of Ontario, Canada

Amy is starting her presentation with a video on social media and performance pedagogy, the blurring of boundaries and direct connection that it affords. The video notes that “We have become a Dramaturgical Community” and that we decide how we present ourselves.

Theatre does not exist without the audience, and theatre pedagogy exists at the intersection between performance and audience. Cue another video – this time more of a co-presentation video – on the experience of the audience being watched… Blau in The Audience (1990) talks about the audience “not so much as a mere congregation of people as a body of thought and desire”.  Being an audience member is now a standard part of everyday life – through YouTube, Twitter, Facebook, Vine… We see ourselves every day. The song “Digital Witness” by Saint Vincent sums this up pretty well.


Richard Allen in 2013 asked whether audiences actually want conclusive endings in their theatre, instead showing a preference for more open ended, videogame-type experiences. When considering what modern audiences want… liveness is prioritised in all areas of life and that does speak to the immediacy of theatre. Originally “live” was about co-presence but digital spaces are changing that. The feeling of liveness comes from our engagement with technology – if we engage with machines, like we do with humans, and there is a response, then that feels live and immediate. Real time experiences give a feeling of liveness… One way to integrate that with theatre is through direct digital engagement across the audience, and with the performance. Both Baker and Auslander agree that liveness is about immediate human contact.

The audience is demanding live work that engages them in its creation and consumption through the social media spaces they use all the time. And that means educators have to be part of connecting the need for art and tech… So I want to share some of my experiences in attempting “drama tech” research. I’m calling this: “Publicly funded school board presents… Much ado about nothing”. I had been teaching dramatic arts for many years, looking at new technologies and the potential for new tools to enable students to produce “web theatre” around the “theatre of the oppressed” for their peers, with collaboration with the audience as creator and viewer. I was curious to see how students would use the 6 second restriction of Vine, and whether, using familiar tools, students could create work in forms familiar to them.

The project had ethics approval… All was set but a board member blocked the project as Twitter and Vine “are not approved learning tools”… I was told I’d have to use Moodle… Now I’ve used Moodle before… And it’s great but NOT for theatre (see Nicholls and Phillip 2012). Eisner (2009) says that “Education can learn from the arts that form and content cannot be separated. How something is said or done shapes the content of experience.” The reason for this blocking was that there was potential that students might encounter risks and issues that they shouldn’t access… But surely that is true of television, of life, everything. We have to teach students to manage risks… Instead we have a culture of blocking of content, e.g. anything with “games” in the name – even educational tools. How can you teach media literacy if you don’t have the support to do that, to open up? And this seems to be the case across publicly funded Ontario schools. I am still hoping to do this research in the future though…

Q&A

Q1) How do you plan to overcome those concerns?

A1) I’m trying to work with those in power… We had loads of safeguards in place… I was going to upload the content myself… It was really silly. The social media policy is just so strict.

Q1) They’ll have reasons, you have to engage with those to make that case…

Q2) Can I just ask what age this work was to take place with?

A2) I work with Grade 9-12… But this work specifically was going to focus on 17 and 18 year olds.

Q3) I think that many arts teachers are quite scared by technology – and you made that case well. You focus on technology as a key tool at the end there… And that has to be part of that argument.

A3) It’s both… You don’t teach the hammer, you teach how to use the hammer… My presentation is part of a much bigger paper which does address both the traditional approaches and the affordances of technology.

Having had a lovely chat with Amy over lunch, I have now joined Stream B – Monitoring and Privacy on Social media – Chair – Andree Roy

Monitoring Public Opinion by Measuring the Sentiment of Re-tweets on Twitter – Intzar Ali Lashari and Uffe Kock Wiil, University of Southern Denmark, Denmark

I have just completed my PhD at the University of Southern Denmark, and I’ll be talking about some work I’ve been doing on measuring public opinion using social media. I have used Twitter to collect data – this is partly because Twitter is most readily accessible and it is structured in a way that suits this type of analysis – it operates in real time, people use hashtags, and there are frequent actors and influencers in this space. And there are lots of tools available for analysis such as Tweetreach, Google Analytics, Cytoscape. My project, CBTA, combines monitoring and analysis of Tweets…

I have been looking at detection of tweets by geographical location, using a trend based data analyser, with data collection on a specific date and using network detection on negative comments. I also limited my analysis to tweets which have been retweeted – to show they have some impact. In terms of related studies supporting this approach: Stieglitz (2012) found that retweeting is a simple, powerful mechanism for information diffusion; Shen (2015) found that re-tweeting behaviour reflects the influence of the original poster. The sentiment analysis – a really useful quick assessment of content – looks at “positive”, “negative” and “neutral” content. I then used topic based monitoring for an overview of the wider public. The intent was to move towards real-time monitoring and analysis capabilities.

So, the CBTA Tool display shows you trending topics, which you can pick from, and then you can view tweets and filter by positive, negative, or neutral posts. The tool is working and the code will be shared shortly. In this system a keyword search collects tweets, which are then filtered (for spam etc). Once filtered, tweets are classified using NLTK, which categorises them into “Endorse RT”, “Oppose RT” and “Report RT”; the weighted retweets are then put through a process to compute net influence.
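To make that pipeline a little more concrete, here is a minimal sketch of the sentiment step – my own illustration, not the speaker’s CBTA code, which wasn’t shown. It assumes NLTK’s VADER analyser and a crude length-based spam filter; both are my assumptions, as the talk only said that tweets are filtered and then classified with NLTK.

```python
# Minimal sketch (my assumption, not the CBTA implementation) of classifying
# filtered tweets into positive / negative / neutral sentiment with NLTK.
# Requires: nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer

def classify_tweets(tweets):
    """Return (tweet, label) pairs with label 'positive', 'negative' or 'neutral'."""
    sia = SentimentIntensityAnalyzer()
    results = []
    for text in tweets:
        # illustrative spam filter: skip near-empty tweets
        if len(text.split()) < 3:
            continue
        compound = sia.polarity_scores(text)["compound"]
        if compound >= 0.05:
            label = "positive"
        elif compound <= -0.05:
            label = "negative"
        else:
            label = "neutral"
        results.append((text, label))
    return results
```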

So my work has looked at data from Pakistan around the terms: Zarb-e-Azb; #OpZarbeAzb; #Zerb-e-asb etc. I gathered tweets and retweets, and deduplicated those tweets with more than one hashtag. Once collected, the algorithm for measuring retweet influence used follower counts, onward retweets etc. And looking at the influence here, most of the influential tweets were those with a positive/endorsing tone.
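The exact weighting wasn’t given in the talk, but the idea of scoring influence from follower counts and onward retweets might look something like the following sketch – entirely my own, hypothetical formulation, with illustrative field names and a log-damping choice that is not from the paper.

```python
# Hypothetical sketch of a "net influence" score built from retweet reach.
# follower_count / onward_retweets are illustrative field names; the
# log-scaling (so one celebrity account doesn't dominate) is my assumption.
import math

def retweet_influence(retweets):
    """retweets: list of dicts with 'follower_count' and 'onward_retweets'."""
    influence = 0.0
    for rt in retweets:
        reach = math.log1p(rt["follower_count"])
        influence += reach * (1 + rt["onward_retweets"])
    return influence

def net_influence(endorse_rts, oppose_rts):
    """Endorsing influence minus opposing influence for a topic."""
    return retweet_influence(endorse_rts) - retweet_influence(oppose_rts)
```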

We now have case studies for Twitter, but also for other social media sites. We will be making the case studies available online. And we are looking at other factors; for instance we are interested in the location of tweets as a marker for accuracy/authenticity and to understand how other areas are influencing/influenced by global events.

Q&A

Q1) I have a question about the small amount of negative sentiment… What about sarcasm?

A1) When you look at data you will see I found many things… There was some sarcasm there… I have used NLTK but I added my own analysis to help deal with that.

Q2) So it registers all tweets right across Twitter? So can you store that data and re-parse it again if you change the sentiment analysis?

A2) Yes, I can reprocess it. In Twitter there is limited availability of Tweets for 7 days only so my work captures a bigger pool of tweets that can then be analysed.

Q3) Do you look at confidence scores here? Sentiment is one thing…

A3) Yes, this processing needs some human input to train it… But in this approach it is trained by data that is collected each week.

Social Media and the European Fundamental Rights to Privacy and Data Protection – Eva Beyvers, University of Passau and Tilman Herbrich, University of Leipzig, Germany

Tilman: Today we will be talking about Data Protection and particularly potential use in commercial contexts, particularly marketing. This is a real area of conflict in social media. We are going to talk about the fundamental rights to privacy and data protection in the EU, the interaction with other fundamental rights, and things like profiling etc. The Treaties and the Charter of Fundamental Rights (CFR) are primary EU law. There is also secondary law, including Directives (requiring transposition into national law, and not binding until then) and Regulations (binding in their entirety on all member states; they are automatically law in all member states).

In 2018 the CFR will become legally binding across the piece. In this change private entities and public bodies will all be impacted by the CFR. But how does one enforce those rights? They could institute a proceeding before a national court; the national court must then refer questions to the European Court of Justice, which will answer and provide clarifications, enabling the national courts to take a judgement on the specific case at hand.

When we look across the stakeholders, we see that they all have different rights under the law. And that means there is a requirement to balance those rights. The European Court of Justice (ECJ) has always upheld that the concerned rights and interests must be considered, evaluated and weighed in order to find an adequate balance between colliding fundamental rights – as an example the Google Spain data protection case, where Google’s commercial rights were deemed secondary to the individual’s rights to privacy.

Eva: Most social media sites are free to use, but this is made possible by highly profiled advertising. Profiling is articulated in Article 4 of the GDPR as including aspects of behaviours, personality, etc. Profiling is already seen as an issue that is a threat to data protection. We would argue that it poses an even greater threat: users are frequently comfortable giving their real name in order to find others, which means they are easily identifiable; users’ private lives are explicitly part of the individual’s profile and may include sensitive data; further, this broad and comprehensive data set has very wide scope.

So, on the one hand the user’s individual privacy is threatened, but so is the freedom to conduct a business (Art 16 CFR). The right to data protection (Article 8, CFR) rests on the idea of consent – and the way that consent is articulated in the law – that consent must be freely given, informed and specific – is incompatible with social networking services and the heavy level of data processing associated with them. These spaces adopt excessive processing, there is dynamic evolution of these platforms, and their concept is networking. Providers’ changes in the platform, affordances, advertising, etc. create continual changes in the use and collection of data – at odds with the specific requirements for consent. The concept of networking means that individuals manage information that is not just about themselves but also others – their image, their location, etc. European data protection law does nothing to accommodate the privacy of others in this way. There has been no specific ruling on the interaction of business and personal rights here, but given previous trends it seems likely that business will win.

These data collections by social networking sites also have commercialisation potential to exploit users’ data. It is not clear how this will evolve – perhaps through greater national law, in the changing of terms and conditions?

This is a real tension, with rights of businesses on one side, the individual on the other. The European legislator has upheld fundamental data protection law, but there is still much to examine here. We wanted to give you an overview of relevant concepts and rights in social media contexts and we hope that we’ve done this.

Q&A

Q1) How do these things change when most social media companies are outwith the legislative jurisdiction of Europe – they are global?

A1) The General Data Protection Regulation in 2018 will target companies in the EU, if they profile there. It was unclear until now… Previously you had to have a company here in Europe (usually Ireland), but in 2018 it will be very clear and very strict.

Q2) How has the European Court of Human rights fared so far in judgements?

A2) In the Google Spain case, and in another Digital Rights case, the ECJ has upheld personal rights. And we see this also on the storage and retention of data… But the regulation is quite open; right now there are ways to circumvent it.

Q3) What are the consequences of non-compliance? Maybe the profit I make is greater than that risk?

A3) That has been an issue until now. Fines have been small. From 2018 it will be up to 5% of worldwide revenue – that’s a serious fine!

Q4) Is private agreement… Is the law stronger than private agreement? Many agree without reading, or without understanding – are they protected if they agree to something illegal?

A4) Of course you are able to contract and agree to data use. But you have to be informed… So if you don’t understand, and don’t care… The legislator cannot change this. This is a problem we don’t have an approach for. You have to be informed, have to understand purpose, and understand means and methods, so without that information the consent is invalid.

Q5) There has been this Safe Harbour agreement breakdown. What impact is that having on regulations and practices?

A5) The regulations, probably not? But the effect is that data processing activities cannot be based on Safe Harbour agreement… So companies have to work around or work illegally etc. So now you can choose a Data Protection agreement – standardised contracts to cover this… But that is insecure too.

Digital Friendship on Facebook and Analog Friendship Skills – Panagiotis Kordoutis, Panteion University of Social and Political Sciences, Athens and Evangelia Kourti, University of Athens, Greece

Panagiotis: My colleague and I were keen to look at friendship on Facebook. There is a lot of work on this topic of course, but very little work connecting Facebook and real life friendship from a psychological perspective. But lets start by seeing how Facebook describes itself and friendship… Facebook talk about “building, strengthening and enriching friendships”. Operationally they define friendship through digital “Facebook acts” such as “like”, “comment”, “chat” etc. But this creates a paradox… You can have friends you have never met and will never meet – we call them “unknown friends” and they can have real consequences for life.

People perceive friendship in Facebook in different ways. In Greece (Savrami 2009; Kourti, Kordoutis and Madoglou 2016) young people see Facebook friendship as a “phony” space, due to “unknown friends” and the possibility of manipulating self presentation. As a tool for popularity, public relations, useful acquaintances; a doubtful and risky mode of dating; the resort of people with a limited number of friends and a lack of “real” social life; and the resort of people who lack friendship skills (Buote, Wood and Pratt 2009). BUT it is widely used and most are happy with their usage…

So, how about psychological definitions of analog friendship? Baron-Cohen and Wheelwright (2003) talk about friendship as survival-supporting social interdependence based on attachment and instrumentality skills.

Attachment involves high interdependence, commitment, systematic support, responsiveness, communication, investment in joint outcomes, high potential for developing the friendship – it is not static but dynamic. It is being satisfied by the interaction with each other, with the company of each other. They are happy to just be with someone else.

Instrumentality is also part of friendship though, and it involves low interdependence, low commitment, non-systematic support, low responsiveness, superficial communication, expectations for specific benefits and personal outcomes, and little potential for developing the relationship – a more static arrangement. These friends are satisfied by interacting with others for a specific goal or activity.

Now the way that I have presented this can perhaps look like the good and the bad side… But we need both sides of that equation, we need both sets of skills. What we perceive as friendship in analog life usually has a prevalence of attachment over instrumentality…

So, why are we looking at this? We wanted to look into whether those common negative attitudes about Facebook and friendship were accurate. Will FB users with low friendship skills have more Fb friends? Will they engage in more Fb “friendship acts”? Will they use Fb more intensely? Will they have more “unknown” friends than users with stronger friendship skills? And when I say stronger friendship skills, I mean those with more attachment skills versus those with more instrumental skills.

In our method here we had 201 participants, most of whom were women (139), from universities and technological institutes in metropolitan areas of Greece. All had profiles on Fb, the median age was 20, all had used Facebook for 2 hours the day before, and many reported being online at least 8 hours a day, some on a permanent ongoing basis. We asked them how many friends they have… Then we asked them for an estimate of how many they know in person. Then we asked them how many of these friends they have never met or will never meet – they provided an estimation. There were other questions about interactions in Facebook. We used a scale called the Facebook Intensity Scale (Ellison, Steinfield and Lampe 2007) which looks at the importance of Facebook in the person’s life (this is a 12-point Likert scale). We also used an Active Digital Sociability Scale which we came up with – this was a 12-point Likert scale on Fb friendship acts etc. And we used a Friendship Questionnaire (Baron-Cohen and Wheelwright 2003). This was a paper exercise, taking less than 30 minutes.
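For readers unfamiliar with this kind of scale work, the scoring step is roughly arithmetic: average a participant’s Likert item responses, then split the sample into groups. The sketch below is purely my own illustration of that idea – the item handling and the median-split rule are hypothetical, not taken from the paper.

```python
# Illustrative only: turning Likert item responses into a friendship-skills
# score and a stronger/weaker split. The median-split rule is my assumption.
from statistics import median

def scale_score(item_responses):
    """Mean of one participant's Likert item responses (e.g. values 1-5)."""
    return sum(item_responses) / len(item_responses)

def split_by_skill(participants):
    """participants: dict of participant id -> list of questionnaire item scores."""
    scores = {pid: scale_score(items) for pid, items in participants.items()}
    cut = median(scores.values())
    stronger = [pid for pid, s in scores.items() if s > cut]
    weaker = [pid for pid, s in scores.items() if s <= cut]
    return stronger, weaker
```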

When we looked at stronger and weaker friendship skills groups – we had 44.3% of participants in the stronger friendship skills group, 52% in the weaker friendship skills group. More women had stronger friendship skills – consistent with the general population across countries.

So, firstly do people with weaker friendship skills have more friends? No, there was no difference. But we found a gender result – men had more friends in facebook, and also had weaker friendship skills.

Do people with weaker friendship skills engage more frequently in Fb friendship operations or friendship acts? No. No difference. Chatting was most popular; browsing and liking were the most frequent acts regardless of skills. Less frequent were participating in groups, check-ins and gaming. BUT a very telling difference: men were more likely to comment than women, and that’s significant for me.

Do people with weaker friendship skills use Fb more intensively? Yes and no. There was a difference… but those with stronger friendship skills showed higher Fb intensity compared to those with weaker friendship skills. Men with stronger skills were more intensive in their use than women with strong skills.

Do people with weaker friendship skills have more friends on Facebook? No. Do they have more unknown friends? No. But there was a gender effect: 16% of men have unknown friends, only 9% of women do. Do those with weaker friendship skills interact more with unknown friends? No, the opposite. Those with stronger skills interact more with unknown friends. And so on.

And do those with weaker friendship skills actually meet unknown friends from Fb in real life? Yes, but the opposite of what was expected. If I have stronger skills I’m more likely to meet you in real life… if I am a man… The percentages are small (3% of men, 1% of women).

So, what do I make of all this? Facebook is not the resort of people with weak friendship skills. Our data suggests it may be an advantageous space for those with higher friendship skills; it is a social space regulated by lots of social norms – it is an extension of what happens in real life. And what is the norm at play? It is the famous idea that men are encouraged to be bold, women to be cautious and apprehensive. Women have stronger social skills, but Facebook and its dynamics suppress them, and enhance men with weaker skills… So, that’s my conclusion here!

Q&A

Q1) Very interesting. When men start to see someone they haven’t met before… Wouldn’t it be women? To hit on them?

A1) Actually yes, often it is dating. But men are eager to go on about it… to interact and go on to meet. Women are very cautious. We have complemented this work with qualitative work that shows women need much longer interaction – they need to interact for maybe 3 years before meeting. Men are not so concerned.

Q2) You haven’t talked about quality etc. of your quantitative data?

A2) I haven’t mentioned it here, but it’s in the paper (in the Proceedings). The Friendship questionnaire is based on established work, saw similar distribution ratios as seen elsewhere. We haven’t tried it (but are about to) with those with clinical status, Aspergers, etc. The Facebook Intensity questionnaire had a high reliability alpha.

Q3) Did you do any comparison of this data with any questions on trolling, cyber bullying, etc. as the consequences for sharing opinion or engaging with strangers for women is usually harsher than for men.

A3) Yes, some came up in the qualitative study where individuals were able to explain their reasons.

Q4) Did your work look at perceptions by employers etc. And how that made a difference to selecting friends?

A4) We didn’t look at this, but others have. Some are keen not to make friends in specific groups – they use Facebook to sell a specific identity to a specific audience.

Q5) The statistics you produced are particularly interesting… What is your theoretical conjecture as a result of this work?

A5) My feeling is that we have to see Facebook as an alternative mode of socialising. It has been normalised, so the same social rules functioning in the rest of society do function in Facebook. This was an example. It sounds commonplace but it is important.

The Net Generation’s Perceptions of Digital Activism – Louise Stoch and Sumarie Roodt, University of Cape Town, South Africa

Sumarie: I will be talking about how the Net Generation view digital activism. And the reason this is of interest to me is because of the many examples of digital activism we see around us. I’ll talk a bit about activism in South Africa, and particularly a recent campaign called “Fees Must Fall”.

There are various synonyms for Digital Activism but that’s the term I’ll use. So what is this? Its origins lie with the internet, with connection and mobilisation. We saw the rise of social media and the huge increase in people using it. We saw economies and societies coming online and using these spaces over the last 10 years. What does this mean for us? Well it enables quick and far-reaching information sharing. And there is a video that goes with this too.

Joyce 2013 defines Digital Activism as being about “the use of digital media in collective efforts to bring about social or political change, using methods outside of routine decision-making processes”. It is non-violent and civil but can involve hacking (Edwards et al. 2013). We see digital activism across a range of approaches: from slacktivism (things that are easy to participate in); online activism; internet activism; cyber activism; to hacktivism. That’s a broad range, there are subtleties that divide into these and other terms, and different characteristics of these types of activism.

Some examples…

In 2011 we saw revolutions in Egypt, Tunisia, Occupy Wall Street;

2012-14 we saw BringBackOurGirls, and numerous others;

2015 onwards we have:

  • RhodesMustFall – on how Cecil John Rhodes took resources from the indigenous communities, and recent removals of statues etc. and naming of buildings, highly sensitive.
  • FeesMustFall – about providing free education to everybody, particularly university – less than 10% of South Africans go to university and they tend to be those from more privileged backgrounds. As a result of that we weren’t allowed to raise our fees for now, and we are encouraged to find other funders to subsidise education, and we cannot exclude anyone because of lack of economic access; the government will help but… there is a lot of conflict there, particularly around corruption. Government has also classified universities as advantaged or non-advantaged and distributes funds much more to non-advantaged universities.
  • ZumaMustFall – our president is also famous for causing havoc politically and economically for what many see as very poor decisions, particularly under public scrutiny in the last 12 months.

In the room we are having a discussion about other activist activities, including an Israeli campaign against internet censorship law targeted at pornography etc. but including political and cultural aspects. Others mention 38 degrees etc. and successful campaigns to get issues debated. 

Now, digital activism can be on any platform – not necessarily Facebook or Twitter.

When we look at who our students are today – the “Net Generation”, “Millennials”, “Digital Natives” – the characteristics (Oblinger and Oblinger) associated with this group include: confidence with technology, always connected, immediate, social and team orientated, diverse, visual, education driven, emotionally open. But this isn’t homogenous; not all students will have these qualities.

So, what did we do with our students to assess their views? We looked at 230 students, and targeted those looked at in the literature: those born in any year from 1983 to 2003, and they needed to have some form of online identit(ies). We had an online questionnaire that ran over 5 days. We analysed with Qualtrics, and thematic analysis. There are limitations here – all students were registered in the Comms department – business etc.

In terms of the demographics: male participants were 38%, female were 62%; average age was 22, minimum was 17, maximum was 33. We asked about the various characteristics using Likert scale questions… showing that all qualify sufficiently to be this “Net Generation”. We asked if they paid attention to digital activism… Most did, but it’s not definitive. Now this is the beginning of a much bigger project…

We asked if the participants had ever signed an online petition – 145 had; and 144 believed online petitions made a difference. We also asked if the internet and social media have a positive effect on an activism campaign – 92% thought they do, and that has huge interest for companies and advertisers. And 89% of participants felt the use of social media in these causes has contributed to creating a society that is more aware of important issues.

What did we learn? Well we did see that this generation are inclined to participate in slacktivism. They believe digital activism makes a difference. They pay attention to online campaigns and are aware of which ones have been successful – at least in terms of having some form of impact or engagement.

Now, if you’d like access to the surveys, etc. do get in touch.

Q&A

Q1) How does UCT engage with the student body around local activism?

A1) Mostly that has been digital, with the UCT Facebook page. There were also official statements from the University… But individual staff were discouraged from reacting, with freedom of speech for the students. It increased conflict in some ways, but it also made students feel heard. Hard to call which side it fell on. Policy change is being made as a result of this work… They had a chance to be heard. We wanted free speech (unless totally inappropriate).

Q2) I see that you use a lot of “yes” and “no” questions… I like that but did you then also get other data?

A2) Yes. I present that work here. This paper doesn’t show the thematic analysis – we are still working on submitting that somewhere. We have that data, so once the full piece is in a journal we can let you know.

Q3) Do you know any successful campaigns in your context?

A3) Yes, FeesMustFall started in individual universities, and turned then to the government. It actually got quite serious, quite violent, but that definitely has changed their approach. And that campaign continues and will continue for now.

At this point of the day my laptop lost juice, the internet connection dropped, and there was a momentary power outage just as my presentation was about to go ahead! All notes from my strand are therefore from those taken on my mobile – apologies for more typos than usual!

Stream C – Teaching and Supporting Students – Chair – Ted Clark

Students’ Digital Footprints: Curation of Online Presences, Privacy and Peer Support – Nicola Osborne and Louise Connelly, University of Edinburgh, UK

That was me!

My slides are available on Prezi here: https://prezi.com/hpphwg6u-f6b/students-digital-footprints-curation-of-online-presences-privacy-and-peer-support/

The paper can be found in the ECSM 2016 Proceedings, and will also be shared on the University of Edinburgh Research Explorer along with others on the Managing Your Digital Footprint (research strand) research: http://www.research.ed.ac.uk/portal/en/publications/students-digital-footprints(5f3dffda-f1b4-470f-abd4-24fd6081ab98).html

Please note that the remaining notes are very partial as taken on my smartphone and, unfortunately, somewhat eaten by the phone in the process… 

How do you Choose a Friend? Greek Students’ Friendships in Facebook – Evangelia Kourti, University of Athens and Panagiotis Kordoutis and Anna Madoglou, Panteion University of Social and Political Sciences, Greece

This work, relating to Panagiotis’ paper earlier (see above) looked at how individuals make friends on Facebook. You can find out more about the methodology in this paper and Panagiotis’ paper on Analog and Facebook friends.

We asked our cohort of students to tell us specifically about their criteria for making new friends, whether they were making the approach for friendship or responding to others’ requests. We also wanted to find out how they interacted with people who were not (yet) their friends in Facebook, and what factors played a part. The data was collected in a paper questionnaire with the same cohort as reported in Panagiotis’ paper earlier today.

Criteria for interacting with a friend never met before, within Facebook: the most frequent answer was “I never do”, but the next most popular responses were common interests and interest in getting to know others better. Physical appearance seems to play a part, more so than previous interactions but less so than positive personality traits.

Criteria for deciding to meet a previously unknown friend. Most popular response here was “I never do so”, followed by sufficient previous FB interaction, common acquaintances, positive personality etc. less so.

Correspondence Analysis – I won’t go into here, very interesting in terms of gender. Have a look at the Proceedings. 

The conclusion is that Facebook operates as a social identity tool, and supports offline relationships. Self-involvement with the medium seems to define selection criteria compatible with different social goals, reinforcing one’s real-life social network.

Q&A

Q1) I’m very interested in how FB suggests new friends. Did students comment on that?

A1) We didn’t ask about that.

Q2) Isn’t your data gender biased in some way – most of your participants are female?

A2) Yes. But we continue this… With qualitative data it’s a problem, but means and standard deviation cover that. 

Q2) Reasons for sending a request to who you don’t know. First work by Ellison etc. showed people connecting with already known people… I wonder if it is still true? 

A2) Interesting questions. We must say that students answer to their professor in a uni context, that means maybe this is an explanation… 

Comment) Facebook gives you status for numbers and types of friends etc. 

A2) it’s about social identity and identity construction. Many have different presences with different goals. 

Comment) There is a bit of showing off in social media. For status.

Professional Development of Academic Staff in the use of Social Media for Teaching and Learning – Julie Willems, Deakin University, Burwood, Australia

This work has its roots in 2012. From then to 2015 I ran classes for staff on using social media. This follows conversations I’ve heard around the place about expecting staff to use social media without training.

Now I use a very broad definition of social media – from mainstream sites to mobile apps to gaming etc. Media that accesses digital means for communication in various forms. 

Why do we need staff development for social media? To deal with the concerns of staff, the fact that students are moving there, and also super-enthusiasm…

My own experience is of colleagues who have run with it, which has raised all sorts of concerns. Some would say that an academic should be doing teaching, research and service, and development can end up being the missing leg on the chair there. And staff development is not just about development on social media but also within social media.

We ran some webinars within the Zoom webinar tool, showing Twitter use with support online, offline and on Twitter – particularly important for a distributed campus like we have.

When we train staff we have to think about the pedagogy, we have to think about learning outcomes. We need to align the course structure with the LOs, and also to consider staff workload in how we do that training. What will our modes of delivery be? What types of technology will they meet and use – and what prep/overhead is involved in that? We also need to consider privacy issues. And then how do you fill that time?

So the handout I’ve shared here was work for a one-day course, to be delivered in a flipped classroom – prep first, in person, then online follow up. It could be completed quickly but many spent more time on these.

This PPT is from a module I developed for staff at Monash University, with social media at the intersection of formal and informal learning, and the interaction of teacher-directed learning and student-centred learning. That quadrant model is useful to be aware of: Willem Blakemore(?): 4QF.

Q&A

Q1) What was the object among staff at your university?

A1) First three years were optional. This last year Monash require staff to do 3 one day courses per year. One can be a conference with a full report. Social Media is one of 8 options. Wanted to give an encouragement for folk to attend. 

Q2) How many classes use your social media as a result?

A2) I’ve just moved institution. One of our architecture lecturers was using FB in preference to the LMS: students love it, faculty are concerned. Complex. At my current university social media isn’t encouraged but it is used. Regardless of attitude social media is in use… And we at least have to be aware of that.

Q3) I was starting to think that you were encouraging faculty staff to use Social media alone, rather than with LMS.

A3) At Monash reality was using social alongside LMS. That connection discouraged in my new faculty. 

Q4) I loved that you brought up that pressure from teaching staff – as so many academics are in social media now, they are much more active and there is a real pressure to integrate.

A4) I think that gap is growing too… between resisters and those keen to use it. Students are aware of what they share – a demi-formal space… We have to be aware.

Q5) do you have a range of social media tools or just Facebook?

A5) Mainly Facebook, sometimes Twitter and LinkedIn. I’m in engineering and architecture.

Q5) Are they approved for use by faculty?

A5) Yes, the structure you have there had been. 

Q6) Do you also encourage academic staff to use academic networking sites?

A6) It depends on context. Depends… ResearchGate is good for publications, Academia.edu is like a business card.

Q7) Reward and recognition

A7) Stuff on sheet was for GCAP… Came out of that… 

Q8) Will we still have these requirements to train in, say, 5 years time? Surely they’ll be like pen and pencil now?

A8) Maybe. Universities are keen for good profiles though, which means this stuff matters in this competitive academic marketplace. 

And with that Day One has drawn to a close. I’m off to charge a lot of devices and replace my memory sticks! More tomorrow in a new liveblog post. 

Jul 072016
 

On 27th June I attended a lunchtime seminar, hosted by the University of Edinburgh Centre for Research in Digital Education, with Professor Catherine Hasse of Aarhus University.

Catherine is opening with a still from Ex Machina (2015, dir. Alex Garland). The title of my talk is the difference between human and posthuman learning; I’ll talk for a while but I’ve moved a bit from my title… My studies in posthuman learning have moved me to more of a posthumanistic learning… Today human beings are capable of many things – we can transform ourselves, and ourselves in our environment. We have to think about that and discuss that, to take account of that in learning.

I come from the centre for Future Technology, Culture and Learning, Aarhus University, Denmark. We are hugely interdisciplinary as a team. We discuss and research what is learning under these new conditions, and to consider the implications for education. I’ll talk less about education today, more about the type of learning taking place and the ways we can address that.

My own background is in the anthropology of education in Denmark, specifically looking at physicists. In 2015 we got a big grant to work on “The Technucation Project” and we looked at the anthropology of education in Denmark with nurses and teachers – and the types of technological literacy they require for their work. My work (in English) has been about “Mattering” – the learning changes that matter to you. The learning theories I am interested in acknowledge cultural differences in learning, something we have to take account of. What it is to be human is already transformed. Posthumanistic learning involves new conceptualisations and material conditions that change what it was to be human. It was and is ultra-human to be learners.

So… I have become interested in robots… They are coming into our lives. They are not just tools. Human beings encounter tools that they haven’t asked for. You will be aware of predictions that over a third of jobs in the US may be taken over by automated processes and robots in the next 20 years. That comes at the same time as there is pressure on the human body to become different, at the point at which our material conditions are changing very rapidly. A lot of theorists are picking up on this moment of change, and engaging with the idea of what it is to be human – including those in Science and Technology Studies, and feminist critique. Some anthropologists suggest that it is not geography but humans that should shape our conceptions of the world (Anthropos – Anthropocene), others differ and conceive of the capitalocene. When we talk about the posthuman, a lot of the theories acknowledge that we can’t think of the human in the same way anymore. Kirksey & Helmreich (2010) talk of “natural-cultural hybrids”, and we see everything from heart valves to sensors, to iris scanning… We are seeing robots, cyborgs, amalgamations, including how our thinking feeds into systems – like the stockmarkets (especially today!). The human is de-centered in this amalgamation but is still there. And we may yet get to this creature from Ex Machina, the complex sentient robot/cyborg.

We see posthuman learning in the uncanny valley… gradually we will move from robots that feel far away, to those with human tissues, to something more human and blended. The new materialism and robotics together challenge the conception of the human. When we talk of learning we talk about how humans learn, not what follows when bodies are transformed by other (machine) bodies. And here we have to be aware that in feminism people like Rosa Predosi(?) have been happy with the discarding of the human: for them it was always a narrative, it was never really there. The feminist critique is that the “human” was really Vitruvian man… But they also critique the idea that the posthu-man is a continuation of individual goal-directed and rational self-enhancing (white male) humans. And that questions the posthuman…

There are actually two ways to think of the posthuman. One way is posthuman learning as something that does away with useless, biological bodies (Kurzweil 2005), and we see transhumanists – Vernor Vinge, Hans Moravec, Natasha Vita-More – in this space that sees us heading towards the singularity. But the alternative is a posthumanistic approach, which is about cultural transformations of boundaries in human-material assemblages, referencing that we have never been isolated human beings, we’ve always been part of our surroundings. That is another way to see the posthuman. This is a case I make in an article (Hayles 1999): that we have always been posthuman. We also have, on the other hand, a Spinozist approach, which is about how we are, if we understand ourselves as de-centered, able to see ourselves as agents. In other words we are not separate from the culture, we are all nature-cultural… Not of nature, not of culture but naturecultural (Hayles; Haraway).

But at the same time, if it is true that human beings can literally shape the crust of the earth, we are now witnessing anthropomorphism on steroids (Latour, 2011 – Waiting for Gaia [PDF]). The Anthropocene perspective is that, if human impact on Earth can be translated into human responsibility for the earth, the concept may help stimulate appropriate societal responses and/or invoke appropriate planetary stewardship (Head 2014); the capitalocene (see Jason Moore) talks about moving away from Cartesian dualism in global environmental change. The alternative implies a shift from humanity and nature to humanity in nature; we have to counter capitalism in nature.

So from the human to the posthuman: I have argued that this is a way we can go with our theories… There are two ways to understand that, the singularist posthumanism or the Spinozist posthumanism. And I think we need to take a posthumanistic stance with learning – taking account of learning in technological naturecultures.

My own take here… We talk about intra-species differentiations. This nature is not nature as resource but rather nature as matrices – a nature that operates not only outside and inside our bodies (from global climate to the microbiome) but also through our bodies, including embodied minds. We do create intra-species differentiation, where learning changes what matters to you and others, and what matters changes learning. To create an ecologically responsible ultra-sociality we need to see ourselves as a species of normative learners in cultural organisations.

So, from my own experience: after studying physicists as an anthropologist I no longer saw the night sky the same way – they were stars and star constellations. After that work I saw them as thousands of potential suns – and perhaps planets – and that wasn’t a wider discussion at that time.

I see it as a human thing to be learners. And we are ultra-social learners. And that is a characteristic of being human. Collective learning is essentially what has made us culturally diverse. We have learning theories that are relevant for cultural diversity. We have to think of learning in a cultural way. Mediational approaches in collective activity. Vygotsky takes the idea of learners as social learners before we become personal learners, and that is about the mediation – not natureculture but cultureculture (Moll 2000). That’s my take on it. So, we can re-centre human beings… Humans are not the centre of the universe, or of the environment. But we can be at the centre and think about what we want to be, what we want to become.

I was thinking of coming in with a critique of MOOCs, particularly as those take a capitalocene position. But I think we need to think of social learning before we look at individual learning (Vygotsky 1981). And we are always materially based. So, how do we learn to be engaged collectively? What does it matter – for MOOCs for instance – if we each take part from very different environments and contexts, when that environment has a significant impact? We can talk about those environments and what impact they have.

You can buy robots now that can be programmed – essentially sex robots like “Roxxxy” – and are programmed by reactions to our actions, emotions etc. If we learn from those actions and emotions, we may relearn and be changed in our own actions and emotions. We are seeing a separation of tool-creation from user-demand in the capitalocene. The robots introduced into work places often do not replace the work that workers actually want support with. The seal robots to calm dementia patients down cover a role that many carers actually enjoyed in their work, the human contact and support. But those introducing them spoke of efficiency, the idea being to make employees superfluous but described as “simply an attempt to remove some of the most demeaning hard tasks from the work with old people so the work time can be used for care and attention” (Hasse 2013).

These alternative relations with machines are things we always react to; humans always stretch themselves to meet the challenge or engagement at hand. An inferentialist approach (Derry 2013) acknowledges many roads to knowledge, but the materiality of thinking reflects that we live in a world not just of cause but of reason. We don’t live in just a representationalist (Bakker and Derry 2011) paradigm, it is much more complex. Material wealth will teach us new things… But maybe these machines will encourage us to think we should learn more in a representative than an inferentialist way. We have to challenge the robotic space of reasons. I would recommend Jan Derry’s work on Vygotsky in this area.

For me robot representationalism has the capacity to make convincing representations… You can give and take answers but you can’t argue in the space of reasons… They cannot reason from this representation. Representational content is not articulated by determinate negation and complex concept formation. Algorithmic learning has potential and limitations, and is based on representationalism, not concept formation. I think we have to take a position on posthumanistic learning: with collectivity as a normative space of reasons; acknowledging mattering matter in concept formation; acknowledging human inferentialism; acknowledging transformation in environment…

Discussion/Q&A

Q1) Can I ask about causes and reasons… My background is psychology and I could argue that we are more automated than we think we are, that reasons come later…

A1) Inferentialism is challenging the idea of giving and taking reasons as part of a normative space. It’s not anything goes… It’s sort of narrowing it down: humans come into being, in terms of learning and thinking, in a normative space that is already there. Wilfrid Sellars says there is no “bare given” – we are in a normative space, it’s not nature doing this… I have some problems with the term dialectical… But it is a kind of dialectical process. If you give and take reasons, it’s not anything goes. I think Jan Derry has a better phrasing for this. But that is the basic sense. And it comes for me from analytical philosophy – which I’m not a huge fan of – but they are asking important questions about what it is to be human, and what it is to learn.

Q2) Interesting to hear you talk about Jan Derry. She talks about technology perhaps obscuring some of the reasoning process and I was wondering how representational things fitted in?

A2) Not in the book I mentioned but she has been working on this type of area at University of London. It is part of the idea of not needing to learn representational knowledge, which is built into technological systems, but for inferentialism we need really good teachers. She has examples about learning about the bible, she followed a school class… Who look at the bible, understand the 10 commandments, and then ask them to write their own bible 10 commandments on whatever topic… That’s a very narrow reasoning… It is engaging but it is limited.

Q3) An ethics issue… If we could devise robots or machines, AI, that could think inferentially, should we?

A3) A challenge for me – we don’t have enough technical people. My understanding is that it’s virtually impossible to do that. There are claims, but the capacities of AI systems so far are so limited in terms of function. I think that “theory of mind” is so problematic. They deteriorate what it means to be human, and narrow what it means to be our species. I think algorithmic learning is representational… I may be wrong though… If we can… There are political issues. Why make machines that are one to one with human beings… Maybe to be slaves, to do dirty work. If they can think inferentially, should they not have ethical rights? As Spinozists we have a responsibility to think about those ethical issues.

Q4) You use the word robot, and that term is being used for something very embodied and physical… But algorithmic agency is much less embodied and much less visible – you mentioned the stock market – so how does that fit in?

A4) In a way robots are a novelty, a way to demonstrate that. A chatbot is also a robot; robot covers a lot of automated processes. One of the things that came out of AI at one point was that AI couldn't learn without bodies… That for deep learning there needs to be some sort of bodily engagement, to make bodily mistakes. But what we see in encounters with Roxy and others is that they become very much better… As humans we stretch to engage with these robots… We take an answer for an answer, not just an algorithm, and that might change how we learn.

Q4) So the robot is a point of engaging for machine learning… A provocation.

A4) I think roboticists see this as being an easy way to make this happen. But everything happens so quickly… Chips in bodies etc. But can also have robots moving in space, engaging with chips.

Q5) Is there something here about artificial life, rather than artificial intelligence – that the robot provokes that…

A5) That is what a lot of roboticists work at, trying to create artificial life… There is a lot of work we haven't seen yet. They are working on learning algorithms in computer programming now that evolve with the process, a form of artificial life. They hope to create robots that, if they malfunction, can self-repair so that the next generation is better. What we asked at a conference in Prague recently, with roboticists, was "what do you mean by better?" and they simply couldn't answer that, which was really interesting… I do think they are working on artificial life as well. And maybe there are too few connections between those of us in education and those who create these things.

Q6) I was approached by robotics folks about teaching robots to learn drawing with charcoal, largely because the robotic hand had enough sensitivity to do something quite complex – to teach charcoal drawing and representation… The teacher gesticulates, uses metaphor, describes things… I teach drawing and representational drawing… There is no right answer there, which is tough for robotics… What is the equivalent cyborg/dual space in learning? Drawing tools are cyborg-esque in terms of digital and drawing tools… But also that idea of culture… You can manipulate tools, awareness of function and then the hack, and the complexity of that hack… I suppose lots of things were ringing true but I couldn't quite stick them on to what I'm trying to get at…

A6) Some of this is maybe tied to Schuman Enhancement Theory – the idea of a perfect cyborg drawing?

Q6) No, they were interested in improving computer learning, and language, but for me… The idea of human creativity and hacking… You could pack a robot with the history of art, and representation, so much information… Could do a lot… But is that better art? Or better design? A conversation we have to have!

A6) I tend to look at the dark side of the coin in a way… Not because I am a techno-determinist… I do love gadgets, technology enhances our life, we can be playful… But in the capitalocene… there is much more focus on this. The creative side of technology is what many people are working on… Fantastic things are coming up, crossovers in art… New things can be created… What I see in nursing and teaching/learning contexts is how people avoid engaging… So lifting robots are here, but nursing staff aren't trained properly and they avoid them… Creativity goes many ways… I'm seeing this from quite a particular position, and that is partly a position of warning. These technologies may be creative and they may then make us less and less creative… That's a question we have to ask. Physicists, who have to be creative, are always so tied to the materiality, the machines and technologies in their working environments. I've also seen some of these drawing programmes… It is amazing what you can draw with these tools… But you need purpose, awareness of what those changes mean… Tools are never innocent. We have to analyse what tools are doing to us.

Jul 052016
 

This afternoon I’m at UCL for the “If you give a historian code: Adventures in Digital Humanities” seminar from Jean Bauer of Center for Digital Humanities at Princeton University, who is being hosted by Melissa Terras of the UCL Centre for Digital HumanitiesI’ll be liveblogging so, as usual, any corrections and additions are very much welcomed. 

Melissa is introducing Jean, who is in London en route to DH 2016 in Kraków next week. Over to Jean:

I’m delighted to be here with all of the wonderful work Melissa has been doing here. I’m going to talk a bit about how I got into digital humanities, but also about how scholars in library and information sciences, and scholars in other areas of the humanities might find these approaches useful.

So, this image (American Commissioners of the Preliminary Peace Negotiations with Great Britain. By Benjamin West, London, England; 1783 (begun). Oil on canvas. (Unframed) Height: 28 ½” (72.3 cm); Width: 36 ¼” (92.7 cm). 1957.856) is by Benjamin West, of the Treaty of Paris, 1783. This is the era that I research and what I am interested in. In particular I am interested in John Adams, the first minister from the United States to Great Britain – he even gets one line in Hamilton: the musical. He's really interesting as he was very concerned with getting thinking and processes on paper. And I'm interested in the work he did with Europe, where there hadn't really been American foreign consuls before. And he was also working on areas of North America, making changes that locked the British out of particular trading blocks through adjustments brought about by that peace treaty – and I might add that this is a weird time to give this talk in England!

Now, the foreign service at this time kind of lost contact once they reached Europe and left the US. So the correspondence is really important and useful to understand these changes. There are only 12 diplomats in Europe from 1775-1788, but that grows and grows with consuls and diplomats increasing steadily. And most of those consuls are unpaid as the US had no money to support them. When people talk about the diplomats of this time they tend to focus on future presidents etc. and I was interested in this much wider group of consuls and diplomats. So I had a dataset of letters, sent to John Jay, as he was negotiating the treaty. To use that I needed to put this into some sort of data structure – so, this is it. And this is essentially the world of 1820 as expressed in code. So we have locations, residences, assignments, letters, people, etc. Within that data structure we have letters – sent to or from individuals, to or from locations, they have dates assigned to them. And there are linkages here. Databases don’t handle fuzzy dates well, and I don’t want invalid dates, so I have a Boolean logic here. And also a process for handling enclosures – right now that’s letters but people did enclose books, shoes, statuettes – all sorts of things! And when you look at locations these connect to “in states” and states and location information… This data set occurs within the Napoleonic wars so none of the boundaries are stable in these times so the same location shifts in meaning/state depending on the date.
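To make the shape of that data structure a bit more concrete, here is a minimal, illustrative sketch of what such a schema could look like as Django models (the system is described later in the talk as running on Django). The model and field names here are my own assumptions for illustration only; they are not the actual Project Quincy schema, which is documented via eafsd.org and GitHub.

```python
from django.db import models

class Location(models.Model):
    name = models.CharField(max_length=200)
    latitude = models.FloatField(null=True, blank=True)
    longitude = models.FloatField(null=True, blank=True)

class State(models.Model):
    # In this sketch a "state" is just a named grouping of locations; a fuller
    # version would date the membership, since borders shift through the
    # Napoleonic Wars and the same place changes state depending on the date.
    name = models.CharField(max_length=200)
    locations = models.ManyToManyField(Location, related_name="states")

class Person(models.Model):
    name = models.CharField(max_length=200)

class Letter(models.Model):
    sender = models.ForeignKey(Person, related_name="letters_sent", on_delete=models.CASCADE)
    recipient = models.ForeignKey(Person, related_name="letters_received", on_delete=models.CASCADE)
    origin = models.ForeignKey(Location, related_name="letters_sent_from",
                               null=True, blank=True, on_delete=models.SET_NULL)
    destination = models.ForeignKey(Location, related_name="letters_sent_to",
                                    null=True, blank=True, on_delete=models.SET_NULL)
    # Databases reject invalid dates, so a fuzzy date can be stored as a valid
    # date plus Boolean flags recording which parts are actually known.
    date_sent = models.DateField()
    year_known = models.BooleanField(default=True)
    month_known = models.BooleanField(default=True)
    day_known = models.BooleanField(default=True)
    # Enclosures: letters enclosing other letters; books, shoes and statuettes
    # would need a more general "enclosed object" model.
    enclosures = models.ManyToManyField("self", symmetrical=False, blank=True,
                                        related_name="enclosed_in")
```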

So, John Jay has all this correspondence between May 27 and Nov 19, 1794, going from Europe to North America, and between the West Indies and North America. Many of these are reporting on trouble. The West Indies letters are about ship seizures… And there are debts to Britain… And none of these issues get resolved in that treaty. Instead John Jay and Lord Grenville set up a series of committees – and this is the historical precedent for mediation. Which is why I was keen to understand what information John Jay had available. None of this correspondence got to him early enough in time. There wasn't information there to resolve the issues, but enough to understand them. But there were delays for safety, for practical issues – the State Department was 6 people at this time – but the information was being collected in Philadelphia. So you have a centre collecting data from across the continent, but not able to push it out quickly enough…

And if you look at the people in these letters you see John Jay, and you see Edmund Jennings Randolph mentioned most regularly. So, I have this elaborate database (The Early American Foreign Service Database – EAFSD) and lots of ways to visualise this… which enables us to see connections, linkages, and places where different comparisons highlight different areas of interest. And this is one of the reasons I got into the Digital Humanities. There are all these papers – usually for famous historical men – and they get digitised, also the enclosures… in a single file(!). Parsing that with a partial typescript, you start to see patterns. You see not summaries of information being shared, not aggregation and analysis, but the letters being bundled up and sent off – like a repeater note. So, building up all of this stuff… Letters are objects, they have relationships to each other, they move across space and time. You look at the papers of John Adams, or of any political leader, and they are just in order of date sent… requiring us to flip back and forth. Databases and networks allow us to follow those conversations, to understand new orders in which to read those letters.
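As a small sketch of that idea (my own illustration, not from the talk), treating letters as objects with relationships lets you read a correspondence as a network and follow a conversation thread, rather than flipping through a flat date-sorted edition. The letters below are invented examples; it requires networkx.

```python
import networkx as nx

# Hypothetical letters: (sender, recipient, date sent) – illustrative only.
letters = [
    ("Edmund Randolph", "John Jay", "1794-05-27"),
    ("John Jay", "Lord Grenville", "1794-06-15"),
    ("Lord Grenville", "John Jay", "1794-06-20"),
    ("A consul in Bordeaux", "John Jay", "1794-07-02"),
]

G = nx.MultiDiGraph()
for sender, recipient, date in letters:
    G.add_edge(sender, recipient, date=date)

# Who receives most often – a question of network structure, not date order.
print(sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True))

# Pull out one conversation thread (Jay <-> Grenville) and read it in sequence.
thread = [(u, v, d["date"]) for u, v, d in G.edges(data=True)
          if {u, v} == {"John Jay", "Lord Grenville"}]
for sender, recipient, date in sorted(thread, key=lambda t: t[2]):
    print(date, sender, "->", recipient)
```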

Now, I had a background in code before I was a graduate student. What I do now at Princeton (as Associate Director of the Center for Digital Humanities) is to work with librarians and students to build new projects. We use a lot of relational databases, and network analysis… And that means a student like one I have at the moment can have a fully described, fully structured data set on a Vagrant machine that she can engage with, query, analyse, and convey to her examiners etc. Now this student was an Excel junkie, but approaching the data as a database allows us to structure the data, to think about information, the nature of sources and citation practices, and also to get major demographic data on her group and the things she's working on.
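Purely as an illustration of the kind of demographic summary a structured dataset makes trivial (the table, columns and rows below are made up, not the student's data), a few lines of SQL against SQLite do what would be fiddly to keep consistent in a spreadsheet:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (
        id INTEGER PRIMARY KEY,
        name TEXT,
        occupation TEXT,
        birth_year INTEGER
    );
    INSERT INTO people (name, occupation, birth_year) VALUES
        ('Person A', 'merchant', 1750),
        ('Person B', 'merchant', 1762),
        ('Person C', 'printer',  1741);
""")

# Demographic breakdown of the group, straight from the structured data.
for occupation, count in conn.execute(
        "SELECT occupation, COUNT(*) FROM people GROUP BY occupation ORDER BY COUNT(*) DESC"):
    print(occupation, count)
```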

Another thing we do at Princeton is to work with libraries and with catalogue data – thinking about data in MARC, MODS, or METS records, and thinking about the extraction and reformatting of that data to query and rethink it. And we work with librarians on information retrieval, and how that could be translated to research – book history perhaps. Princeton University Library bought the personal library of philosopher Jacques Derrida – close to 19,000 volumes (it was thought to be about 15,000 until they were unpacked) – so two projects are happening simultaneously. One is at the Center for Digital Humanities, looking at how Derrida marked up the texts he then went on to use and cite in Of Grammatology. The other is with BibFrame – a Linked Open Data standard for library catalogues – and they are looking at books sent to Derrida, with dedications to him. Now there won't be much overlap between those projects just now – Of Grammatology was his first book, so most of the books dedicated or gifted to him came later. But we are building our databases for both projects as Linked Open Data, all being added a book at a time, so the hope is that we'll be able to look at the relationships between the books that he owned and the way that he was using and being gifted items. And this is an experiment to explore those connections, and to expose them via the library catalogue… But the library wants to catalogue all works, not just those with research interest. And it can be hard to connect research work, with depth and challenge, back to the catalogue, but that's what we are trying to do. And we want to be able to encourage more use of and access to the works, without the library having to stand behind the work or analyse the work of a particular scholar.
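For a flavour of what "building both databases as Linked Open Data, a book at a time" can look like, here is a rough sketch using rdflib. This is my own illustration, not the project's actual data or vocabulary choices: the local namespace, property names and identifiers are invented, and the BibFrame namespace URI should be checked against the current specification.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")   # BibFrame vocabulary (verify current URI)
EX = Namespace("http://example.org/derrida-library/")       # hypothetical local namespace

g = Graph()
g.bind("bf", BF)
g.bind("ex", EX)

book = EX["item/0001"]
g.add((book, RDF.type, BF.Item))
g.add((book, BF.heldBy, Literal("Princeton University Library")))
g.add((book, EX.accessionId, Literal("ACC-0001")))          # shared key both projects could link on
g.add((book, EX.dedicatedTo, EX["person/derrida"]))         # placeholder URI, not a real identifier

print(g.serialize(format="turtle"))
```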

So, you can take a data structure like this, then set up your system with appropriate constraints and affordances – these need to be thought about as they will shape what you can and will do with your data later on. Continents have particular locations, boundaries, shape files. But you can't mark out the boundaries for empires and states. The Western boundary at this time is a very contested thing indeed. In my system states are merely groups of locations, so that I can follow mercantile power, and think from a political viewpoint. But I wanted a tool with broader use, hence that other data. Locations seem very safe and neutral but they really are not; they are complex and disputed. Now for that reason I wanted this tool – Project Quincy – to have others using it, but that hasn't happened yet… because this was very much created for my research and research question… It's my own little Mind Palace for my needs… But I have heard from a researcher looking to catalogue those letters, and that would be very useful. Systems like this can have interesting afterlives, even if they don't have the uptake we want Open Source Digital Humanities tools to have. The biggest impact of this project has been that I have the schema online. Some people do use the American Foreign Correspondents database – I am one of the few places you can find this information, especially about consuls. But that schema being shared online has been helping others to make their own systems… In that sense the more open documentation we can do, the better all of our projects could be.

I also created those diagrams that you were seeing – with DAVILA, a programme that allows you to create easy-to-read, easy-to-follow, annotated, colour-coded visuals. They are prettier than most database diagrams. I hope that when documentation is appealing and more transparent, it will get used more… That additional step helps people understand what you've made available for them… And you can use documentation to help teach someone how to make a project. So when my student was creating her schema, it was an example I could share or reference. Having something more designed was very helpful.

Q&A

Q1) Can you say more about the Derrida project and that holy grail of hanging that other stuff on the catalogue record?

A1) So the BibFrame schema is not as flexible as you'd like – it's based on MARC – but it's Linked Open Data, it can be expressed in RDF or JSON… And that lets us link records up. And we are working in the same library so we can link up on people, locations, maybe also major terms, and on the accession ID number too. We haven't tried it yet but…

Q1) And how do you make the distinction between authoritative record and other data.

A1) Jill Benson's (?) team are creating authoritative Linked Open Data records for all of the catalogue. And we are creating Linked Open Data; we'll put it in a relational database with an API and an endpoint to query to generate that data. Once we have something we'll look at offering a Triple Store on an ongoing basis. So, basically it is two independent data structures growing side by side with an awareness of each other. You can connect via API but we are also hoping for a demo of the Derrida library in BibFrame in the next year or two. At least a couple of the books there will be annotated, so you can see data from under the catalogue.
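As a hedged sketch of how two independently grown Linked Open Data sets could be joined on a shared key such as the accession ID (my illustration only: the file names, properties and prefix below are invented, and this is not the project's actual pipeline):

```python
from rdflib import Graph

# Hypothetical exports from the two systems (file names and properties are assumptions).
catalogue = Graph().parse("catalogue.ttl", format="turtle")
annotations = Graph().parse("annotations.ttl", format="turtle")
merged = catalogue + annotations   # rdflib graphs support union with +

results = merged.query("""
    PREFIX ex: <http://example.org/derrida-library/>
    SELECT ?book ?annotation WHERE {
        ?book ex:accessionId ?id .
        ?note ex:aboutAccessionId ?id ;
              ex:annotationText ?annotation .
    }
""")
for row in results:
    print(row.book, row.annotation)
```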

Q1) What about the commentary or research outputs from that…

A1) So, once we have our data, we’ll make a link to the catalogue and pull in from the researcher system. The link back to the catalogue is the harder bit.

Q2) I had a suggestion for a geographic system you might be interested in called Pelagios… And I don’t know if you could feed into that – it maps historical locations, fictional locations etc.

A2) There is a historical location atlas held by the Newberry (?) so there are shapefiles. Last I looked at Pelagios it was concerned more with the ancient world.

Comment) Latest iteration of funding takes it to Medieval and Arabic… It’s getting closer to your period.

A2) One thing that I really like about Pelagios is that they have split locations from their names, which accommodates multiple names, multiple imaginings and understandings etc. It's a really neat data model. My model is more of a hack – so in mine "London" is a point at the centre of modern London… That doesn't make much sense for London, but I do similar for Paris, where it probably makes more sense. So you could go in deeper… There was a time when I was really interested in where all of Jay's London correspondents were… That was what put me into thinking about network analysis… 60 letters are within London alone. I thought about disambiguating it more… But I was more interested in the people. So I went down a Royal Mail in London 1794 rabbit hole… And that was interesting, thinking about letters as a unit of information… Diplomatic notes fix conversations into a piece of paper you can refer to later – capturing the information and decisions. They go back and forth… So the ways letters came and went across London – sometimes several per day, sometimes over a week within the city… is really interesting… London was and is extremely complicated.

Q3) I was going to ask about different letters. Those letters in London sound more like memos than a letter. But the others being sent are more precarious, at more time delay… My background is classics so there you tend to see a single letter – and you’d commission someone like Cicero to write a letter to you to stick up somewhere – but these letters are part of a conversation… So what is the difference in these transatlantic letters?

A3) There are lots of letters. I treat letters capaciously… If there is a "to" or "from" it's in. So there are diplomatic notes between John Jay and George Hammond – a minister, not an ambassador, as the US didn't warrant that. Hammond was bad at his job – he saw a war coming and therefore didn't see value in negotiating. They exchange notes, forward conversations back and forth. My data set for my research was all the letters sent to Jay, not those sent by Jay. I wanted to see what information Jay had available. Hammond kept a copy of all his letters to Jay, as evidence for very petty disputes. The letters from the West Indies were from Nathaniel Cabot Dickinson, who was sent as an information collector for the US government. Jay was sent to Europe on the treaty…. So the kick-off for Jay's treaty is the changes that see food supplies to the British West Indies being stopped. Hammond actually couldn't find a ship to take evidence against admiralty courts… They had to go through Philadelphia, then through London. So that cluster of letters includes older letters. Letters from the coast include complaints from angry American consuls…. There are urgent cries for help from the US. There is every possible genre… One of the things I love about American history is that Jay needs all the information he can get. When you map letters – like the Republic of Letters project at Stanford – you have this issue of someone writing to their tailor, not just important political texts. But for diplomats all information matters… Now you could say that a letter to a tailor is important, but you could also say you are looking to map the boundaries of intellectual history here… Now in my system I map duplicates sent transatlantically, as those really matter – not all arrived, etc. I don't map duplicates within London, as that isn't as notable and is more about after-the-fact archiving.

Q4) Did John Jay keep diaries that put this correspondence in context?

A4) He did keep diaries… I do have analysis of how John Quincy Adams wrote letters in his time. He created subject headings, he analysed them, he recreated a filing system and way of managing his letters – he’d docket his letters, noting date received. He was like a human database… Hence naming my database after him.

Q5) There are a couple of different types of use for a tool like this. There is your use and then there is reuse of the engineering. I have correspondence earlier than Jay's, mainly centred on London… Could I download the system and input my own letters?

A5) Yes, if you go to eafsd.org you’ll find more information there and you can try out the system. The database is Project Quincy and that’s on GitHub (GPL 3.0) and you can fire it up in Django. It comes with a nice interface. And do get in touch and I’ll update you on the system etc. It runs in the Django framework, can use any database underneath it. And there may be a smaller tractable letter database running underneath it.

Comment) On BibFrame… We have a Library and Information Studies programme in which we teach BibFrame. We set up a project with a teaching tool which is also on GitHub – it's linked from my staff page.

A quick note as follow up:

If you have research software that you have created for your work, and which you are making available under open source license, then I would recommend looking at some of the dedicated metajournals that will help you raise awareness of your project and ensure it is well documented for others to reuse. I would particularly recommend the Journal of Open Research Software (which, for full disclosure, I sit on the Editorial Advisory Board for), or the Journal of Open Source Software (as recommended by the lovely Daniel S. Katz in response to my post).

 

Jun 292016
 

Today I am at the Flood and Coastal Erosion Risk Management Network (FCERM.net) 2016 Annual Assembly in Newcastle. The event brings together a really wide range of stakeholders engaged in flood risk management. I'm here to talk about crowd sourcing and citizen science, with both COBWEB and University of Edinburgh CSCS Network member hats on, as the event is focusing on future approaches to managing flood risk and of course citizen science offers some really interesting potential here. 

I’m going to be liveblogging today but as the core flooding focus of the day is not my usual subject area I particularly welcome any corrections, additions, etc. 

The first section of the day is set up as: Future-Thinking in Flood Risk Management:

Welcome by Prof Garry Pender

Welcome to our third and final meeting of this programme of network meetings. Back at our first Assembly meeting we talked about projects we could do together, and we are pleased to say that two proposals are in the process of submission. For today's Assembly we will be looking to the future and future thinking about flood risk management. There is a lot in the day, but we also encourage you to discuss ideas and form your own breakout groups if you want.

And now onto our first speaker. Unfortunately Prof Hayley Fowler, Professor of Climate Change Impacts, Newcastle University cannot be with us today. But Chris Kilby has stepped in for Hayley.

Chris Kilby, Newcastle University – What can we expect with climate change? 

Today is 29th June, which means that four years ago today we had the "Toon Monsoon" – around 50mm of rain in 2 hours, and the whole city was in lockdown. We've seen some incidents like this in the last year, in London, and people are asking about whether that is climate change. And that incident has certainly changed thinking and practice in the flood risk management community. It's certainly changed my practice – I'm now working with sewer systems, which is not something I ever imagined.

Despite the millions of pounds spent on computer models – the so-called GCMs – these models seem increasingly hard to trust as the academic community realises how difficult predicting flood risk actually is. It is near impossible to predict future rainfall – this whole area is riven with great uncertainty. There is a lot of good data and thinking behind them, but I now have far more concern about the usefulness of these models than 20 years ago – and that's despite the fact that these models are a lot better than they were.

So, the climate is changing. We see some clear trends both locally and globally. A lot of these we can be confident of. Temperature rises and sea level rise we have great confidence in those trends. Rainfall seasonality change (more in winter, less in summer), and “heavy rainfall” in the UK at least, has been fairly well predicted. What has been less clear is the extremes of rainfall (convective), and extremes of rainfall like the Toon Monsoon. Those extremes are the hardest to predict, model, reproduce.

The so-called UKCP09 projections, from 2009, are still there and are still the projections being used, but a lot has changed with the models we use and the predictions we are making. We haven't put out any new projections – although that was originally the idea when the UKCP09 projections came out. So, officially, we are still using UKCP09. That produced coherent indications of more frequent and heavy rainfall in the UK, and UKCP09 suggests a 15-20% increase in Rmed in winter. But these projections are based on daily rainfall; what was not indicated here was the change in hourly rates. Some of the models point to decreased summer rainfall, which implies a lower mean rainfall per hour, but actually that isn't looking clear anymore. So there are clear gaps here: firstly with hourly-level convective storms, and secondly, when it comes to "conveyor belt" sequences of storms, it's not clear that any of the climate models reliably reproduce them.

So, this is all bad news so far… But there is some good news. More recent models (CMIP5) suggest some more summer storms and accommodate some convective summer storms. And those newer models – CMIP5 and those that follow – will feed into the new projections. And some more good news… The models used in UKCP09, even the high resolution regional models, ran on a resolution of 25km and were downscaled using a weather generator to 5km, but with no climate change information below the 25km scale. Therefore within the 25km grid box the rainfall is averaged and doesn't adequately resolve movement of air and clouds, adding a layer of uncertainty, as computers aren't big/fast enough to do a proper job of resolving individual cloud systems. But Hayley, and colleagues at the Met Office, have been running higher resolution climate models, similar to weather forecasting models, at something like a 1.5km grid size. Doing that with climate data and projecting over the long term does seem to resolve the convective storms. That's good in terms of new information. Changes look quite substantial: summer precipitation intensities are expected to increase by 30-40% for short duration heavy events. That's significantly higher than UKCP09, but there are limitations/caveats here too… So far the simulations are for the South East of England only, and the simulations have been around 10 years in duration, but we'd really want more like a 100 year model. And there is still poor understanding of the process and of what makes a thunderstorm – thermodynamic versus circulation changes may conflict. Local thermodynamics are important, but that issue of circulation, the impacts of large outbreaks of warm air from across the continent, and the conflict between those processes are far from clear in terms of what makes the difference.

So, Hayley has been working on this with the Met Office, and she now has an EU project with colleagues in the Netherlands which is producing interesting initial results. There is a lot still to do but it does look like a larger increase in convection than we’d previously thought. Looking at winter storms we’ve seen an increase over the last few years. Even the UKCP09 models predicted some of this but so far we don’t see a complete change attributable to climate change.

Now, is any of this new? Our working experience and instrumental records tend to only go back 30-40 years, and that’s not long enough to understand climate change. So this is a quick note of historical work which has been taking place looking at Newcastle flooding history. Trawling through the records we see that the Toon Monsoon isn’t unheard of – we’ve had them three or four times in the last century:

  • 16th Sept 1913 – 2.85 inches (72mm) in 90 minutes
  • 22nd June 1941 – 1.97 inches (50mm) in 35 minutes and 3.74 inches (95mm) in 85 minutes
  • 28th June 2012 – 50mm in 90 minutes

So, these look like incidents every 40 years or so looking at historic records. That's very different from the FEH type models and how they account for fluvial flooding, storms, etc.
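As a quick aside (my own arithmetic, not from the talk), converting the quoted storm totals above into average intensities in mm/hr makes the comparison between events a bit easier to see:

```python
# Average intensities for the historic Newcastle storms quoted above.
events = {
    "16th Sept 1913": (72, 90),   # (rainfall in mm, duration in minutes)
    "22nd June 1941": (95, 85),   # using the 85-minute figure quoted above
    "28th June 2012": (50, 90),
}
for name, (depth_mm, duration_min) in events.items():
    intensity = depth_mm / (duration_min / 60)   # mm per hour
    print(f"{name}: {depth_mm} mm in {duration_min} min ≈ {intensity:.0f} mm/hr")
```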

In summary then, climate models produce inherently uncertain predictions, and major issues remain with extremes in general, and with hourly rainfall extremes in particular. The overall picture that is emerging is of increasing winter rainfall (intensity and frequency), potential for increased (summer) convective rainfall, and in any case there is evidence that climate variability over the last century has included similar extremes to those observed in the last decade.

And the work that Hayley and colleagues are doing is generating some really interesting results, so do watch this space for forthcoming papers etc.

Q&A

Q1) Is that historical data work just on Newcastle?

A1) David has looked at Newcastle and some parts of Scotland. Others are looking at other areas though.

Q2) Last week in London on EU Referendum day saw extreme rainfall – not as major as here in 2012 – but that caused major impacts in terms of travel, moving of polling station etc. So what else is taking place in terms of work to understand these events and impacts?

A2) OK, so impacts-wise that's a bit different. And a point of clarification – the "Toon Monsoon" wasn't really a monsoon (it just rhymes with Toon). Now the rainfall in London and Brighton being reported looked to be 40mm in an hour, which would be similar to or greater than in Newcastle, so I wouldn't downplay it. The impact of these events on cities particularly is significant. In the same way that we've seen an increase in fluvial flooding in the last ten years, maybe we are also seeing an increase in these more intense, shorter duration events. London is certainly very vulnerable – especially with underground systems. Newcastle Central here was closed because of water ingress at the front – probably wouldn't happen now as modifications have been made – and metro lines shut. Even the flooding event in Paris a few weeks back most severely impacted the underground rail/metro, roads and even the Louvre. I do worry that city planners have built in vulnerability for just this sort of event.

Q3) I work in flood risk management for Dumfries and Galloway – we were one of the areas experiencing very high rainfall. We rely heavily on models, rainfall predictions etc. But we had an event on 26th/27th January that wasn't predicted at all – traffic washed off the road, instrument peaks were broken, evacuations were planned. SEPA and the Met Office are looking at this, but there is a gap here in handling this type of extreme rainfall on saturated ground.

A3) I’m not aware of that event, more so with flooding on 26th December which caused flooding here in Newcastle and more widespread. But that event does sound like the issue for the whole of that month for the whole country. It wasn’t just extreme amounts of daily rainfall, but it was the fact that the previous month had also been very wet. That combination of several months of heavy rainfall, followed by extreme (if not record breaking on their own) events really is an issue – it’s the soul of hydrology. And that really hasn’t been recognised to date. The storm event statistics tend to be the focus rather than storms and the antecedent conditions. But this comes down to flood managers having their own rules to deal with this. In the North East this issue has arisen with the River Tyne where the potential for repurposing rivers for flood water retention has been looked at – but you need 30 day predictions to be able to do that. And if this extreme event following a long period of rain really changes that and poses challenges.

Comment – Andy, Environment Agency) Just to note that EA DEFRA Wales have a programme to look at how we extend FEH but also looking at Paleo Geomorphology to extend that work. And some interesting results already.

Phil Younge, Environment Agency – The Future of Flood Risk Management

My role is as Yorkshire Major Incident Recovery Manager, and that involves three things: repairing damage; investing in at-risk communities; and engaging with those communities. I was brought in to do this because of another extreme weather event, and I’ll be talking about the sorts of things we need to do to address these types of challenges.

So, quickly, a bit of background on the Environment Agency. We are the national flood risk agency for England. And we have a broad remit including risks such as the implications of nuclear power stations, management of catchment areas, work with other flood risk agencies etc. And we directly look after 7,100 km of river, coastal and tidal raised defences; 22,600 defences, with assets worth over £20 billion. There are lots of interventions we can make to reduce the risk to communities. But how do we engage with communities to make them more resilient to whatever the weather may throw at them? Pause on that thought and I'll return to it shortly.

So I want to briefly talk about the winter storms of 2015-16. The Foss Barrier in York is what is shown in this image – and what happened there made national news in terms of the impact on central York. The water levels were unprecedentedly high. And this event was across the North of England, with record river levels across the region, and we are talking probably 1 metre higher than we had experienced before since records began. So the "what if" scenarios are really being triggered here. Some of the defences built as a result of events in 2009 were significantly overtopped, so we have to rethink what we plan for in the future. So we had record rainfall, and 14 catchments experienced their highest ever river flow. But the investment we had put in made a difference: we protected over 20,000 properties during storms Desmond and Eva – even though some of those defences had been overtopped in 2009. We saw 15k households and 2,600 businesses flooded in 2009, with 150 communities visited by flood support officers. We issued 92 flood warnings – and we only do that when there is a genuine risk of loss of life. We had military support, temporary barriers in place, etc. for this event, but the levels were truly unprecedented.

Significant damage was done to our flood defences across the North of England. In parts of Cumbria the speed and impact of the water – the force and energy of that water – did huge damage to roads and buildings. We have done substantial work to repair those defences to the condition they were in before the rain. We are spending around £24 million to do that, and to do it at speed, for October 2016.

But what do we do about this? Within UK PLC how do we forecast and manage the impact and consequences of flooding across the country? Following the flooding in Cumbria, Oliver Letwin set up the Flood Risk Resilience Review, to build upon the plans the Environment Agency and the Government already have, and to look at what must be done differently to support communities across the whole of England. The Review has been working hard across the last four months, and there are four strands I want to share:

  • Modelling extreme weather and stress testing resilience to flood risk – What do we plan for? What is a realistic and scalable scenario to plan for? Looking back at that Yorkshire flooding, how does that compare to existing understanding of risk. Reflecting on likely extreme consequences as a yardstick for extreme scenarios.
  • Assessing the resilience of critical local infrastructure – How do we ensure that businesses still run, that we can run as a community. For instance in Leeds on Boxing Day our telecommunications were impacted by flooding. So how can we address that? How do we ensure water supply and treatment is fit for these events? How can we ensure our hospitals and health provision is appropriate? How can we ensure our transport infrastructure is up and running. As an aside the Leeds Boxing Day floods happened on a non working day – the Leeds rail station is the second busiest outside London so if that had happened on a working day the impact could have been quite different, much more severe.
  • Temporary defences – how can we move things around the country to reduce risk as needed, things like barriers and pumps. How do we move those? How do we assess when they are needed? How do we ensure we had the experience and skills to use those temporary defences? A review by the military has been wrapped into this Resilience Review.
  • Flood risk in core cities – London is being used as a benchmark, but we are also looking at cities like Leeds and how we invest to keep these core key cities operating at times of heightened flood risk.

So, we are looking at these areas, but also how we can help our communities to be more resilient. The Environment Agency are looking at community engagement and that’s particularly part of what we are here to do, to develop and work with the wider FCERM community.

We do have an investment programme from 2015-2021 which includes substantial capital investment. We are investing significantly in the North of England (e.g. £54 per person for everyone in Yorkshire, Lancashire, and Cumbria, also the East Midlands and Northumbria). And that long planning window is letting us be strategic, to invest based on evidence of need. And in the Budget 2016 there was an additional £700 million for flood risk management to better protect 4,745 homes and 1,700 businesses. There will also be specific injections of investment in places like Leeds, York, Carlisle etc. to ensure we can cope with incidents like we had last year.

One thing that really came out of last year was the issue of pace. As a community we are used to thinking slowly before acting, but there is a lot of pressure from communities and from Government to act fast, to get programmes of work underway within 12 months of flooding incidents. Is that fast? Not if you live in an affected area, but it’s faster than we may be used to. That’s where the wealth of knowledge and experience needs to be available to make the right decisions quickly. We have to work together to do this.

And we need to look at innovation… So we have created "Mr Nosy", a camera to put down culverts (?) to inspect them. We used to (and do) have teams of people with breathing apparatus etc. to do this, but we can put Mr Nosy down so that a team of two can inspect quickly. That saves time and money, and we need more innovations that allow us to do this.

The Pitt Review (2008) looked at climate change and future flood and coastal risk management and discussed the challenges. There are many techniques to better defend a community, and we need the right blend of approaches: "flood risk cannot be managed by building ever bigger "hard" defences"; natural measures are sustainable; there are multiple benefits for people, properties and wildlife; a multi-agency approach is the way forward. Community engagement is also crucial to inform the community, to understand the scale of the risk, to understand how to live with risk in a positive way. So, this community enables us to work with research, we need that community engagement, and we need efficiency – that big government investment needs to be well spent, we need to work quickly and to shortcut to answers quickly, but those have to be the right answers. And this community is well placed to help us ensure that we are doing the right things, so that we can assure the communities, and assure the government, that we are doing the right things.

Q&A

Q1) When is that Review due to report?

A1) Currently scheduled for middle of July, but thereabouts.

Q2) You mentioned the dredging of watercourses… On the back of major floods we seem to have dredging, then more again 6 weeks later. For the public there is a perception that that will reduce flood risk, which is really the wrong message. And there are places that will continue to flood – maybe we have to move coastal towns back? You can't just keep building walls that are bigger and bigger.

A2) Dredging depends so much on the circumstances. In Calderdale we are making a model so that people can understand what impact different measures have. Dredging helps but it isn't the only thing. We have complex hydro-dynamic models, but how do we simply communicate how water levels are influenced, the ways we influence the river channel? Getting that message across will help us make changes with community understanding. In terms of adaptation I think you are spot on. Some communities will probably adapt because of that, but we can't just build higher and higher walls. I am keen that flood risk is part of the vision for a community, and how that can be managed. Historically in the North East cities turned their backs on the river; as water quality has improved that has changed, which is great but brings its own challenges.

Q3) You mentioned a model, is that a physical model?

A3) Yes, a physical model to communicate that. We do go out and dredge where it is useful, but in many cases it is not which means we have to explain that when communities think it is the answer to flooding. Physical models are useful, apps are good… But how do we get across some of the challenges we face in river engineering.

Q4) You talked about community engagement but can you say more about what type of engagement that is?

A4) We go out into the communities, listen to the experiences and concerns, gathering evidence, understanding what that flooding means for them. Working with the local authorities, those areas are now producing plans. So we had an event in Calderdale marking six months since the flood, discussing plans etc. But we won't please all the people all of the time, so we need to get engagement across the community. And we need that pace – which means bringing the community along, listening to them, bringing them into our plans… That is challenging but it is the right thing to do. At the end of the day they are the people living there, who need to be reassured about how we manage risk and deliver appropriate solutions.

The next section of the day looks at: Research into Practice – Lessons from Industry:

David Wilkes – Global Flood Resilience, Arup – Engineering Future Cities, Blue-Green Infrastructure

This is a bit of an amalgam of some of the work from the Blue-Green Cities EPSRC programme, which I was on the advisory board of, and some of our own work at Arup.

Right now 50% of the global population live in cities – over 3.2 billion people. As we look forward, by the middle of this century (2050) we are expecting growth so that around 70% of the world population will live in cities, so 6.3 billion.

We were asked a while ago to give some evidence to the Third Inquiry of the All Party Parliamentary Group for Excellence in the Built Environment into flood mitigation and resilience, and we wanted to give some clear recommendations: (1) spatial planning is the key to long term resilience; (2) implement a programme of improved surface water flood hazard mapping; (3) nurture capacity within the professional community to ensure quality work in understanding flood risk takes place, with a need to provide career paths as part of that nurturing.

We were called into New York to give some support after Hurricane Sandy. They didn’t want a major reaction, a big change, instead they wanted a bottom up resilient approach, cross cutting areas including transportation, energy, land use, insurance and infrastructure finance. We proposed an iterative cycle around: redundancy; flexibility; safe failure; rapid rebound; constant learning. This is a quantum shift from our approach in the last 100 years so that learning is a crucial part of the process.

So, what is a Blue-Green city? Well if we look at the January 2014 rainfall anomaly map we see the shift from average annual rainfall. We saw huge flooding scarily close to London at that time, across the South East of England. Looking at the December 2015 we see that rainfall anomaly map again showing huge shift from the average, again heavily in the South East, but also South West and North of England. So, what do we do about that? Dredging may be part of this… But we need to be building with flood risk in mind, thinking laterally about what we do. And this is where the Blue-Green city idea comes in. There are many levels to this: Understand water cycle at catchment scale; Align with other drivers and development needs; identify partners, people who might help you achieve things, and what their priorities are; build a shared case for investment and action; check how it is working and learn from experience.

Looking, for instance, at Hull we see a city long challenged by flooding. It is a low lying city so to understand what could be done to reduce risk we needed to take a multi faceted view across the long term: looking at frequency/likelihood of risk, understand what is possible, looking at how changes and developments can also feed into local development. We have a few approaches available… There is the urban model, of drainage from concrete into underground drainage – the Blue – and the green model of absorbing surface water and managing it through green interventions.

In the Blue-Green Cities research approach you need to work with the City Authority and community communications; you need to model existing Flood Risk Management; understand citizens' behaviour; and you need to really make a clear business case for interventions. And as part of that process you need to overcome barriers to innovation – things like community expectations and changes, hazards, etc. In Newcastle, which volunteered to be a Blue-Green city research area, we formed the Newcastle Learning and Action Alliance to build a common shared understanding of what would be effective, acceptable, and practical. We really needed to understand citizens' behaviours – local people are the local experts and you need to tap into that and respect that. People really value Blue-Green assets, but only if they understand how they work and the difference that they make. And indeed people offered to maintain Blue-Green assets – to remove litter etc. – but again, only if they value and understand their purpose. And the community really need to feel a sense of ownership to make Blue-Green solutions work.

It is also really important to have modelling, to show that, to support your business case. Options include hard and soft defences. The Brunton Park flood alleviation scheme included landscape proposals, which provided a really clear business case. Ofwat wanted investment from the energy sector, they knew the costs of conventional sewerage, and actually this alternative approach is good value, and significantly cheaper – as both sewer and flood solution – than the previous siloed approach. There are also Grey-Green options – landscaping to store water for quite straightforward purposes, or more imaginative ones, and the water can be used for irrigation, wash down, etc. Again, building the business case is absolutely essential.

In the Blue-Green Cities research we were able to quantify direct and indirect costs to different stakeholders – primary industry, manufacturing, petroleum and chemical, utilities sector, construction, wholesale and retail, transport, hotels and restaurants, info and communication, financial and professional, other services. When you can quantify those costs you really have a strong case for the importance of interventions that reduce risk, that manage water appropriately. That matters whether spending tax payers money or convincing commercial partners to contribute to costs.

Commission of Inquiry into flood resilience of the future: “Living with Water” (2015), from the All Party Group for Excellence in the Built Environment, House of Commons, talk about “what is required is a fundamental change in how we view flood management…”

Q&A

Q1) I wanted to ask about how much green we would have to restore to make a difference? And I wanted to ask about the idea of local communities as the experts in their area but that can be problematic…

A1) I wouldn’t want to put a figure on the green space, you need to push the boundaries to make a real difference. But even small interventions can be significant. If the Blue-Green asset interrupts the flood path, that can be hugely significant. In terms of the costs of maintaining Blue-Green assets, well… I have a number of pet projects and ideas and I think that things like parks and city bike ways, and to have a flood overflow that also encourages the community to use it, will clearly be costlier than flat tarmac. But you can get Sustrans, local businesses, etc. to support that infrastructure and, if you get it right, that supports a better community. Softer, greener interventions require more maintenance but that can give back to the community all year round, and there are ways to do that. You made another point about local people being the experts. Local people do know about their own locality. Arguably as seasoned professionals we also know quite a bit. The key thing is to not be patronising, not to pretend you haven’t listened, but to build concensus, to avoid head to head dispute, to work with them.

Stephen Garvin, Director Global Resilience Centre, BRE – Adapting to change – multiple events and FRM

I will be talking about the built environment, and adaptations of buildings for flood resilience. I think this afternoon's workshops can develop some of these areas a bit. I thought it would be good to reflect on recent flooding, and the difficulty of addressing these situations. The nature of flooding can vary so greatly in terms of the type and nature of floods. For instance the 2007 floods were very different from the 2012 flooding and from the 2014 floods in terms of areas affected, the nature of the flood, etc. And then we saw the 2015/16 storms – the first time that every area at risk of flooding in Scotland and the North of the UK flooded – usually not all areas are hit at once.

In terms of the impact, water damage is a major factor. So, for instance in Cumbria 2015, we had record rainfall, over-topped defences on the Rivers Eden and Petteril, and water depths of 1.5m in some properties in Carlisle. That depth of flooding was very striking. A lot of terraced properties, with underfloor voids, were affected in Carlisle. And water was coming in from every direction. We can't always keep water from coming in, so in some ways the challenge is getting water out of the properties. How do we deal with it? Some of these properties had had flood resilience measures before – such as raising the height of electrical sockets – but they were not necessarily high enough or useful enough in light of the high water. And properties change hands, are rented to new tenants, extensions are added – the awareness isn't consistently there and some changes increase vulnerability to flooding.

For instance, in one property less severe floods in 2005 had led to flood prevention measures being put in place – door surrounds, airbrick covers – and despite those measures water inundated the property. Why? Well, there had been a conservatory added which, despite best efforts to seal it, let in a great deal of water. They had also added an outdoor socket for a tumble dryer a few feet off the ground. So we have to think about these measures – are they appropriate? Do they manage the risk sufficiently? How do we handle the flood memory? You can have a flood resilient kitchen installed, but what happens when it is replaced?

There are two approaches really: flood resilience essentially allows the water to come in, but the building and its materials are able to recover from flooding; by comparison flood resistance is about keeping water out, dry-proof materials etc. And there are two dimensions here, as we have to have a technical approach – design, construction, flood defences, sustainable approaches to drainage – and non-technical approaches – policy, regulation, decision making and engagement, etc. There are challenges here – construction firms are actually very small companies on the whole: more than 5 people is a big company. And we see insurers who are good at swinging into action after floods, but they do not always consider resilience or resistance that will have a long term impact, so we are working to encourage that approach, that idea of not replacing like for like but replacing with better, more flood resilient or resistant options. For instance there are solutions for apertures that are designed to keep water out to high depths – strong PVC doors, reinforced, and multi-point lockable for instance. In Germany, in Hamburg, they have doors like this (though with perforated brick work several feet higher!). You can also change materials, change designs of e.g. power sockets, service entries, etc.

Professor Eric Nehlsen came up with the idea of cascading flood compartments with adaptive response, starting from adaptation to flooding with dry and wet-proofing (where we tend to work) through to more ambitious ideas like adaptation by floating and amphibious housing… Some US coastal communities take the approach of raising properties off the ground, or creating floating construction, particularly where hurricanes occur, but that doesn't feel like the right solution in many cases here… But we have to understand and consider alternative approaches.

There are standards for flood repair – supported by BRE and industry – and there are six standards that fit into this area, which outline approaches to flood risk assessment, planning for FRR, property surveys, design and specification of flood resilient repair, construction work, and maintenance and operation (some require maintenance over time). I'm going to use those standards for an FRR demonstration. We have offices in Watford in a Victorian terrace, a 30 square metre space where we can test cases – we have done this for energy efficiency before, and have now done it for flooding. This gives us a space to show what can be achieved, what interventions can be made, to help insurers, construction, and policy makers see the possibilities. The age of the building means it is a simple construction – concrete floor and brick walls – so nothing fancy here. You can imagine some tests of materials, but there are no standards for construction products for repair and new builds for flood resistance and resilience. It is still challenging to drive adoption though – essentially we have to disrupt normal business and practice to see that change to resistant or resilient building materials.

Q&A

Q1) One of the challenges for construction is that insurance issue of replacing “like by like”…

A1) It is a major challenge. Insurance is renewed every year, and often online rather than by brokers. We are seeing some insurers introducing resilience and resistance but not wide-scale yet. Flood resilience grants through ECLG for Local Authorities and individuals are helpful, but no guarantee of that continuing. And otherwise we need to make the case to the property owner but that raises issues of affordability, cost, accessibility. So, a good question really.

Jaap Flikweert – Flood and Coastal Management Advisor, Royal HaskoningDHV – Resilience and adaptation: coastal management for the future

I’m going to give a practitioners perspective on ways of responding to climate change. I will particularly talk about adaptation which tends to be across three different areas/meanings: Protection (reduce likelihood); Resilience (reduce consequence); and Adaptation, which I’m going to bluntly call “Relocation” (move receptors away). And I’ll talk about inland flooding, coastal flooding and coastal erosion.. But I tend not to talk as much on coastal erosion as if we focus only on risk we can miss the opportunities. But I will be talking about risk – and I’ll be highlighting some areas for research as I talk.

So, how do we do our planning – our city planning – to manage the risk? I think the UK – England and Wales especially – is in the lead here in terms of Shoreline Management Plans: they take a long term and broad scale view, there is a policy for coastal defence (HtL (Hold the Line) / MR (Managed Realignment) / NAI (No Active Intervention)), and there is strong interaction with other sectors. Scotland is making progress here too. But there is a challenge in being flexible, in thinking about the process of change.

Setting future plans can be challenging – there is a great deal of uncertainty in terms of climate change, in terms of finances. We used to talk about a precautionary approach but I think we need to talk about “Managed-adaptive” approaches with decision pathways. For instance The Thames Barrier is an example of this sort of approach. This isn’t necessarily new work, there is a lot of good research to date about how to do this but it’s much more about mainstreaming that understanding and approach.

When we think about protection we need to think about how we sustain defences in a future with climate change. We will see loading increase (but the extent is uncertain); value at risk will increase (but the extent is uncertain); and there will be coastal squeeze and longshore impacts. We will see our beaches disappear – with both environmental and flood risk implications. An example from the Netherlands shows HtL is feasible and affordable up to about 6m of sea level rise with sandy solutions (which also deal with coastal squeeze), and that radical innovation is of vital importance.

We can’t translate that to the UK, it is a different context, but we need to see it as inspirational. In the UK we won’t hold the line for ever… So how do we deal with that? We can think about the structures, and I think there is a research opportunity here about how we justify buying time for adaptation, how we design for short life (~20 years), and how we develop adaptable solutions. We can’t Hold the Line forever, but some communities are not ready for that change so we have to work on what we can achieve and how.

In terms of Resilience we need to think about coastal flooding – in principle not different from inland flooding, designing to minimise impact, but in practice that is more difficult with lower chance/higher consequence events, raising the challenges of less awareness and more catastrophic impacts if it does happen. New Orleans would be a pertinent example here. And when we see raised buildings – as David mentioned – those aren’t always suitable for the UK; they change how a community looks, which may not be acceptable… Coastal erosion raises its own challenges too.

When we think of Adaptation/Relocation we have to acknowledge that protection is always technically possible, but what if it is unaffordable or unsustainable? For example, a disaster in Grantham, Queensland saw a major flood event in January 2011 lead to protective measures, and then the whole community moving inland in December 2011. There wasn’t a delay on funding etc. as this was an emergency; it forced the issue. But how can we do that in a planned way? We have Coastal Change Pathfinders. This approach is very valuable, including actual relocation, awareness and engagement lessons, and policy innovation. But the approach is very difficult to mainstream because of funding, awareness, planning policy, and local authority capacity. And here too I see research opportunities around making the business case for adaptation/relocation.

To take an example here that a colleague is working on: Fairbourne, Gwynedd, on the west coast of Wales, is a small community with a few buildings from the 1890s which has grown to 400 properties and over 800 people. Coastal defences were improved in 1981, and again in 2012. But this is a community which really shouldn’t be in that location in the long term; they are in the middle of a flood plain. The Parish Council has adopted an SMP policy which has goals across different timings: in the short term, Hold the Line; medium term, Managed Realignment; and long term, No Active Intervention. There is a need to plan now to look at how we move from one position to another… So it isn’t dissemination that is needed here, it is true communication and engagement with the community, identifying who that community is to ensure that is effective.

So, in closing I think there is research needed around design for short life; consultation and engagement – about useful work done, lessons learned, moving from informing to involving to ownership, defining what a community is; Making the business case for supporting adaptation/relocation – investment in temporary protection to buy time; investment in increasing communities’ adaptive capacity; value of being prepared vs unprepared – damage (to the nation) such as lack of mobility, employability, burden on health and social services. And I’d like to close with the question: should we consider relocation for some inland areas at risk of flooding?

Q&A

Q1) That closing question… I was driving to a settlement in our area which has major flood risk and is frequently cut off by snow in the winter. There are few jobs there, it is not strategically key, although it has a heritage value perhaps. We could be throwing good money after bad to protect a small settlement like that which makes a minimal contribution. So I would agree that we should look at relocation of some inland properties. Also, kudos to the parish council of Fairbourne for adopting that plan. We face real challenges as politicians are elected on 5 year terms, and getting them persuaded that they need to get communities to understand the long term risks and impacts is really challenging.

A1) I think no-one would claim that Fairbourne was an easy process. The Council adopted the SMP but who goes to parish meetings? But BBC Wales picked it up, rather misreported the timelines, but that raised interest hugely. But it’s worth noting that a big part of Gwynedd and mid Wales faces these challenges. Understanding what we preserve, where investment goes… How do we live with the idea of people living below sea level. The Dutch manage that but in a very different way and it’s the full nation who are on board, very different in the UK.

Q2) What about adopting Dutch models for managing risk here?

A2) We’ve been looking at various ways that we can learn from Dutch approaches, and how that compares and translates to a UK context.

And now, in a change to plans, we are rejuggling the event to do some reflection on the network – led by Prof. Garry Pender – before lunch. We’ll return with 2 minute presentations after that. Garry is keen that all attending complete the event feedback forms on the network, the role of the network, resources and channels such as the website, webinars, events, etc. I am sure FCERM.net would also welcome comments and feedback by email from those from this community who are not able to attend today. 

Sharing Best Practice – Just 2-minutes – Mini presentations from delegates sharing output, experience and best practice

 

I wasn’t able to take many notes from this session, as I was presenting a 2 minute session from my COBWEB colleague Barry Evans (Aberystwyth University), on our co-design work and research associated with our collaboration with the Tal-y-bont Floodees in Mid-Wales. In fact various requirements to re-schedule the day meant that the afternoon was more interactive but also not really appropriate for real time notation so, from hereon, I’m summarising the day. 

At this point in the day we moved to the Parallel Breakout sessions on Tools for the Future. I am leading Workshop 1 on crowd sourcing so won’t be blogging them, but include their titles here for reference:

  • Workshop 1 – Crowd-Sourcing Data and Citizen Science – An exploration of tools used to source environmental data from the public led by Nicola Osborne CSCS Network with case studies from SEPA. Slides and resources from this session will be available online shortly.
  • Workshop 2 – Multi-event modelling for resilience in urban planning An introduction to tools for simulating multiple storm events with consideration of the impacts on planning in urban environments with case studies from BRE and Scottish Government
  • Workshop 3 – Building Resilient Communities Best-practice guidance on engaging with communities to build resilience, led by Dr Esther Carmen with case studies from the SESAME project

We finished the day with a session on Filling the Gaps – Future Projects:

Breakout time for discussion around future needs and projects

I joined a really interesting Community Engagement breakout session, considering research gaps and challenges. Unsurprisingly much of the discussion centred on what we mean by community and how we might go about identifying and building relationships with communities. In particular there was a focus on engaging with transient communities – thinking particularly about urban and commuter areas where there are frequent changes in the community. 

Final Thoughts from FCERM.net – Prof. Garry Pender 

As the afternoon was running behind Garry closed with thank yous to the speakers and contributors to the day. 

Jun 27 2016
 
This afternoon I’m at the eLearning@ed/LTW monthly Showcase and Network event, which this month focuses on Assessment and Feedback.
I am liveblogging these notes so, as usual, corrections and updates are welcomed. 
The wiki page for this event includes the agenda and will include any further notes etc.: https://www.wiki.ed.ac.uk/x/kc5uEg
Introduction and Updates, Robert Chmielewski (IS Learning, Teaching and Web)
Robert consults around the University on online assessment – and there is a lot of online assessment taking place. And this is an area that is supported by everybody. Students are interested in submitting and receiving feedback online, but we also have technologists who recognise the advantages of online assessment and feedback, and we have the University as a whole seeing the benefits around, e.g. clarity over meeting timelines for feedback. The last group here is the markers and they are more and more appreciative of the affordances of online assessment and feedback. So there are a lot of people who support this, but there are challenges too. So, today we have an event to share experiences across areas, across levels.
Before we kick off I wanted to welcome Celeste Houghton. Celeste: I am the new Head of Academic Development for Digital Education at the University, based at IAD, and I’m keen to meet people and to find out more about what is taking place. Do get in touch.
eSubmission and eFeedback in the College of Humanities and Social Science, Karen Howie (School of History, Classics & Archaeology)
This project started back in February 2015. The College of Humanities and Social Sciences wants 100% electronic submission/feedback where “pedagogically appropriate” by the 2016/17 academic year. Although I’m saying electronic submission/feedback, the in-between marking part hasn’t been prescribed. The project board for this work includes myself, Robert and many others, any of whom you are welcome to contact with any questions.
So, why do this? Well, there is a lot of student demand for various reasons – legibility of comments, printing costs, enabling remote submission. For staff the benefits are more debatable but they can include (as also reported by Jisc) increased efficiency and convenience. Benefits for the institution (again as reported by Jisc) include measuring feedback response rates, and efficiencies that free up time for student support.
Now some parts of CHSS are already doing this at the moment. Social and Political Studies are using an in-house system. Law are using Grademark. And other schools have been running pilots, most of them with GradeMark, and these have been mostly successful. But we’ve had lots of interesting conversations around these technologies, around quality of assessment, about health and safety implications of staring at a screen more.
We have been developing a workflow and process for the college but we want this to be flexible to schools’ profiles – so we’ve adopted a modular approach that allows for handling of groups/tutors; declaration of own work; checking for non-submitters; marking sheets and rubrics; moderation, etc. And we are planning for the next year ahead, working closely with the Technology Enhanced Learning group in HSS. We are having some training – for markers it’s a mixture of in-School training and IS training, with College input/support; and for administrators it’s by learning technologists in the school or through discussions with IS LTW EDE. To support that process we have screencasts and documentation currently in development. PebblePad isn’t part of this process yet, but will be.
To build confidence in the system we’re facing some myth busting etc. For instance, anonymity vs pastoral care issues – a receipt dropbox has been created; and we have an agreement with EUSA that we can deanonymise if identification is not provided. And we have also been looking at various other regulations etc. to ensure we are complying and/or interpreting them correctly.
So, those pilots have been running. We’ve found that depending on your processes the administration can be complex. Students have voiced concerns around “generic” feedback. Students were anxious – very anxious in some cases. It is much quicker for markers to get started with marking, as soon as the deadline has passed. But there are challenges too – including when networks go down; for instance there was an (unusual) DDOS attack during our pilots that impacted our timeline.
Feedback from students seems relatively good. 14 out of 36 felt the quality of marking was better than on paper – but 10 said it was less good. 29 out of 36 said feedback was more legible. 10 felt they had received more feedback than normal, 11 less. 3 out of 36 would rather submit on paper, 31 would rather submit online. In our first pilot with first year students around 10% didn’t look at feedback for their essay, and 36% didn’t look at tutorial feedback. In our second pilot about 10% didn’t look at feedback for either assignment.
Markers reported finding the electronic marking easier, but some felt that the need to work on screen was challenging or less pleasant than marking on paper.
Q&A
Q1) The students who commented on less or more feedback than normal – what were they comparing to?
A1) To paper-based marking, which they would have had for other courses. So when we surveyed them they would have had some paper-based and some electronic feedback already.
Q2) A comment about handwriting and typing – I read a paper that said that on average people write around 4 times more words when typing than when hand writing. And in our practice we’ve found that too.
A2) It may also be student perceptions – it looks like less but is actually quite a lot of work. I was interested in students’ expectations that 8 days was a long time to turn around feedback.
Q2) I think that students need to understand how much care has been taken, and that that adds to how long these things take.
Q3) You pointed out that people were having some problems and concerns – like health and safety. You are hoping for 100% take up, and there is also that backdrop of the Turnitin updates… Are there future plans that will help us to move to 100%?
A3) The health and safety thing came up again and again… But it’s maybe to do with how we cluster assignments. In terms of Turnitin there are updates but those emerge rather slowly – there is a bit more competition now, and some frustration across the UK, so it looks likely that there will be more positive developments.
Q4) It was interesting that idea that you can’t release some feedback until it is all ready… For us in the Business School we ended up releasing feedback when there was a delay.
A4) In our situation we had some marks ready in a few days, others not due for two weeks. A few days would be fair, a few weeks would be problematic. It’s an expectation management issue.
Comment) There is also a risk that if marking is incomplete or partially done it can cause students great distress…
Current assessment challenges, Dr. Neil Lent (Institute for Academic Development)
My focus is on assessment and feedback. Initially the expectation was that I’d be focused on how to do assessment and feedback “better”. And you can do that to an extent but… The main challenge we face is a cultural rather than a technical challenge. And I mean technical in the widest sense – technological, yes, but also technical in terms of process and approach. I also think we are talking about “cultures” rather than “culture” when we think about this.
So, why are we focussing on assessment and feedback? Well, we have low NSS scores, low league table position and poor student experience reported around this area. Also issues of (un)timely feedback, low utility, and the idea that we are a research-led university and the balance of that with learning and teaching. Some of these areas are more myth than reality. I think as a university we now have an unambiguous focus on teaching and learning but whether that has entirely permeated our organisational culture is perhaps arguable. When you have competing time demands it is hard to do things properly, and to find the space to actually design better assessment and feedback.
So how do we handle this? Well, if we look at the “Implementation Staircase” (Reynolds and Saunders 1987) we can see that it goes from senior management, then to colleges, to schools, to programmes, to courses, to students. Now you could go down that staircase or you can go back up… And that requires us to think about our relationships with students. Is this model dialogic? Maybe we need another model?
Activity theory (Engestrom 1999) is a model for a group like a programme team, or course cohort, etc. So we have a subject here – it’s all about the individual in the context of an object, the community, mediating tool, rules and conventions, division of labour. This is a classic activity theory idea, with modern cultural aspects included. So for us the subject might be the marker, the object the assignment, the mediating tool something like the technological tools or processes, rules and conventions may include the commitment to return marks within 2 weeks, division of labour could include colleagues and sharing of marking, community could be students. It’s just a way to conceptualise this stuff.
A cultural resolution would see culture as practice and discourse. Review and reflection need to be an embedded and internalised way of life. We have multiple stakeholders here – not always the teacher or the marker. And we need a bit of risk taking – but that’s scary. It can feel at odds with the need to perform at a high level, but risk taking is needed. And we need to share best practice and experience in events such as this.
So there are technical things we could do better, do right. But the challenge we face is more of a collective one. We need to create time and space for people to genuinely reflect on their teaching practice, to interact with that culture. But you don’t change practice overnight. And we have to think about our relationship with our students, thinking about how we encourage and enable them to be part of the process, building up their own picture of what good/bad work looks like. And then the subject, object and culture will be closer together. Sometimes real change comes from giving examples of what works, inspiring through those examples, etc. Technological tools can make life easier, if you have the time to understand them and how to make them work for you.
Q&A
Q1) Not sure if it’s a question or comment or thought… But I’m wondering what we take from those NSS scores, and if that’s what we should work to or if we should think about assessment and feedback in a different kind of paradigm.
A1) When we think about processes we can kid ourselves that this is all linear, it’s cause and effect. It isn’t that simple… The other thing about concentrating on giving feedback on time, so they can make use of it. But when it comes to the NSS it commodifies feedback, which challenges the idea of feedback as dialogic. There are cultural challenges for this. And I think that’s where risk, and the potential for interesting surprises come in…
Q2) As a parent of a teenager I now wonder about personal resilience, to be able to look at things differently, especially when they don’t feel confident to move forwards. I feel that for staff and students a problem can arise and they panic, and want things resolved for them. I think we have to move past that by giving staff and students the resilience so that they can cope with change.
A2) My PhD was pretty much on that. I think some of this comes from the idea of relatively safe risk taking… That’s another kind of risk taking. As a sector we have to think that through. Giving marks for everything risks everything not feeling like a safe space.
Q3) Do we not need to make learning the focus?
A3) Schools and universities push that grades, outcomes really matter when actually we would say “no, the learning is what matters”, but that’s hard in the wider context in which the certificate in the hand is valued.
Comment) Maybe we need that distinction that Simon Riley talked about at this year’s eLearning@ed conference, of distinguishing between the task and the assignment. So you can fail the task but succeed that assignment (in that case referring to SLICCs and the idea that the task is the experience, the assignment is writing about it whether it went well or poorly).
Not captured in full here: a discussion around the nature of electronic submission, and students’ concerns about failing at submitting their assignments or proof of learning… 
Assessment Literacy: technology as facilitator, Prof. Susan Rhind (Assistant Principal Assessment and Feedback)
I’m going to talk about assessment literacy, and about technology as a facilitator. I’m also going to talk about something I’m hoping you may be able to advise about.
So, what is assessment literacy? It is being talked about a lot in Higher Education at the moment. There is a book all about it (Price et al 2012) that talks about competencies and practices. For me what is most important is the idea of ensuring some practical aspects are in place: that students have an understanding of the nature, meaning and level of assessment standards, and that they have skills in self and peer assessment. The idea is to narrow the gap between students and teaching staff. Sadler (1989, 2010) and Boud and Molloy (2013) talk about students needing to understand the purpose of assessment and the process of assessment. It means understanding assessment as a central part of curriculum design (Medland 2016; Gibbs and Dunbar-Goddet 2009). We need assessment and feedback at the core, at the heart of our learning and teaching.
We also have to understand assessment in the context of quality of teaching and quality of assessment and feedback. For me there is a pyramid of quality (with programme at bottom, individual at top, course in the middle). When we talk about good quality feedback we have to conceptualise it, as Neil talked about, as a dialogic process. So there is individual feedback… But there is also course design and programme design in terms of assessment and feedback. No matter how good a marker is in giving feedback, it is much more effective when the programme design supports good quality feedback. In this model technology can be a facilitator. For instance I wanted to plug Fiona Hale’s Edinburgh Learning Design Roadmap (ELDeR) workshops and processes. This sort of approach lets us build for longer term improvement in these areas.
Again, thinking about feedback and assessment quality, and things that courses can do, we have a table here that compares different types of assessment, the minimum pre-assessment activity to ensure students have assessment literacy, and then enhancement examples – a minimum requirement for feedback and some exemplars for marking students’ work.
An example here would be work we’ve done at the Vet School around student use of Peerwise MCQs – here students pushed for use in 3rd year, and for revision at the end of the programme. By the way if you are interested in assessment literacy, or have experience to share, we now have a channel for Assessment and Feedback, and for Assessment Literacy on MediaHopper.
Coming back to those exemplars of students’ work… We run Learning to be an Examiner sessions which students can take part in, and which include the opportunity to mark exemplars of students’ work. That leads to conversations, and an exchange of opinions, to understand the reasons behind the marking. And I would add that any place we can bring students and teaching staff closer together only benefits us and our NSS scores. The themes coming out of this work were real empathy for staff, and a quelling of fears. Students also noted that as they took part, the better they understood the requirements, the less important feedback felt.
There have been some trials using ACJ (Adaptive Comparative Judgement), which is the idea that with enough samples of work you can use repeated comparisons to put work into an order or ranking. So you present staff with several assignments and they rank them by comparing pairs. We ran this as an experiment as it provides a chance for students to see others’ work and compare it to their own. In a survey after this experiment students said they valued seeing others’ responses, and also understanding others’ approaches to comparison and marking.
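To make the mechanism concrete, here is a minimal sketch (not the tool used in the trials described above, whose implementation wasn’t given) of how comparative judgement can turn a set of pairwise decisions into a ranking. It assumes a simple Bradley–Terry style model, which is one common way such systems are fitted; the script names and judgement data are invented purely for illustration.

```python
from collections import defaultdict

# Toy judgement data: each tuple is one decision, (winner, loser),
# from a judge comparing two pieces of work. Invented for illustration.
judgements = [
    ("script_A", "script_B"),
    ("script_A", "script_C"),
    ("script_B", "script_C"),
    ("script_C", "script_D"),
    ("script_B", "script_D"),
    ("script_A", "script_D"),
]

scripts = sorted({s for pair in judgements for s in pair})

# Count wins per script and how often each pair was compared.
wins = defaultdict(int)
pair_counts = defaultdict(int)
for winner, loser in judgements:
    wins[winner] += 1
    pair_counts[frozenset((winner, loser))] += 1

# Bradley-Terry model: P(i beats j) = p_i / (p_i + p_j).
# Fit the strengths p_i with a simple fixed-point iteration.
strength = {s: 1.0 for s in scripts}
for _ in range(100):
    new_strength = {}
    for i in scripts:
        denom = 0.0
        for j in scripts:
            if i == j:
                continue
            n_ij = pair_counts[frozenset((i, j))]
            if n_ij:
                denom += n_ij / (strength[i] + strength[j])
        new_strength[i] = wins[i] / denom if denom else strength[i]
    # Normalise so the strengths stay on a comparable scale.
    total = sum(new_strength.values())
    strength = {s: v / total for s, v in new_strength.items()}

# Higher strength = ranked higher by the judges' collective comparisons.
for s in sorted(scripts, key=strength.get, reverse=True):
    print(f"{s}: {strength[s]:.3f}")
```

The appeal of the approach is that no one ever has to assign an absolute mark: judges only make relative decisions, and the ranking emerges from many small comparisons.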
So, my final point here is a call for help… As we think about what excites and encourages students I would like to find a Peerwise like system for free text type questions. Student feedback was good, but they wanted to do that for a lot more questions than just those we were able to set. So I would like to take Peerwise away from the MCQ context so that students could see and comment and engage with each others work. And I think that anything that brings students and staff closer together in their understanding is important.
Q&A
Q1) How do we approach this in a practical way? We’ve asked students to look at exemplar essays but we bump into problems doing that. It’s easy to persuade those who wrote good essays and have moved to later years, but it’s hard to find those willing to share poorer ones.
A1) We were doing this with short questions, not long essays. Hazel Marzetti was encouraging sharing of essays and students were reluctant. I think there’s something around expectation management – creating the idea up front that work will be available for others… That one has to opt out rather than opt in. Or you can mock up essays but you lose that edge of it being the real thing.
Q2) On the idea of exemplars… How do we feel about getting students to do a piece of work, and then sharing that with others on, say, the same topic? You could pick a more tangential topic, but that risks being less relevant – a good essay is properly authentic… But for others there is a risk of copying potential.
A2) I think that it’s about understanding risk and context. We don’t use the idea of “model answers” but instead “outline answers”. Some students do make that connection… But they are probably those with a high degree of assessment literacy who will do well anyway.
Q3) By showing good work, showing a good range with similar scores, but also when you show students exemplars you don’t just give out the work, you annotate it, point out what makes it good, features that make it notable… A way to inspire students and help them develop assessment literacy when judging others’ work.
And with that our main presentations have drawn to a close with a thank you for all our lovely speakers and contributors.  We are concluding with an Open Discussion on technology in Assessment and Feedback.
Susan: Yeah, I’m quite a fan of mandatory activities but which do not carry a mark. But I’d seriously think about not assigning marks for all feedback activities… 
Comment: But the students can respond with “if it’s so important, why doesn’t this carry credit?”
Susan: Well you can make it count. For instance our vet students have to have a portfolio, and are expected to discuss that annually. That has been zero credits before (now 10 credits) but still mandatory. Having said that our students are not as focused on marking in that way.
Comment: I don’t want to be the “ah, but…” person here… But what if a student fails that mandatory non marked work? What’s the make-up task?
Susan: For us we are able to find a suitable bespoke negotiated exercise for the very few students this applies to…
Comment: What about equity?
Susan: I think removing the mark actually removes that baggage from the argument… Because the important thing here is doing the right tasks for the professional world. I think we should be discussing this more in the future.  
And with that Robert is drawing the event to a close. The next eLearning@ed/LTW monthly meet up is in July, on 27th July and will be focused on the programme for attaining the CMALT accreditation.  
Jun 15 2016
 

Today I’m at the University of Edinburgh Principal’s Teaching Award Scheme Forum 2016: Rethinking Learning and Teaching Together, an event that brings together teaching staff, learning technologists and education researchers to share experience and be inspired to try new things and to embed best practice in their teaching activities.

I’m here partly as my colleague Louise Connelly (Vet School, formerly of IAD) will be presenting our PTAS-funded Managing Your Digital Footprint project this afternoon. We’ll be reporting back on the research, on the campaign, and on upcoming Digital Footprint work including our forthcoming Digital Footprint MOOC (more information to follow) and our recently funded (again by PTAS) project: “A Live Pulse: YikYak for Understanding Teaching, Learning and Assessment at Edinburgh”.

As usual, this is a liveblog so corrections, comments, etc. welcome. 

Velda McCune, Deputy Director of the IAD who heads up the learning and teaching team, is introducing today:

Welcome, it’s great to see you all here today. Many of you will already know about the Principal’s Teaching Award Scheme. We have funding of around £100k from the Development fund every year, since 2007, in order to look at teaching and learning – changing behaviours, understanding how students learn, investigating new education tools and technologies. We are very lucky to have this funding available. We have had over 300 members of staff involved and, increasingly, we have students as partners in PTAS projects. If you haven’t already put a bid in we have rounds coming up in September and March. And we try to encourage people, and will give you feedback and support and you can resubmit after that too. We also have small PTAS grants as well for those who haven’t applied before and want to try it out.

I am very excited to welcome our opening keynote, Paul Ashwin of Lancaster University, to kick off what I think will be a really interesting day!

Why would going to university change anyone? The challenges of capturing the transformative power of undergraduate degrees in comparisons of quality  – Professor Paul Ashwin

What I’m going to talk about is this idea of undergraduate degrees being transformative, and, as we move towards greater analytics, how we might measure that. And whilst metrics are flawed, we can’t just ignore them. This presentation is heavily informed by Lee Shulman’s work on Pedagogical Content Knowledge, which always sees teaching in context, and in the context of particular students and settings.

People often talk about the transformative nature of what their students experience. David Watson was, for a long time, the President for the Society of Higher Education (?) and in his presidential lectures he would talk about the need to be as hard on ourselves as we would be on others, on policy makers, on decision makers… He said that if we are talking about education as educational, we have to ask ourselves how and why this transformation takes place; whether it is a planned transformation; whether higher education is a necessary and/or sufficient condition for such transformations; and whether all forms of higher education result in this transformation. We all think of transformation as important… But I haven’t really evidenced that view…

The Yerevan Communique (May 2015) talks about wanting to achieve, by 2020, a European Higher Education Area where there are common goals, where there is automatic recognition of qualifications, and where students and graduates can move easily through – what I would characterise as where Bologna begins. The Communique talks about higher education contributing effectively to building inclusive societies, founded on democratic values and human rights, where educational opportunities are part of European citizenship. And it ends in a statement that should be a “wow!” moment, valuing teaching and learning. But for me there is a tension: the comparability of undergraduate degrees is in conflict with the idea of the transformational potential of undergraduate degrees…

Now, critique is too easy, we have to suggest alternative ways to approach these things. We need to suggest alternatives, to explain the importance of transformation – if that’s what we value – and I’ll be talking a bit about what I think is important.

Working with colleagues at Bath and Nottingham I have been working on a project, the Pedagogic Quality and Inequality Project, looking at Sociology students and the idea of transformation at 2 top ranked (for sociology) and 2 bottom ranked (for sociology) universities, gathering data and information on the students’ experience and change. We found that league tables told you nothing about the actual quality of experience. We found that the transformational nature of undergraduate degrees lies in changes in students’ sense of self through their engagement with disciplinary knowledge: students relating their personal projects to their disciplines and the world, and seeing themselves implicated in knowledge. But it doesn’t always happen – it requires students to be intellectually engaged with their courses to be transformed by them.

To quote a student: “There is no destination with this discipline… There is always something further and there is no point where you can stop and say ‘I understood, I am a sociologist’… The thing is sociology makes you aware of every decision you make: how that would impact on my life and everything else…” And we found the students all reflecting that this idea of transformation was complex – there were gains but also losses. Now you could say that this is just the nature of sociology…

We looked at a range of disciplines, studies of them, and also how we would define that in several ways: the least inclusive account; the “watershed” account – the institutional type of view; and the most inclusive account. Mathematics has the richest studies in this area (Wood et al 2012), where the least inclusive account is “numbers”, the watershed is “models”, and the most inclusive is “approach to life”. Similarly Accountancy moves from routine work to moral work; Law from content to extension of self; Music from instrument to communicating; Geography from general world to interactions; Geoscience from composition of the earth to relations between earth and society. Clearly these are not all the same direction, but they are accents and flavours of the same thing. We are going to do a comparison next year on chemistry and chemical engineering, in the UK and South Africa, and actually this work points at what is particular to Higher Education being about engaging with a system of knowledge. Now, my colleague Monica McLean would ask why that’s limited to Higher Education, couldn’t it apply to all education? And that’s valid but I’m going to ignore it just for now!

Another student comments on transformations of all types, for example from wearing a tracksuit to lectures, to no longer presenting themselves this way. Now that has nothing to do with the curriculum, this is about other areas of life. This student almost dropped out but the Afro-Caribbean society supported and enabled her to continue and progress through her degree. I have worked in HE and FE and the way students talk about that transformation is pretty similar.

So, why would going to university change anyone? It’s about exposure to a system of knowledge changing your view of self, and of the world. Many years ago an academic asked what the point of going to university was, given that much information they learn will be out of date. And the counter argument there is that engagement with seeing different perspectives, to see the world as a sociologist, to see the world as a geographer, etc.

So, to come back to this tension around the comparability of undergraduate degrees, and the transformational potential of undergraduate degrees. If we are about transformation, how do we measure it? What are the metrics for this? I’m not suggesting those will particularly be helpful… But we can’t leave metrics to what is easy to gather, we have to also look at what is important.

So if we think of the first area of comparability we tend to use rankings. National and international higher education rankings are a dominant way of comparing institutions’ contributions to student success. All universities have a set of figures that show them well. They have huge power as they travel across a number of contexts and audiences – vice chancellors, students, departmental staff. They move context, they are portable and durable. It’s nonsense but the strength of these metrics is hard to combat. They tend to involve unrelated and incomparable measures. Their stability reinforces privilege – higher status institutions tend to enrol a much greater proportion of privileged students. You can have some unexpected outcomes but you have to have Oxford, Cambridge, Edinburgh, UCL and Imperial all near the top or your league table is rubbish… Because we already know they are the good universities… Or at least those rankings reinforce the privilege that already exists, the expectations that are set. They tell us nothing about transformation of students. But are skillful performances shaped by generic skills or by students’ understanding of a particular task and their interactions with other people and things?

Now the OECD has put together a ranking concept on graduate outcomes, AHELO, which uses tests for e.g. physics and engineering – not surprising choices as they have quite international consistency, and they are measurable. And they then look at generic tests – e.g. a deformed fish is found in a lake, and using various press releases and science reports you must write a memo for policy makers. Is that generic? In what way? Students doing these tests are volunteers, which may not be at all representative. Are the skills generic? Education is about applying a way of thinking in an unstructured space, in a space without context. Now, the students are given context in these tests so it’s not a generic test. But we must be careful about what we measure, as what we measure can become an index of quality or success, whether or not that is actually what we’d want to mark up as success. We have strategic students who want to know what counts… And that’s ok as long as the assessment is appropriately designed and set up… The same is true of measures of success and metrics of quality of teaching and learning. That is why I am concerned by AHELO, but it keeps coming back again…

Now, I have no issue with the legitimate need for comparison, but I also have a need to understand what comparisons represent, how they distort. Are there ways to take account of students’ transformation in higher education?

I’ve been working, with Rachel Sweetman at University of Oslo, on some key characteristics of valid metrics of teaching quality. For us reliability is much much more important than availability. So, we need ways to assess teaching quality that:

  • are measures of the quality of teaching offered by institutions rather than measures of institutional prestige (e.g. entry grades)
  • require improvements in teaching practices in order to improve performance on the measures
  • as a whole form a coherent set of metrics rather than a set of disparate measures
  • are based on established research evidence about high quality teaching and learning in higher education
  • reflect the purposes of higher education.

We have to be very aware of Goodhart’s law: we must be wary of any measure that becomes a performance indicator.

I am not someone with a big issue with the National Student Survey – it is grounded in the right things but the issue is that it is run each year, and the data is used in unhelpful distorted ways – rather than acknowledging and working on feedback it is distorting. Universities feel the need to label engagement as “feedback moments” as they assume a less good score means students just don’t understand when they have that feedback moment.

Now, in England we have the prospect of the Teaching Excellence Framework English White Paper and Technical Consultation. I don’t think it’s that bad as a prospect. It will include students’ views of teaching, assessment and academic support from the National Student Survey, non-completion rates, measures over three years, etc. It’s not bad. Some of these measures are about quality, and there is some coherence. But this work is not based on established research evidence… There was great work here at Edinburgh on students’ learning experiences in UK HE, and none of that work is reflected in the TEF. If you were being cynical you could think they have looked at available evidence and just selected the more robust metrics.

My big issue with Year 2 TEF metrics is how and why these metrics have been selected. You need a proper consultation on measures, rather than using the White Paper and Technical Consultation to do that. The Office for National Statistics looked at the measures and found them robust, but noted that the differences between institutions’ scores on the selected metrics tend to be small and not significant – not robust enough to inform future work, according to the ONS. It seems likely that peer review will end up being how we differentiate between institutions.

And there are real issues with TEF Future Metrics… This comes from a place of technical optimism, that if you just had the right measures you’d know… These measures tie learner information to tax records for the “Longitudinal Education Outcomes” data set, and include “teaching intensity”. Teaching intensity is essentially contact hours… that’s game-able… And how on earth is that about transformation? It’s not a useful measure of that. Unused office hours aren’t useful, optional seminars aren’t useful… Keith Trigwell told me about a lecturer he knew who lectured a subject where, each week, fewer and fewer students came along. The last three lectures had no students there… He still gave them… That’s contact hours that count on paper but aren’t useful. That sort of measure seems to come more from ministerial dinner parties than from evidence.

But there are things that do matter… There is no mechanism outlined for a sector-wide discussion of the development of future metrics. What about expert teaching? What about students’ relations to knowledge? What about the first year experience – we know that that is crucial for student outcomes? Now the measures may not be easy, but they matter. And what we also see is the Learning Gains project, where they decided to work generically, but that also means you don’t understand students’ particular engagement with knowledge. In generic tests the description of what you can do ends up more important than what you actually do. You are asking for claims about what they can do, rather than having them perform those things. You can see why it is attractive, but it’s meaningless, it’s not a good measure of what Higher Education can do.

So, to finish, I’ve tried to put teaching at the centre of what we do. Teaching is a local achievement – it always shifts according to who the students are, what the setting is, and what the knowledge is. But that also always makes it hard to capture and measure. So what you probably need is a lot of different imperfect measures that can be compared and understood as a whole. However, if we don’t try, we allow distorting measures, which reinforce inequalities, to dominate. Sometimes the only thing worse than not being listened to by policy makers is being listened to by them. That’s when we see a Frankenstein’s Monster emerge, and that’s why we need to recognise the issues, to ensure we are part of the debate. If we don’t try to develop alternative measures we leave it open to others to define.

Q&A

Q1) I thought that was really interesting. In your discussion of transformation of undergraduate students I was wondering how that relates to less traditional students, particularly mature students, even those who’ve taken a year out, where those transitions into adulthood are going to be in a different place and perhaps where critical thinking etc. skills may be more developed/different.

A1) One of the studies I talked about was at London Metropolitan University, which has a large percentage of mature students… And actually there the interactions with knowledge really did prove transformative… Often students lived at home with family, whether young or mature students. That transformation was very high. And it was unrelated to achievements. So some came in who had quite profound challenges and they had transformation there. But you have to be really careful about not suggesting different measures for different students… That’s dangerous… But that transformation was there. There is lots of research that’s out there… But how do we transform that into something that has purchase… recognising there will be flaws and compromises, but ensuring that voice is in the debate. That it isn’t politicians owning that debate, that transformations of students and the real meaning of education are part of that.

Q2) I found the idea of transformation that you started with really interesting. I work in African studies and we work a lot on decolonial issues, and of the need to transform academia to be more representative. And I was concerned about the idea of transformation as a decolonial type issue, of being like us, of dressing like that… As much as we want to challenge students we also need to take on and be aware of the biases inherent in our own ways of doing things as British or Global academics.

A2) I think that’s a really important question. My position is that students come into Higher Education for something. Students in South Africa – and I have several projects there – who have nowhere to live, have very little, who come into Higher Education to gain powerful knowledge. If we don’t have access to a body of knowledge, that we can help students gain access to and to gain further knowledge, then why are we there? Why would students waste time talking to me if I don’t have knowledge. The world exceeds our ability to know it, we have to simplify the world. What we offer undergraduates is powerful simplifications, to enable them to do things. That’s why they come to us and why they see value. They bring their own biographies, contexts, settings. The project I talked about is based in the work of Basil Bernstein who argues that the knowledge we produce in primary research… But when we design curriculum it isn’t that – we engage with colleagues, with peers, with industry… It is transformed, changed… And students also transform that knowledge, they relate it to their situation, to their own work. But we are only a valid part of that process if we have something to offer. And for us I would argue it’s the access to body of knowledge. I think if we only offer process, we are empty.

Q3) You talked about learning analytics, and the issues of AHELO, and the idea of if you see the analytics, you understand it all… And that concept not being true. But I would argue that when we look at teaching quality, and a focus on content and content giving, that positions us as gatekeepers and that is problematic.

A3) I don’t see knowledge as content. It is about ways of thinking… But it always has an object. One of the issues with the debate on teaching and learning in higher education is the loss of the idea of content and context. You don’t foreground the content, but you have to remember it is there, it is the vehicle through which students gain access to powerful ways of thinking.

Q4) I really enjoyed that and I think you may have answered my question.. But coming back to metrics you’ve very much stayed in the discipline-based silos and I just wondered how we can support students to move beyond those silos, how we measure that, and how to make that work.

A4) I’m more course than discipline focused. With the first year of TEF the idea of assessing quality across a whole institution is very problematic, it’s programme level we need to look at. inter-professional, interdisciplinary work is key… But one of the issues here is that it can be implied that that gives you more… I would argue that that gives you differently… It’s another new way of seeing things. But I am nervous of institutions, funders etc. who want to see interdisciplinary work as key. Sometimes it is the right approach, but it depends on the problem at hand. All approaches are limited and flawed, we need to find the one that works for a given context. So, I sort of agree but worry about the evangelical position that can be taken on interdisciplinary work which is often actually multidisciplinary in nature – working with others not genuinely working in an interdisciplinary way.

Q5) I think to date we focus on objective academic ideas of what is needed, without asking students what they need. You have also focused on the undergraduate sector, but how applicable is this to the postgraduate sector?

A5) I would entirely agree with your comment. That’s why pedagogic content matters so much. You have to understand your students first, as well as then also understanding this body of knowledge. It isn’t about being student-centered but understanding students and context and that body of knowledge. In terms of your question I think there is a lot of applicability for PGT. For PhD students things are very different – you don’t have a body of knowledge to share in the same way, that is much more about process. Our department is all PhD only and there process is central. That process is quite different at that level… It’s about contributing in an original way to that body of knowledge as its core purpose. That doesn’t mean students at other levels can’t contribute, it just isn’t the core purpose in the same way.

Parallel Sessions from PTAS projects: Social Media – Enhancing Teaching & Building Community? – Sara Dorman, Gareth James, Luke March

Gareth: It was mentioned earlier that there is a difference between the smaller and larger projects funded under this scheme – and this was one of the smaller projects. Our project was looking at whether we could use social media to enhance teaching and community in our programmes but in wider areas. And we particularly wanted to look at the use of Twitter and Facebook, to engage them in course material but also to strengthen relationships. So we decided to compare the use of Facebook used by Luke March in Russian Politics courses, with the use of Twitter and Facebook  in African Politics courses that Sara and I run.

So, why were we interested in this project? Social media is becoming a normal area of life for students, in academic practice and increasingly in teaching (Blair 2013; Graham 2014). Twitter increasingly used, Facebook well established. It isn’t clear what the lasting impact of social media would be but Twitter especially is heavily used by politicians, celebrities, by influential people in our fields. 2014 data shows 90% of 18-24 year olds regularly using social media. For lecturers social media can be an easy way to share a link as Twitter is a normal part of academic practice (e.g. the @EdinburghPIR channel is well used), keeping staff and students informed of events, discussion points, etc. Students have also expressed interest in more community, more engagement with the subject area. The NSS also shows some overall student dissatisfaction, particularly within politics. So social media may be a way to build community, but also to engage with the wider subject. And students have expressed preference for social media – such as Facebook groups – compared to formal spaces like Blackboard Learn discussion boards. So, for instance, we have a hashtag #APTD – the name of one of our courses – which staff and students can use to share and explore content, including (when you search through) articles, documents etc. shared since 2013.

So, what questions did we ask? Well we wanted to know:

  • Does social media facilitate student learning and enhance the learning experience?
  • Does social media enable students to stay informed?
  • Does it facilitate participation in debates?
  • Do they feel more included and valued as part of the suject area?
  • Is social media complementary to VLEs like Learn?
  • Which medium works best?
  • And what disadvantages might there be around using these tools?

We collected data through a short questionnaire about awareness, usage, usefulness. We designed just a few questions that were part of student evaluation forms. Students had quite a lot to say on these different areas.

So, our findings… Students all said they were aware of these tools. There was slightly higher levels of awareness among Facebook users, e.g. Russian Politics for both UG and PG students. Overall 80% said they were aware to some extent. When we looked at usage – meaning access of this space rather than necessarily meaningful engagement – we felt that usage of course materials on Twitter and Facebook does not equal engagement. Other studies have found students lurking more than posting/engaging directly. But, at least amongst our students (n=69), 70% used resources at least once. Daily usage was higher amongst Facebook users, i.e. Russian Politics. Twitter more than twice as likely to have never been used.

We asked students how useful they found these spaces. Facebook was seen as more useful than Twitter. 60% found Facebook “very” or “somewhat useful”. Only a third described Twitter as “somewhat useful” and none said “very useful”. But there were clear differences between UG and PG students. UG students were generally more positive than PG students. They noted that it was useful and interesting to keep up with news and events, but not always easy to tie that back to the curriculum. Students said it was “interesting” a lot – for instance comparing historical to current events. More mixed responses included that there was plenty of material on Learn, so they didn’t use FB or Twitter. Another commented that they wanted everything on Learn, in one place. One commented that they don’t use Twitter so didn’t want to follow the course there, and would prefer Facebook or Learn. Some commented that too many posts were shared – information overload. Students thought some articles were random, and couldn’t tell what was good and what was not.

A lot of these issues were also raised in focus group discussions. Students do appreciate sharing resources and staying informed, but don’t always see the connection to the course. They recognise potential for debate and discussion but often it doesn’t happen, but when it does they find it intimidating for that to be in a space with real academics and others, indeed they prefer discussion away from tutors and academics on the course too. Students found Facebook better for network building but also found social vs academic distinction difficult. Learn was seen as academic and safe, but also too clunky to navigate and engage in discussions. Students were concerned others might feel excluded. Some also commented that not liking or commenting could be hurtful to some. One student comments “it was kind of more like the icing than the cake” – which I think really sums it up.

Students commented that there was too much noise to pick through. And “I didn’t quite have the know-how to get something out of”. “I felt a bit intimidated and wasn’t sure if I should join in”. Others commented that they only use social media for social purposes – that it would be inappropriate to engage with academics there. Some saw Twitter as professional, Facebook as social.

So, some conclusions…

It seems that Facebook is more popular with students than Twitter, and seen as better for building community. There are some differences between UG and PG students, with UG more interested. Generally there was less enthusiasm than anticipated. Students were interested in and aware of the benefits of joining in discussions but also wary of commenting too much in “public”. This suggests that we need to “build community” in order for the “community building” tools to really work.

There is also an issue of lack of integration between FB, Twitter and Learn. Many of our findings reflect those of others, for instance Matt Graham in Dundee, who saw potential for HE humanities students. Facebook was more popular with their students than Twitter. He looked more at engagement and saw some students engaging more deeply with wider African knowledge. But one outcome was that student engagement did not occur or sustain without some structure – particular tasks and small nudges connected to Learning Outcomes, flagging clear benefits at the beginning, and students taking a lead in creating groups (which came out of our work too).

There are challenges here: inappropriate use, friending between staff and students, for instance. Alastair Blair notes in an article that the utility of Twitter, despite the challenges, cannot be ignored. For academics thinking about impact it is important, but for students it is also important for alignment with the wider subject area beyond the classroom.

Our findings suggest that there is no need to rush into social media. But at the same time Sara and I still see benefits for areas like African Studies, which is fast moving and poorly covered in the mainstream media. But the idea of students wanting to be engaged in the real world was clearly not carried through. Maybe more support and encouragement is needed for students – and maybe for staff too. And it would be quite interesting to see if and how students’ experiences of different politics and events – #indyref, #euref, etc. – differ. Colleagues are considering using social media in a course on the US presidential election, which might work out differently as students may be more confident discussing those events. The department has also moved forward with more social media presences for staff and students, and also alumni.

Closing words from Matt Graham: encouraging students to question and engage more broadly with their subject is a key skill.

Q&A

Q1) What sort of support was in place, or guidelines, around that personal/academic identity thing?

A1) Actually none. We didn’t really realise this would happen. We know students don’t always engage in Learn. We didn’t really fully appreciate how intimidating students really found this. I don’t think we felt the need to give guidelines…

A1 – SD) We kind of had those channels before the course… It was organic rather than pedagogic…

Q1) We spoke to students who wanted more guidance especially for use in teaching and learning.

A1 – SD) We did put Twitter on the Learn page… to follow up… Maybe as academics we are the worst people to understand what students would do… We thought they would engage…

Q1) Will you develop guidelines for other courses…

A1) And a clearer explanation might encourage students to engage a bit more… There could be utility in doing some of that. University/institution-wise there is cautious adoption and you see guidance issued for staff on using these things… But we wouldn’t want overbearing guidance there.

Q1) We have some guidance under CC licence that you can use, available from Digital Footprints space.

Q2) Could you have a safer, filtered space for students to engage? We do writing courses with international PG students and thought it might be useful to have social media available there… But maybe it will confuse them.

A2) There was a preference for a closed “safer” environment, talking only to students in their own cohort and class. I think Facebook is more suited to that sort of thing; Twitter is an open space. You can create a private Facebook group… One problem with Russian Politics was that they have a closed group… But it had previous cohorts and friends of staff in it…

A2 – SD) We were trying to include students in real academia… There are real tensions there over purpose and what students get out of it… The sense of not knowing… Some students might have security concerns, but I think it was insecurity in academic knowledge. They didn’t see themselves as co-producers. That needs addressing…

A2) Students being reluctant to engage isn’t new, but we thought we might have more engagement in social media. Now that was the negative side, but actually there were positive things here – that wider awareness, even if one directional.

Q3) I just wanted to ask more about the confidence to participate and those comments that suggested that was a bigger issue – not just in social media – for these students, similarly information seeking behaviour

A3) There is work taking place in SPS around study skills and approaching your studies. There might be some room to introduce this stuff earlier on in school-wide or subject-wide courses… Especially if we are to use these tools. I completely agree that by the end of these studies you should have these skills – how to write properly, how to look for information… The other thing that comes to mind, having heard our keynote this morning, is the issue of transformative process. It’s good to have high expectations of UG students, and they seem to rise to the occasion… But I think that we maybe need to understand the difference between UG and PG students… And in PG years they take that further.

A3 – SD) UG courses are really big – which may be part of the issue. PG courses are much smaller… Some students are from Africa and may know more, some come in knowing very little… That may also play in…

Q4) On the UG/PG thing these spaces move quickly! Which tools you use will change quickly. And actually the type of thing you post really matters – sharing a news article is great, but how you discuss and create follow up afterwards – did you see that, the follow up, the creation, the response…

A4 – SD) Students did sometimes interact… But the people who would have done that with email/Learn were the same that used social media in that way.

A4) Facebook and Twitter are still new technologies… So perhaps students will be increasingly engaged and informed, and up for engaging in these spaces. I’m still getting to grips with the etiquette of Twitter. There was more discussion on Facebook Groups than on Twitter… But it can also be very surface-level learning… It complements what we are doing but there are challenges to overcome… And we have to think about whether that is worthwhile. Some real positives and real challenges.

Parallel Sessions from PTAS projects: Managing Your Digital Footprint (Research Strand) – Dr Louise Connelly 

This was one of the larger PTAS-funded projects. It is called the “Research Strand” because it ran in parallel to the campaign, which was separately funded.

There is so much I could cover in this presentation so I’ve picked out some areas I think will be practical and applicable to your research. I’m going to start by explaining what we mean by “Digital Footprint” and then talk more about our approach and the impact of the work. Throughout the project and campaign we asked students for quotes and comments that we could share as part of the campaign – you’ll see these throughout the presentation but you can also use these yourself as they are all CC-BY.

The project wouldn’t have been possible without an amazing research team. I was PI for this project – based at IAD but I’m now at the Vet School. We also had Nicola Osborne (EDINA), Professor Sian Bayne (School of Education). We also had two research students – Phil Sheail in Semester 1 and Clare Sowton in Semester 2. But we also had a huge range of people across the Colleges and support services who were involved in the project.

So, I thought I’d show you a short video we made to introduce the project:

[Embedded YouTube video]

The idea of the video was to explain what we meant by a digital footprint. We clearly defined what we meant because what we wanted to emphasise to students and staff – though students were the focus – was that your footprint is not just what you do but also what other people post about you, or leave behind about you. That can be quite scary to some, so we wanted to address how you can have some control over that.

We ran a campaign with lots of resources and materials – you can find loads of these on the website. That campaign is now a service based in the Institute for Academic Development. But I will be focusing on the research in this presentation. This all fitted together in a strategy: the campaign was to raise awareness and provide practical guidance; the research sought to gain an in-depth understanding of students’ usage and produce resources for schools; and then to feed into learning and teaching on an ongoing basis. Key to the research was a survey we ran during the campaign, which was analysed by the research team.

In terms of the gap and scope of the campaign I’d like to take you back to the Number 8 bus… It was an idea that came out of myself and Nicola – and others – being asked regularly for advice and support. There was a real need here, but also a real digital skills gap. We also saw staff wanting to embed social media in the curriculum and needing support. The brainwave was that social media wasn’t the campaign that was needed; it was about digital footprint and the wider issues. We also wanted to connect to current research. boyd (2014), who works on networked teens, talks about the benefits as well as the risks… as it is unclear how students are engaging with social/digital media and how they are curating their online profiles. We also wanted to look at the idea of eprofessionalism (Chester et al 2013), particularly in courses where students are treated as paraprofessionals – a student nurse, for instance, could be struck off before graduating because of social media behaviours, so there is a very real need to support and raise awareness amongst students.

Our overall research aim was to: work with students across current delivery modes (UG, PGT, ODL, PhD) in order to better understand how they…

In terms of our research objectives we wanted: to conduct research which generates a rich understanding; to develop a workshop template – we ran 35 workshops for over 1000 students in that one year; to critically analyse social media guidelines – it was quite interesting that a lot of them were about why students shouldn’t engage, with little on the benefits; to work in partnership with EUSA – important to engage around e.g. campaign days; to contribute to the wider research agenda; and to effectively disseminate project findings. We engaged with support services – e.g. we worked with Careers on their LinkedIn workshops, which weren’t well attended despite students wanting help with professional presence, and just rebranding the sessions was valuable. We asked students where they would seek support – many said the Advice Place rather than e.g. IS, so we spoke to them. We spoke to the Counselling service too about cyberbullying, revenge porn, sexting etc.

So we ran two surveys with a total of 1,457 responses. Nicola and I ran two lab-based focus groups. I interviewed 6 individuals over a series of interviews with ethnographic tracing. And we did documentary analysis of e.g. social media guidelines. We used mixed methods as we wanted this to be really robust.

Sian and Adam really informed our research methods, but Nicola and I really led the publications around this work. We have had various publications and presentations, including presentations at the European Conference on Social Media and the Social Media for Higher Education Teaching and Learning conference. We are also working on a Twitter paper, with other papers coming. Workshops with staff and students have happened and are ongoing, and the Digital Ambassador award (Careers and IS) includes Digital Footprint as a strand. We also created a lot of CC-BY resources – e.g. guidelines and images. Those are available for UoE colleagues, but also for the national and international community who have fed into and helped us develop those resources.

I’m going to focus on some of the findings…

The survey was on Bristol Online Survey. It was sent to around a third of all students, across all cohorts. The central surveys team did the ethics approval and issuing of surveys. Timing had to fit around other surveys – e.g. NSS etc. And we had relatively similar cohorts in both surveys; the second had more responses, but that was after the campaign had been running for a while.

So, two key messages from the surveys: (1) Ensure informed consent – crucial for students (and also important for staff) – students need to understand the positive and negative implications of using these non-traditional, non-university social media spaces. In terms of what that means – well, guidance, some of the digital skills gap support, etc. Also (2) Don’t assume what students are using and how they are using it. Our data showed age differences in what was used, cohort differences (UG, PGT, ODL, PhD), lack of awareness of e.g. T&Cs, and benefits – some lovely anecdotal evidence, e.g. a UG informatics student approached by employers after sharing code on GitHub. Also the importance of not making assumptions around personal/educational/professional environments – this especially came out of the interviews – and generally the implications of Digital Footprint. One student commented on being made to have a Twitter account for a course and not being happy about not having a choice in that (tweets could have been embedded in Learn, for instance).

Thinking about platforms…

Facebook is used by all cohorts but ODL less so (perhaps a geographic issue in part). Most were using it as a “personal space” and for study groups. Challenges included privacy management. Also issues of isolation if not all students were on Facebook.

Twitter is used mainly by PGT and PhD students, and most actively by 31-50 year olds. Lots of talk about how to use this effectively.

One of the surprises for us was that we thought most courses using social media would have guidelines for its use in programme handbooks. But students reported them not being there, or not being aware of them. So we created example guidance which is on the website (CC-BY), and also an eprofessionalism guide (CC-BY) which you can also use in your own programme handbooks.

There were also tools we weren’t aware were in use, and that has led to a new YikYak research project which has just been funded by PTAS and will go ahead over the next year, with Sian Bayne leading, plus myself, Nicola and Informatics. The ethnographic tracing and interviews gave us a much richer understanding of the survey data.

So, what next? We have been working with researchers in Ireland, Australia, New Zealand… EDINA has had some funding to develop an external facing consultancy service, providing training and support for NHS, schools, etc. We have the PTAS funded YikYak project. We have the Digital Footprint MOOC coming in August. The survey will be issued again in October. Lots going on, more to come!

We’ve done a lot and we’ve had loads of support and collaboration. We are really open to that collaboration and work in partnership. We will be continuing this project into the next year. I realise this is the tip of the iceberg but it should be food for thought.

Q&A 

Q1) We were interested in the staff capabilities

A1 – LC) We have run a lot of workshops for staff and research students, including a series at the Vet School. There are digital skills, research, learning and teaching, and personal strands here.

A1 – NO) There were sessions and training for staff before… And much of the research into social media and digital footprint has involved very small cohorts in very specific areas.

Comment) I do sessions for academic staff in SPS, but I didn’t know about this project so I’ll certainly work that in.

A1 – LC) We did do a session for fourth year SPS students. I know business school are all over this as part of “Brand You”.

Q2) My background was in medicine, and when I was working in a hospital a scary colleague told junior doctors to delete their Facebook profiles! She was googling them. I also saw an article in the Sun that badly misrepresented doctors – portraying them as living the “high life” because of something sunny in their photos.

A2 – LC) You need to be aware people may Google you… And be confident in your privacy settings, and aware of your professional body’s guidelines about what you have there. But there are grey areas… We wanted to emphasise informed choice. You have the Right to be Forgotten law, for instance. Many nursing students already knew the restrictions but felt the Facebook restrictions were unfair… A recent article says there are 3.5 degrees of separation on Facebook – that can be risky… In teaching and learning this raises issues of who friends whom, what you report, etc. The culture is that we do use social media, and in many ways that’s positive.

A2 – NO) Medical bodies have very clear guidance… But just knowing that, for example, profile pictures are always public on Facebook while you can control settings elsewhere – knowing that means you can make informed decisions.

Q3) What is “Brand You”?

A3) Essentially it’s about thinking of yourself as a brand: how your presences are used, what is consistent, how you use your name and your profile images, and how you do that effectively if you choose to. There is a book called “Brand You” which is about effective online presence.

Closing Keynote : Helen Walker, GreyBox Consulting and Bright Tribe Trust

I’m doing my Masters in Digital Education with the University of Edinburgh, but my role is around edtech and technology in schools, so I am going to share some of that work with you. So, to set the scene, a wee video: Kids React to Technology: Old Computers:

[Embedded YouTube video]

Watching the kids try to turn on the machine it is clear that many of us are old enough to remember how to work late 1970s/early 1980s computers and their less than intuitive user experience.

So the gaps are maybe not that wide anymore… But there are still gaps. There are gaps, for instance, between what students experience at school and what they can do at home – and those can be huge. There is also a real gap between EdTech promises and delivery – there are many practitioners who are energised about new technologies and have high expectations. We also have to be aware of the reality of skills – and be very cautious of Prensky’s (2001) idea of the “digital native” – and how intoxicating and inaccurate that can be.

There is also a real gap between industry and education. There is so much investment in technology, and so many promises of technology. Meanwhile we also see the perspective of some that computers do not benefit pupils. Worse, in September 2015 the OECD reported – and it was widely re-reported – that computers do not improve pupil results, and may in fact be a disbenefit. That risks going back to before technology, or technology being the icing on the cake… And then you read the report:

“Technology can amplify great teaching but great technology cannot replace poor teaching.”

Well of course. Technology has to be pedagogically justified. And that report also encourages students as co-creators. Now if you go to big education technology shows like BETT and SETT you see very big rich technology companies offering expensive technology solutions to quite poor schools.

That reflects the Education Endowment Fund Report 2012, which found that “it’s the pedagogy, not technology” and that technology is a catalyst for change. Glynis Cousins says that technology has to work dynamically with pedagogy.

Now, you have fabulous physical and digital resources here. There is the issue of what schools have. Schools often have machines that are 9-10 years old, but students have much more sophisticated devices and equipment at home – even in poor homes. Their school experience of using old kit to type essays jars with that. And you do see schools trying to innovate with technology – iPads and such in particular… They bought them, they invested thousands… But they don’t always use them because the boring but crucial wifi and infrastructure isn’t there. It’s boring and expensive but it’s imperative. You need all of that in order to use these shiny things…

And with that… Helen guides us to gogopp.com and the web app to ask us why she has shown a monkey with its hand in a jar holding a coin… We all respond… The adage is that if you wanted to catch a monkey you put an orange or some nuts in a jar; the monkey grabs it and won’t let go, so a hunter can simply capture the monkey. I deal with a lot of monkeys… A lot of what I work towards is convincing them to let go of that coin, or nut, or orange, or Windows 7, to move on and change and learn.

Another question for us… What does a shot of baseball players in a field have to do with edtech… Well yes, “if you build it, they will come”. A lot of people believe this is how you deal with edtech… Now although a scheme funding technology for schools in England has come to an end, a lot of Free Schools now have this idea. That if you build something, magic will happen…

BTW this gogopp tool is a nice fun free tool – great for small groups…

So, I do a lot of “change management consultation” – it’s not a great phrase but a lot of what it’s about is pretty straightforward. Many schools don’t know what they’ve got – we audit the kit, the software, the skills. We work on a strategy, then a plan, then a budget. And then we look at changes that make sense… Small scale, pathfinder projects, student-led work – with students in positions of responsibility. We have a lot of TeachMeet sessions – a forum of 45 mins or so where staff who’ve worked on pathfinder projects have 2 or at most 5 minutes to share their experience – a way to drop golden nuggets into the day (much more effective than inset days!). And I do a lot of work with departmental heads to ensure software and hardware align with needs.

When there is the right strategy and the right pedagogical approach, brilliant things can happen. For instance…

Abdul Chohan, now principal of Bolton Academy, transformed his school with iPads – giving them out and asking them what to do with them. He works with Apple now…

David Mitchell (no, not that one), a Deputy Headteacher in the Northwest, started a project called QuadBlogging for his Year 6 students (Primary 7 in Scotland), whereby four organisations are involved – 2 schools and 2 other institutions, like MIT, like the Government – big organisations. Students get real life, real world feedback on their writing. They saw significant increases in their writing quality. That is a great benefit of educational technology – your audience can be as big or small as you want. It’s a nice safe contained forum for children’s writing.

Simon Blower had an idea called “Lend me your writing” and crowdfunded Pobble – a site where teachers can share examples of student work.

So those are three examples of pedagogically-driven technology projects and changes.

And now we are going to enter Kahoot.it…

The first question is about a free VLE – Edmodo… It’s free except for analytics, which is a paid-for option.

Next up… This is a free behaviour management tool. The “Class Story” function has recently been added… That’s Class Dojo.

Next… A wealth of free online courses, primarily aimed at science, maths and computing… Khan Academy. A really famous resource now. It came about when Salman Khan was asked for maths homework help… He made YouTube videos… Very popular, and now a global company with a real range of videos from teachers. No adverts. Again free…

And next… an adaptive learning platform with origins in the “School of One” in NYC. That’s Knewton. School of One is an interesting school which has done away with the traditional one-to-many classroom system… They use Knewton, which suggests the next class, module, task, etc. This is an “Intelligent Tutoring System”, which I am sceptical of, but there is a lot of interest from publishers etc. It’s all around personalised learning… But that is all data driven… I have issues with thinking of kids as data-producing units.

Next question… Office 365 tool allows for the creation of individual and class digital notebooks – OneNote. It’s a killer app that Microsoft invest in a lot.

And Patrick is our Kahoot winner (I’m second!)! Now, I use Kahoot in training sessions… It’s fun once… unless everyone uses it throughout the day. It’s important that students don’t just experience the same thing again and again, and that you work as a learning community to make sure that you are using tools in a way that stays interesting, that varies, etc.

So, what’s happening now in schools?

  • Mobility: BYOD, contribution, cross-platform agility
  • Office365/Google/iCloud
  • VLE/LMS – PLE/PLN – for staff and students
  • Data and tracking

So with mobility we see a growth in Bring Your Own Device… That brings a whole range of issues around esafety and infrastructure. It’s not just students’ own devices, but also increasingly a kind of hire-purchase contribution scheme for students and parents. That’s a financial pressure – schools are financially pressured and this is just a practical issue. One issue that repeatedly comes up is cross-platform agility – phones, tablets, laptops. And there is discussion of bringing in keyboards, mice, and traditional set-ups… Keyboard skills are being seen as important again in the primary sector. The benefit of mobile devices is collaboration, the idea of the main screen allowing everyone to be part of the classroom… You don’t need expensive software – you can use e.g. cheap Reflector mirroring software.

Apps… Some are brilliant, some are dreadful… Management of apps and mobile device management has become a huge industry… Working with technicians to support getting apps onto devices… How do you do volume purchasing? And a lot of apps are one- or two-hit propositions… You don’t want the same app every week for one task… There is a trade-off between what is useful and the staff time needed to get the app in place. We also have the issue of the student journey.

Tools like Socrative and Nearpod let you push information to devices. But we are now going to look at/try Plickers… What that does is use one device – the teacher’s mobile app – and printed codes (we’ve all been given one today) that can be laminated and handed out at the beginning of the year… We hold up a card with the appropriate answer at the top… And the teacher’s device is walked around to scan the room for the answers – a nice job for a student to do… So you can then see the responses… And the answer… I can see who got it wrong, and who got it right. I can see the graph of that…

We have a few easy questions to test this: 2+2 = (pick your answer); and how did you get here today? (mostly on foot!).

The idea is that it’s a way to get higher order questioning into a session; otherwise you just hear from the kids that put their hands up all the time. So that’s Plickers… Yes, they all have silly names. I used to live in Iceland where a committee meets to agree new names – the word for computer means “witchcraft machine”.

So, thinking about Office365/Google/iCloud… We are seeing a video about a school where pupils help promote, manage, code and support the use of Office365 in the school, and how that’s a way to get people into technology. These are students at Wyndham High in Norfolk – all real students. That school has adopted Office365. Both Office365 and Google offer educational environments. One of the reasons that schools err towards Office365 is the five free copies that students get – which covers the several locations and machines they may use at home.

OneNote is great – you can drag and drop documents… you can annotate… I use it with readings, with feedback from tutors. Why it’s useful for students is the facility to create Class Notebooks, where you add classes and add notebooks. You can set up a content library that students can access and use. You can also view all of the students’ notebooks in real time. In schools I work in we no longer have planners; instead we have a shared class notebook – then colleagues can see and understand planning.

Other new functionality is “Classroom”, where you can assign classes and assignments… It’s a new thing that brings some VLE functionality, but it is limited in that grades are 0-100. And you can set up forms as well – again in preview right now, but coming. Feedback goes into a CSV file in Excel.
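[A quick aside from me, not part of the talk: once you have that CSV export it only takes a few lines of Python to summarise responses. The sketch below is purely illustrative – the file name and the “Score” column are my own assumptions, since the real export depends on how the form was built.]

```python
# Minimal illustrative sketch for summarising a (hypothetical) Forms CSV export.
# "responses.csv" and the "Score" column are assumptions for illustration only.
import csv

with open("responses.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))          # one dict per response row

scores = [float(r["Score"]) for r in rows if r.get("Score")]
if scores:
    print(f"{len(rows)} responses, average score {sum(scores) / len(scores):.1f}")
else:
    print(f"{len(rows)} responses, no numeric scores found")
```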

The other thing that is new is Planner – a project planning tool to assign tasks, share documents, set up groups.

So, Office 365 is certainly the tool most secondary schools I work with use.

The other thing that is happening in schools right now is the increasing use of data dashboards and tracking tools – especially in secondary schools – and that is concerning as it’s fairly uncritical. There is a tool called Office Mix which lets you create tracked content in PowerPoint… Not sure if you have access here, but you can use it at home.

Other data tools in schools include Power BI… Schools are using these for e.g. attainment outcomes. There is a free schools version of this tool (it used to be too expensive). My concern is that this is not looking at what has impact in terms of teaching and learning. It’s focused on the summative, not the actual teaching and learning, not on students reporting back to teachers on their own learning. Hattie’s work on self-reported grades tells us that students can set expectations and goals, and understand rubrics for self-assessment. There is rich and interesting work to be done on using data in rich and meaningful ways.

In terms of what’s coming… This was supposed to be by 2025, then 2020, maybe sooner… The Education Technology Action Group suggests online learning as an entitlement, better measures of performance, new emerging forms of teaching and learning, wearables, etc.

Emerging EdTech includes Augmented Reality. It’s a big thing I do… It’s easy but it excites students… It’s a digital overlay on reality… So my two year old goddaughter has a colouring-in book that is augmented reality – you can then see a 3D virtual dinosaur coloured as per your image. And she asked her dad to send me a picture of her with a dinosaur. Other fun stuff… But where is the learning outcome here? Well, there is a tool called Aurasma… Another free tool… You create a new Aura trigger image – it can be anything – and you choose your overlay… So I said I wanted the words on the paper converted into French. It’s dead easy! We get small kids into this and can put loads of hidden AR content around the classroom; you can do it on t-shirts – to show the inner workings of the body, for instance. We’ve had Year 11s bring Year 7 textbooks to life for them – learning at both ends of the spectrum.

The last thing I want to talk about is micro:bit. This is about coding. In England and Wales coding is a compulsory part of the curriculum now. All students are being issued a micro:bit and students are now doing all sorts of creative things. The Young Rewired State project runs every summer, and participants come to London to have their code assessed – the winners were 5 and 6 year olds. So students will come to you with knowledge of coding – but they aren’t digital natives, no matter what anyone tells you!
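[Another aside from me: for anyone who hasn’t seen one, a first micro:bit programme in MicroPython tends to look something like the sketch below – scroll a message, then show different images depending on which button is pressed. This is just an illustrative example, not something from Helen’s talk.]

```python
# Minimal illustrative micro:bit MicroPython sketch (not from the talk).
from microbit import *

display.scroll("Hello!")           # scroll a message across the 5x5 LED grid

while True:
    if button_a.was_pressed():
        display.show(Image.HAPPY)  # built-in image when button A is pressed
    elif button_b.was_pressed():
        display.show(Image.HEART)  # a different image for button B
    sleep(100)                     # wait 100ms between checks
```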

Q&A

Q1 – Me) I wanted to ask about equality of access… How do you ensure students have the devices or internet access at home that they need to participate in these activities and tools – like the Office365 usage at home, for instance? In the RSE Digital Participation Inquiry we found that the reality of internet connectivity in homes really didn’t match up to what students will self-report about their own access to technology or internet connections – there is such baggage associated with not having internet access or access to the latest technologies and tools… So I was wondering how you deal with that, or if you have any comments on that.

A1) With the contribution schemes that schools have for devices… Parents contribute what they can, and the school covers the rest… So that can be 50p or £1 per month, it doesn’t need to be a lot. Also pupil premium money can be used for this. But, yes, parental engagement is important… Many students have 3G access rather than fixed internet, for instance, and that has cost implications… Some can use dongles supplied by schools, but just supporting students like this can cost £15k/yr for a small to medium sized cohort. There is some interesting stuff taking place in new build schools though… So for instance Gaia in Wales are a technology company doing a lot of the new build hardware/software set-up… In many of those schools there is community wifi access… a way around that issue of connectivity… But that’s a hard thing to solve.

Q1 – Me) There was a proposal some years ago from Gordon Brown’s government, for all school aged children to have government supported internet access at home but that has long since been dropped.

Q2) My fear with technologies is that if I learn one, it’s already out of date. And there are also learners who are not motivated to engage with tools they haven’t used before… I enjoyed these tools, they’re natty…

A2) Those are my “sweet shop” tools… Actually Office365/Google or things like Moodle are the bread and butter tools. These are fun one-off apps… They are pick-up-and-go stuff… but it’s getting the big tools working well that matters. Ignore the sweets if you need or want to… The big stuff matters.

And with that Velda is closing with great thanks to our speakers today, to colleagues in IAD, and to Daphne Loads and colleagues. Please do share your feedback and ideas, especially for the next forum!