Aug 09 2016
 
Notes from the Unleashing Data session at Repository Fringe 2016

After 6 years of being Repository Fringe’s resident live blogger this was the first year that I haven’t been part of the organisation or amplification in any official capacity. From what I’ve seen though my colleagues from EDINA, University of Edinburgh Library, and the DCC did an awesome job of putting together a really interesting programme for the 2016 edition of RepoFringe, attracting a big and diverse audience.

Whilst I was mainly participating through reading the tweets to #rfringe16, I couldn’t quite keep away!

Pauline Ward at Repository Fringe 2016

This year’s chair, Pauline Ward, asked me to be part of the Unleashing Data session on Tuesday 2nd August. The session was a “World Cafe” format and I was asked to help facilitate discussion around the question: “How can the repository community use crowd-sourcing (e.g. Citizen Science) to engage the public in reuse of data?” – so I was along wearing my COBWEB: Citizen Observatory Web and social media hats. My session also benefited from what I gather was an excellent talk on “The Social Life of Data” earlier in the event from Erinma Ochu (who, although I missed her this time, is always involved in really interesting projects including several fab citizen science initiatives).

I won’t attempt to reflect on all of the discussions during the Unleashing Data Session here – I know that Pauline will be reporting back from the session to Repository Fringe 2016 participants shortly – but I thought I would share a few pictures of our notes, capturing some of the ideas and discussions that came out of the various groups visiting this question throughout the session. Click the image to view a larger version. Questions or clarifications are welcome – just leave me a comment here on the blog.

Notes from the Unleashing Data session at Repository Fringe 2016

If you are interested in finding out more about crowd sourcing and citizen science in general then there are a couple of resources that may be helpful (plus many more resources and articles if you leave a comment/drop me an email with your particular interests).

This June I chaired the “Crowd-Sourcing Data and Citizen Science” breakout session for the Flooding and Coastal Erosion Risk Management Network (FCERM.NET) Annual Assembly in Newcastle. The short slide set created for that workshop gives a brief overview of some of the challenges and considerations in setting up and running citizen science projects:

Last October the CSCS Network interviewed me on developing and running Citizen Science projects for their website – the interview brings together some general thoughts as well as specific comment on the COBWEB experience:

After the Unleashing Data session I was also able to stick around for Stuart Lewis’ closing keynote. Stuart has been working at Edinburgh University since 2012 but is moving on soon to the National Library of Scotland so this was a lovely chance to get some of his reflections and predictions as he prepares to make that move. And to include quite a lot of fun references to The Secret Diary of Adrian Mole aged 13 ¾. (Before his talk Stuart had also snuck some boxes of sweets under some of the tables around the room – a popularity tactic I’m noting for future talks!)

So, my liveblog notes from Stuart’s talk (slightly tidied up but corrections are, of course, welcomed) follow. Because old Repofringe live blogging habits are hard to kick!

The Secret Diary of a Repository aged 13 ¾ – Stuart Lewis

I’m going to talk about our bread and butter – the institutional repository… Now my inspiration is Adrian Mole… Why? Well we have a bunch of teenage repositories… EPrints is 15 ½; Fedora is 13 ½; DSpace is 13 ¾.

Now Adrian Mole is a teenager – you can read about him on Wikipedia [note to fellow Wikipedia contributors: this, and most of the other Adrian Mole-related pages could use some major work!]. You see him quoted in two conferences to my amazement! And there are also some Scotland and Edinburgh entries in there too… Brought a haggis… Goes to Glasgow at 11am… and says he encounters 27 drunks in one hour…

Stuart Lewis at Repository Fringe 2016

Stuart Lewis illustrates the teenage birth dates of three of the major repository softwares as captured in (perhaps less well-aged) pop hits of the day.

So, I have four points to make about how repositories are like/unlike teenagers…

The thing about teenagers… People complain about them… They can be expensive, they can be awkward, they aren’t always self-aware… Eventually though they usually become useful members of society. So, is that true of repositories? Well ERA, one of our repositories, has gotten bigger and bigger – over 18k items… and over 10k paper theses currently being digitized…

Now teenagers also start to look around… Pandora!

I’m going to call Pandora the CRIS… And we’ve all kind of overlooked their commercial background because we are in love with them…!

Stuart Lewis at Repository Fringe 2016

Stuart Lewis captures the eternal optimism – both around Mole’s love of Pandora, and our love of the (commercial) CRIS.

Now, we have PURE at Edinburgh which also powers Edinburgh Research Explorer. When you looked at repositories a few years ago, it was a bit like Freshers Week… The three questions were: where are you from; what repository platform do you use; how many items do you have? But that’s moved on. We now have around 80% of our outputs in the repository within the REF compliance window (3 months of acceptance)… And that’s a huge change – volumes of materials are open access very promptly.

So,

1. We need to celebrate our success

But are our successes as positive as they could be?

Repositories continue to develop. We’ve heard good things about new developments. But how do repositories demonstrate value – and how do we compare to other areas of librarianship?

Other library domains use different numbers. We can use these to give comparative figures. How do we compare to publishers for cost? What’s our CPU (Cost Per Use)? And what is a good CPU? £10, £5, £0.46… But how easy is it to calculate – are repositories expensive? That’s a “to do” – to divide the cost of running the repository by its IRUS usage figures. I would expect it to be lower than publishers, but I’d like to do that calculation.
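The arithmetic itself is trivial – the hard part is agreeing what counts as the running cost and which usage figures to use. As a rough sketch (all figures below are invented for illustration, not Stuart’s numbers):

```python
# Back-of-envelope Cost Per Use (CPU): annual running cost divided by
# annual downloads (e.g. from IRUS-UK statistics). Figures are illustrative.

def cost_per_use(annual_running_cost_gbp: float, annual_downloads: int) -> float:
    """Return cost per use in pounds; downloads must be positive."""
    if annual_downloads <= 0:
        raise ValueError("annual_downloads must be positive")
    return annual_running_cost_gbp / annual_downloads

# A hypothetical repository costing £80,000/year with 400,000 downloads/year:
print(f"CPU: £{cost_per_use(80_000, 400_000):.2f}")  # CPU: £0.20
```

The interesting policy question is then what goes into the numerator – staff time, hosting, development – since that choice can move the figure by an order of magnitude.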

The other side of this is to become more self-aware… Can we gather new numbers? We only tend to look at deposit and use from our own repositories… What about our own local consumption of OA (the reverse)?

Working within new e-resource infrastructure – http://doai.io/ – lets us see where open versions are available. And we can integrate with OpenURL resolvers to see how much of our usage can be fulfilled.
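As a sketch of how such a lookup fits together: DOAI takes a DOI and redirects to an open-access copy when one is known. The code below only constructs the resolver URL (the example DOI is made up, and the actual network resolution is left commented out):

```python
# Sketch: build a DOAI resolver URL for a DOI. DOAI (http://doai.io/)
# redirects the request to an open-access copy of the article when it
# knows of one, falling back to the publisher version otherwise.
from urllib.parse import quote

DOAI_BASE = "http://doai.io/"

def doai_url(doi: str) -> str:
    """Return the DOAI resolver URL for a DOI (no network call made)."""
    return DOAI_BASE + quote(doi, safe="/.")

print(doai_url("10.1000/xyz123"))  # http://doai.io/10.1000/xyz123

# Actually resolving it (requires network access):
# import urllib.request
# resp = urllib.request.urlopen(doai_url("10.1000/xyz123"))
# print(resp.geturl())  # final location - open-access copy if available
```

An OpenURL resolver could use the same trick per-request to report how much local usage could have been satisfied by open copies.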

2. Our repositories must continue to grow up

Do we have double standards?

Hopefully you are all aware of the UK Text and Data Mining Copyright Exception that came into force on 1st June 2014. As universities we have massive access to electronic resources, and can text and data mine those.

Some do a good job here – Gale Cengage Historic British Newspapers: additional payment to buy all the data (images + XML text) on hard drives for local use. Working with local informatics LTG staff to (geo)parse the data.

Some are not so good – basic APIs allow only simple searches… but not complex queries (e.g. you could use a search term, but not, say, sentiment).

And many publishers do nothing at all….

So we are working with publishers to encourage and highlight the potential.

But what about our content? Our repositories are open, with extracted full-text, and data can be harvested… Sufficient, but is it ideal? Why not offer bulk download in one click? You can – for example – download all of Wikipedia (if you want to). We should be able to do that with our repositories.
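Most repositories already expose harvesting via the standard OAI-PMH protocol; what is missing is the one-click bulk option. A minimal sketch of building a harvest request (the endpoint URL here is a made-up example):

```python
# Sketch: construct an OAI-PMH ListRecords request, the standard way
# repository metadata is harvested in bulk. Responses are paged; the
# server returns a resumptionToken which you pass to fetch the next page.
from urllib.parse import urlencode

def list_records_url(endpoint, metadata_prefix="oai_dc", resumption_token=None):
    """Build a ListRecords URL; pass the resumptionToken to page onwards."""
    if resumption_token:
        params = {"verb": "ListRecords", "resumptionToken": resumption_token}
    else:
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    return endpoint + "?" + urlencode(params)

# Hypothetical endpoint - first page, then a follow-up page:
first = list_records_url("https://repository.example.ac.uk/oai")
nxt = list_records_url("https://repository.example.ac.uk/oai",
                       resumption_token="page2token")
print(first)
print(nxt)
```

Paging through resumption tokens is exactly the friction Stuart is pointing at: harvestable, yes, but a long way from “download everything in one click”.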

3. We need to get our house in order for Text and Data Mining

When will we be finished though? Depends on what we do with open access. What should we be doing with OA? Where do we want to get to? Right now we have mandates so it’s easy – green and gold. With gold there is pure Gold or Hybrid… Mixed views on Hybrid. Can also publish locally for free. Then for green there is local or disciplinary repositories… For Gold – pure, Hybrid, local – we pay APCs (some local options are free)… In Hybrid we can do offsetting, discounted subscriptions, voucher schemes too. And for green we have the UK Scholarly Communications License (Harvard)…

But which of these forms of OA are best?! Is choice always a great thing?

We still have outstanding OA issues. Is a mixed-modal approach OK, or should we choose a single route? Which one? What role will repositories play? What is the ultimate aim of Open Access? Is it “just” access?

How and where do we have these conversations? We need academics, repository managers, librarians, publishers to all come together to do this.

4. Do we know what a grown-up repository looks like? What part does it play?

Please remember to celebrate your repositories – we are in a fantastic place, making a real difference. But they need to continue to grow up. There is work to do with text and data mining… And we have more to do… To be a grown up, to be in the right sort of environment, etc.

Q&A

Q1) I can remember giving my first talk on repositories in 2010… When it comes to OA I think we need to think about what is cost effective, what is sustainable, why are we doing it and what’s the cost?

A1) I think in some ways that’s about what repositories are versus publishers… Right now we are essentially replicating them… And maybe that isn’t the way to approach this.

And with that Repository Fringe 2016 drew to a close. I am sure others will have already blogged their experiences and comments on the event. Do have a look at the Repository Fringe website and at #rfringe16 for more comments, shared blog posts, and resources from the sessions. 

Jun 29 2016
 

Today I am at the Flood and Coastal Erosion Risk Management Network (FCERM.net) 2016 Annual Assembly in Newcastle. The event brings together a really wide range of stakeholders engaged in flood risk management. I’m here to talk about crowd sourcing and citizen science, with both COBWEB and University of Edinburgh CSCS Network member hats on, as the event is focusing on future approaches to managing flood risk and of course citizen science offers some really interesting potential here.

I’m going to be liveblogging today but as the core flooding focus of the day is not my usual subject area I particularly welcome any corrections, additions, etc. 

The first section of the day is set up as: Future-Thinking in Flood Risk Management:

Welcome by Prof Garry Pender

Welcome to our third and final meeting of this programme of network meetings. Back at our first Assembly meeting we talked about projects we could do together, and we are pleased to say that two proposals are in the process of submissions. For today’s Assembly we will be looking to the future and future thinking about flood risk management.  There is a lot in the day but also we encourage you to discuss ideas, form your own breakout groups if you want.

And now onto our first speaker. Unfortunately Prof Hayley Fowler, Professor of Climate Change Impacts, Newcastle University cannot be with us today. But Chris Kilby has stepped in for Hayley.

Chris Kilby, Newcastle University – What can we expect with climate change? 

Today is 29th June, which means that four years ago today we had the “Toon Monsoon” –  around 50mm in 2 hours and the full city was in lockdown. We’ve seen some incidents like this in the last year, in London, and people are asking about whether that is climate change. And that incident has certainly changed thinking and practice in the flood risk management community. It’s certainly changed my practice – I’m now working with sewer systems which is not something I ever imagined.

Despite our spending millions of pounds on computer models, the so-called GCMs, these models seem increasingly hard to trust as the academic community realise how difficult predicting flooding risk actually is. It is near impossible to predict future rainfall – this whole area is riven with great uncertainty. There is a lot of good data and thinking behind them, but I now have far more concern about the usefulness of these models than 20 years ago – and that’s despite the fact that these models are a lot better than they were.

So, the climate is changing. We see some clear trends both locally and globally. A lot of these we can be confident of. Temperature rises and sea level rise we have great confidence in those trends. Rainfall seasonality change (more in winter, less in summer), and “heavy rainfall” in the UK at least, has been fairly well predicted. What has been less clear is the extremes of rainfall (convective), and extremes of rainfall like the Toon Monsoon. Those extremes are the hardest to predict, model, reproduce.

The so-called UKCP09 projections, from 2009, are still there and are still the predictions being used, but a lot has changed with the models we use and the predictions we are making. We haven’t put out any new projections – although that was originally the idea when the UKCP09 projections came out. So, officially, we are still using UKCP09. That produced coherent indications of more frequent and heavy rainfall in the UK. And UKCP09 suggests a 15-20% increase in Rmed in winter. But these projections are based on daily rainfall; what was not indicated here was the increased hourly rate. Some of the models point to decreased summer rainfall, which would mean a lower mean rainfall per hour, but actually that isn’t looking clear anymore. So there are clear gaps here, firstly with hourly level convective storms; and all climate models have the issue that, when it comes to “conveyer belt” sequences of storms, it’s not clear models reliably reproduce these.

So, this is all bad news so far… But there is some good news. More recent models (CMIP5) suggest some more summer storms and accommodate some convective summer storms. And those newer models – CMIP5 and those that follow – will feed into the new projections. And some more good news… The models used in UKCP09, even the high resolution regional models, ran on a resolution of 25km and were downscaled using a weather generator to 5km, but with no climate change information below the 25km scale. Therefore within the 25km grid box the rainfall is averaged and doesn’t adequately resolve movement of air and clouds, adding a layer of uncertainty, as computers aren’t big/fast enough to do a proper job of resolving individual cloud systems. But Hayley, and colleagues at the Met Office, have been running higher resolution climate models – similar to weather forecasting models – at something like a 1.5km grid size. Doing that with climate data and projecting over the long term does seem to resolve the convective storms. That’s good in terms of new information. Changes look quite substantial: summer precipitation intensities are expected to increase by 30-40% for short duration heavy events. That’s significantly higher than UKCP09 but there are limitations/caveats here too… So far the simulations are of the South East of England only, and have been over 10 years in duration, but we’d really want more like a 100-year model. And there is still poor understanding of the process and of what makes a thunderstorm – thermodynamic versus circulation changes may conflict. Local thermodynamics are important but that issue of circulation, the impacts of large outbreaks of warm air from across the continent, and the conflict between those processes is far from clear in terms of what makes the difference.

So, Hayley has been working on this with the Met Office, and she now has an EU project with colleagues in the Netherlands which is producing interesting initial results. There is a lot still to do but it does look like a larger increase in convection than we’d previously thought. Looking at winter storms we’ve seen an increase over the last few years. Even the UKCP09 models predicted some of this but so far we don’t see a complete change attributable to climate change.

Now, is any of this new? Our working experience and instrumental records tend to only go back 30-40 years, and that’s not long enough to understand climate change. So this is a quick note of historical work which has been taking place looking at Newcastle flooding history. Trawling through the records we see that the Toon Monsoon isn’t unheard of – we’ve had them three or four times in the last century:

  • 16th Sept 1913 – 2.85 inches (72mm) in 90 minutes
  • 22nd June 1941 – 1.97 inches (50mm) in 35 minutes and 3.74 inches (95mm) in 85 minutes
  • 28th June 2012 – 50mm in 90 minutes

So, these look like incidents every 40 years or so looking at historic records. That’s very different from the FEH type models and how they account for Fluvial flooding, storms, etc.
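To put those historic events on a common footing, the totals can be converted to an average intensity in mm/hr (a crude measure, since convective bursts are far from uniform over their duration):

```python
# Convert the historic Newcastle storm totals to average intensity (mm/hr).
INCH_MM = 25.4  # millimetres per inch

def intensity_mm_per_hr(depth_mm: float, duration_min: float) -> float:
    """Average rainfall intensity: total depth over duration in hours."""
    return depth_mm / (duration_min / 60)

events = [
    ("16 Sept 1913", 2.85 * INCH_MM, 90),   # ~72 mm in 90 minutes
    ("22 June 1941", 1.97 * INCH_MM, 35),   # ~50 mm in 35 minutes
    ("28 June 2012", 50.0, 90),             # the "Toon Monsoon"
]
for name, depth_mm, mins in events:
    print(f"{name}: {depth_mm:.0f} mm in {mins} min "
          f"= {intensity_mm_per_hr(depth_mm, mins):.0f} mm/hr")
```

On this measure the 1941 event (roughly 86 mm/hr over 35 minutes) was actually the most intense burst of the three, which underlines the talk’s point that sub-hourly extremes are what the models struggle with.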

In summary then, climate models produce inherently uncertain predictions, and major issues remain with extremes in general and hourly rainfall extremes in particular. The overall picture that is emerging is of increasing winter rainfall (intensity and frequency), potential for increased (summer) convective rainfall, and in any case there is evidence that climate variability over the last century has included similar extremes to those observed in the last decade.

And the work that Hayley and colleagues are doing is generating some really interesting results, so do watch this space for forthcoming papers etc.

Q&A

Q1) Is that historical data work just on Newcastle?

A1) David has looked at Newcastle and some parts of Scotland. Others are looking at other areas though.

Q2) Last week, on EU Referendum day, London saw extreme rainfall – not as major as here in 2012 – but it caused major impacts in terms of travel, moving of polling stations etc. So what else is taking place in terms of work to understand these events and impacts?

A2) OK, so impacts wise that’s a bit different. And a point of clarification – the “Toon Monsoon” wasn’t really a monsoon (it just rhymes with Toon). Now the rainfall in London and Brighton being reported looked to be 40mm in an hour, which would be similar or greater than in Newcastle, so I wouldn’t downplay them. The impact of these events on cities particularly is significant. In the same way that we’ve seen an increase in fluvial flooding in the last ten years, maybe we are also seeing an increase in these more intense shorter duration events. London is certainly very vulnerable – especially with underground systems. Newcastle Central here was closed because of water ingress at the front – it probably wouldn’t happen now as modifications have been made – and metro lines shut. Even the flooding event in Paris a few weeks back most severely impacted the underground rail/metro, roads and even the Louvre. I do worry that city planners have built in vulnerability for just this sort of event.

Q3) I work in flood risk management for Dumfries and Galloway – we were one of the areas experiencing very high rainfall. We rely heavily on models, rainfall predictions etc. But we had an event on 26th/27th January that wasn’t predicted at all – traffic was washed off the road, instrument peak records were broken, evacuations were planned. SEPA and the Met Office are looking at this but there is a gap here in handling this type of extreme rainfall on saturated ground.

A3) I’m not aware of that event, more so with the flooding on 26th December which caused flooding here in Newcastle and more widely. But that event does sound like the issue for the whole of that month for the whole country. It wasn’t just extreme amounts of daily rainfall, it was the fact that the previous month had also been very wet. That combination of several months of heavy rainfall, followed by extreme (if not record breaking on their own) events really is an issue – it’s the soul of hydrology. And that really hasn’t been recognised to date. The storm event statistics tend to be the focus rather than storms and the antecedent conditions. But this comes down to flood managers having their own rules to deal with this. In the North East this issue has arisen with the River Tyne, where the potential for repurposing rivers for flood water retention has been looked at – but you need 30 day predictions to be able to do that. And this kind of extreme event following a long period of rain really changes that and poses challenges.

Comment – Andy, Environment Agency) Just to note that EA DEFRA Wales have a programme to look at how we extend FEH but also looking at Paleo Geomorphology to extend that work. And some interesting results already.

Phil Younge, Environment Agency – The Future of Flood Risk Management

My role is as Yorkshire Major Incident Recovery Manager, and that involves three things: repairing damage; investing in at-risk communities; and engaging with those communities. I was brought in to do this because of another extreme weather event, and I’ll be talking about the sorts of things we need to do to address these types of challenges.

So, quickly, a bit of background on the Environment Agency. We are the national flood risk agency for England. And we have a broad remit including, for example, the risk implications of nuclear power stations, management of catchment areas, and work with other flood risk agencies. And we directly look after 7,100 km of river, coastal and tidal raised defences; 22,600 defences, with assets worth over £20 billion. There are lots of interventions we can make to reduce the risk to communities. But how do we engage with communities to make them more resilient to whatever the weather may throw at them? Pause on that thought and I’ll return to it shortly.

So I want to briefly talk about the winter storms of 2015-16. The Foss Barrier in York is what is shown in this image – and what happened there made national news in terms of the impact on central York. The water levels were unprecedentedly high. And this event was across the North of England, with record river levels across the region and we are talking probably 1 metre higher than we had experienced before, since records began. So the “what if” scenarios are really being triggered here. Some of the defences built as a result of events in 2009 were significantly overtopped, so we have to rethink what we plan for in the future. So we had record rainfall, 14 catchments experienced their highest ever river flow. But the investment we had put in made a difference, we protected over 20,000 properties during storms Desmond and Eva – even though some of those defences have been overtopped in 2009. We saw 15k households and 2,600 businesses flooded in 2009, with 150 communities visited by flood support officers. We issued 92 flood warnings – and we only do that when there is a genuine risk to loss of life. We had military support, temporary barriers in place, etc. for this event but the levels were truly unprecedented.

Significant damage was done to our flood defences across the North of England. In parts of Cumbria the speed and impact of the water, the force and energy of that water, did huge damage to roads and buildings. We have undertaken substantial work to repair those properties to the condition they were in before the rain. We are spending around £24 million to do that, and to do it at speed, by October 2016.

But what do we do about this? Within UK PLC how do we forecast and manage the impact and consequence of flooding across the country? Following the flooding in Cumbria Oliver Letwin set up the Flood Risk Resilience Review, to build upon the plans the Environment Agency and the Government already has, to look at what must be done differently to support communities across the whole of England. The Review has been working hard across the last four months, and there are four strands I want to share:

  • Modelling extreme weather and stress testing resilience to flood risk – What do we plan for? What is a realistic and scalable scenario to plan for? Looking back at that Yorkshire flooding, how does that compare to existing understanding of risk. Reflecting on likely extreme consequences as a yardstick for extreme scenarios.
  • Assessing the resilience of critical local infrastructure – How do we ensure that businesses still run, that we can run as a community. For instance in Leeds on Boxing Day our telecommunications were impacted by flooding. So how can we address that? How do we ensure water supply and treatment is fit for these events? How can we ensure our hospitals and health provision is appropriate? How can we ensure our transport infrastructure is up and running. As an aside the Leeds Boxing Day floods happened on a non working day – the Leeds rail station is the second busiest outside London so if that had happened on a working day the impact could have been quite different, much more severe.
  • Temporary defences – how can we move things around the country to reduce risk as needed, things like barriers and pumps. How do we move those? How do we assess when they are needed? How do we ensure we had the experience and skills to use those temporary defences? A review by the military has been wrapped into this Resilience Review.
  • Flood risk in core cities – London is being used as a benchmark, but we are also looking at cities like Leeds and how we invest to keep these core key cities operating at times of heightened flood risk.

So, we are looking at these areas, but also how we can help our communities to be more resilient. The Environment Agency are looking at community engagement and that’s particularly part of what we are here to do, to develop and work with the wider FCERM community.

We do have an investment programme from 2015-2021 which includes substantial capital investment. We are investing significantly in the North of England (e.g. £54 per person for everyone in Yorkshire, Lancashire, and Cumbria, also the East Midlands and Northumbria). And that long planning window is letting us be strategic, to invest based on evidence of need. And in the Budget 2016 there was an additional £700 million for flood risk management to better protect 4,745 homes and 1,700 businesses. There will also be specific injections of investment in places like Leeds, York, Carlisle etc. to ensure we can cope with incidents like we had last year.

One thing that really came out of last year was the issue of pace. As a community we are used to thinking slowly before acting, but there is a lot of pressure from communities and from Government to act fast, to get programmes of work underway within 12 months of flooding incidents. Is that fast? Not if you live in an affected area, but it’s faster than we may be used to. That’s where the wealth of knowledge and experience needs to be available to make the right decisions quickly. We have to work together to do this.

And we need to look at innovation… So we have created “Mr Nosy”, a camera to put down culverts(?) and inspect them. We used to (and do) have teams of people with breathing apparatus etc. to do this, but now we can put Mr Nosy down so that a team of two can inspect quickly. That saves time and money, and we need more innovations that allow us to do this.

The Pitt Review (2008), which looked at climate change and future flood and coastal risk management, discussed the challenges. There are many techniques to better defend a community, and we need the right blend of approaches: “flood risk cannot be managed by building ever bigger ‘hard’ defences”; natural measures are sustainable; multiple benefits for people, properties and wildlife; a multi-agency approach is the way forward. Community engagement is also crucial, to inform the community to understand the scale of the risk, and to understand how to live with risk in a positive way. So, this community enables us to work with research, we need that community engagement, and we need efficiency – that big government investment needs to be well spent, we need to work quickly and to shortcut to answers quickly, but those have to be the right answers. And this community is well placed to help us ensure that we are doing the right things, so that we can assure the communities, and assure the government, that we are doing the right things.

Q&A

Q1) When is that Review due to report?

A1) Currently scheduled for middle of July, but thereabouts.

Q2) You mentioned the dredging of watercourses… On the back of major floods we seem to have dredging, then no more six weeks later. For the public there is a perception that that will reduce flood risk, which is really the wrong message. And there are places that will continue to flood – maybe we have to move coastal towns back? You can’t just keep building walls that are bigger and bigger.

A2) Dredging depends so much on the circumstances. In Calderdale we are making a model so that people can understand what impact different measures have. Dredging helps but it isn’t the only thing. We have complex hydro-dynamic models, but how do we simply communicate how water levels are influenced, the ways we influence the river channel? Getting that message across will help us make changes with community understanding. In terms of adaptation I think you are spot on. Some communities will probably adapt because of that, but we can’t just build higher and higher walls. I am keen that flood risk is part of the vision for a community, and how that can be managed. Historically in the North East cities turned their backs on the river; as water quality has improved that has changed, which is great but brings its own challenges.

Q3) You mentioned a model, is that a physical model?

A3) Yes, a physical model to communicate that. We do go out and dredge where it is useful, but in many cases it is not which means we have to explain that when communities think it is the answer to flooding. Physical models are useful, apps are good… But how do we get across some of the challenges we face in river engineering.

Q4) You talked about community engagement but can you say more about what type of engagement that is?

A4) We go out into the communities, listen to the experiences and concerns, gathering evidence, understanding what that flooding means for them. Working with the local authorities, those areas are now producing plans. So we had an event in Calderdale marking six months since the flood, discussing plans etc. But we won’t please all the people all of the time, so we need to get engagement across the community. And we need that pace – which means bringing the community along, listening to them, bringing them into our plans… That is challenging but it is the right thing to do. At the end of the day they are the people living there, who need to be reassured about how we manage risk and deliver appropriate solutions.

The next section of the day looks at: Research into Practice – Lessons from Industry:

David Wilkes – Global Flood Resilience, Arup – Engineering Future Cities, Blue-Green Infrastructure

This is a bit of an amalgam of some of the work from the Blue-Green Cities EPSRC programme, which I was on the advisory board of, and some of our own work at Arup.

Right now 50% of the global population live in cities – over 3.2 billion people. As we look forward, by the middle of this century (2050) we are expecting growth so that around 70% of the world population will live in cities, so 6.3 billion.

We were asked a while ago to give some evidence to the Third Inquiry of the All Party Parliamentary Group for Excellence in the Built Environment into flood mitigation and resilience, and we wanted to give some clear recommendations: (1) spatial planning is the key to long term resilience; (2) implement a programme of improved surface water flood hazard mapping; (3) nurture capacity within the professional community to ensure quality work in understanding flood risk takes place, and provide career paths as part of that nurturing.

We were called into New York to give some support after Hurricane Sandy. They didn’t want a major reaction, a big change, instead they wanted a bottom up resilient approach, cross cutting areas including transportation, energy, land use, insurance and infrastructure finance. We proposed an iterative cycle around: redundancy; flexibility; safe failure; rapid rebound; constant learning. This is a quantum shift from our approach in the last 100 years so that learning is a crucial part of the process.

So, what is a Blue-Green city? Well if we look at the January 2014 rainfall anomaly map we see the shift from average annual rainfall. We saw huge flooding scarily close to London at that time, across the South East of England. Looking at the December 2015 map we see the rainfall anomaly again showing a huge shift from the average, again heavily in the South East, but also the South West and North of England. So, what do we do about that? Dredging may be part of this… But we need to be building with flood risk in mind, thinking laterally about what we do. And this is where the Blue-Green city idea comes in. There are many levels to this: understand the water cycle at catchment scale; align with other drivers and development needs; identify partners – people who might help you achieve things – and what their priorities are; build a shared case for investment and action; check how it is working and learn from experience.

Looking, for instance, at Hull we see a city long challenged by flooding. It is a low lying city, so to understand what could be done to reduce risk we needed to take a multi-faceted view across the long term: looking at frequency/likelihood of risk, understanding what is possible, looking at how changes and developments can also feed into local development. We have a few approaches available… There is the urban model, of drainage from concrete into underground drainage – the Blue – and the green model of absorbing surface water and managing it through green interventions.

In the Blue-Green Cities research approach you need to work with City Authority and Community Communications; you need to Model existing Flood Risk Management; Understand Citizens’ Behaviour; and you need to really make a clear Business Case for interventions. And as part of that process you need to overcome barriers to innovation – things like community expectations and changes, hazards, etc. In Newcastle, which volunteered to be a Blue-Green city research area, we formed the Newcastle Learning and Action Alliance to build a common shared understanding of what would be effective, acceptable, and practical. We really needed to understand citizens’ behaviours – local people are the local experts and you need to tap into that and respect that. People really value Blue-Green assets, but only if they understand how they work, the difference that they make. And indeed people offered to maintain Blue-Green assets – to remove litter etc. – but again, only if they value and understand their purpose. And the community really need to feel a sense of ownership to make Blue-Green solutions work.

It is also really important to have modelling, to show that, to support your business case. Options include hard and soft defences. The Brunton Park flood alleviation scheme included landscape proposals, which provided a really clear business case. Ofwat wanted investment from the energy sector; they knew the costs of conventional sewerage, and actually this alternative approach is good value, and significantly cheaper – as both sewer and flood solution – than the previous siloed approach. There are also Grey-Green options – landscaping to store water for quite straightforward purposes or more imaginative ones – and the water can be used for irrigation, wash down, etc. Again, building the business case is absolutely essential.

In the Blue-Green Cities research we were able to quantify direct and indirect costs to different stakeholders – primary industry, manufacturing, petroleum and chemical, utilities sector, construction, wholesale and retail, transport, hotels and restaurants, info and communication, financial and professional, other services. When you can quantify those costs you really have a strong case for the importance of interventions that reduce risk, that manage water appropriately. That matters whether spending tax payers money or convincing commercial partners to contribute to costs.

Commission of Inquiry into flood resilience of the future: “Living with Water” (2015), from the All Party Group for Excellence in the Built Environment, House of Commons, talk about “what is required is a fundamental change in how we view flood management…”

Q&A

Q1) I wanted to ask about how much green we would have to restore to make a difference? And I wanted to ask about the idea of local communities as the experts in their area but that can be problematic…

A1) I wouldn’t want to put a figure on the green space, you need to push the boundaries to make a real difference. But even small interventions can be significant. If the Blue-Green asset interrupts the flood path, that can be hugely significant. In terms of the costs of maintaining Blue-Green assets, well… I have a number of pet projects and ideas, and I think that things like parks and city bike ways, with a flood overflow that also encourages the community to use them, will clearly be costlier than flat tarmac. But you can get Sustrans, local businesses, etc. to support that infrastructure and, if you get it right, that supports a better community. Softer, greener interventions require more maintenance but they can give back to the community all year round, and there are ways to do that. You made another point about local people being the experts. Local people do know about their own locality. Arguably as seasoned professionals we also know quite a bit. The key thing is to not be patronising, not to pretend you haven’t listened, but to build consensus, to avoid head to head dispute, to work with them.

Stephen Garvin, Director Global Resilience Centre, BRE – Adapting to change – multiple events and FRM

I will be talking about the built environment, and adaptations of buildings for flood resilience. I think this afternoon’s workshops can develop some of these areas a bit. I thought it would be good to reflect on recent flooding, and the difficulty of addressing these situations. The nature of flooding can vary so greatly in terms of the type and nature of floods. For instance the 2007 floods were very different from the 2012 flooding and from the 2014 floods in terms of areas affected, the nature of the flood, etc. And then we saw the 2015/16 storms – the first time that every area at risk of flooding in Scotland and the North of the UK flooded – usually not all areas are hit at once.

In terms of the impact, water damage is a major factor. So, for instance in Cumbria in 2015, we had record rainfall, over-topped defences on the Rivers Eden and Petteril, and water depths of 1.5m in some properties in Carlisle. That depth of flooding was very striking. A lot of terraced properties, with underfloor voids, were affected in Carlisle. And water was coming in from every direction. We can’t always keep water from coming in, so in some ways the challenge is getting water out of the properties. How do we deal with it? Some of these properties had had flood resilience measures before – such as raising the height of electrical sockets – but they were not necessarily high enough or useful enough in light of the high water. And properties change hands, are rented to new tenants, extensions are added – the awareness isn’t consistently there and some changes increase vulnerability to flooding.

For instance, one property had had flood prevention measures put in place after the less severe 2005 floods – door surrounds, airbrick covers – and despite those measures water inundated the property. Why? Well, there had been a conservatory added which, despite best efforts to seal it, let in a great deal of water. They had also added an outdoor socket for a tumble dryer a few feet off the ground. So we have to think about these measures – are they appropriate? Do they manage the risk sufficiently? How do we handle the flood memory? You can have a flood resilient kitchen installed, but what happens when it is replaced?

There are two approaches really: Flood resilience essentially allows the water to come in, but the building and its materials are able to recover from flooding; by comparison Flood resistance is about keeping water out, dry proof materials etc. And there are two dimensions here, as we have to have a technical approach in design, construction, flood defences, sustainable approaches to drainage; and non-technical approaches – policy, regulation, decision making and engagement, etc. There are challenges here – construction firms are actually very small companies on the whole; more than 5 people is a big company. And we see insurers who are good at swinging into action after floods, but they do not always consider resilience or resistance that will have a long term impact, so we are working to encourage that approach, that idea of not replacing like for like but replacing with better, more flood resilient or resistant options. For instance there are solutions for apertures that are designed to keep water out to high depths – strong PVC doors, reinforced, and multi-point lockable for instance. In Germany, in Hamburg, they have doors like this (with perforated brickwork several feet higher!). You can also change materials, and change designs of e.g. power sockets, service entries, etc.

Professor Eric Nehlsen came up with the idea of cascading flood compartments with adaptive response, starting from adaptation to flooding dry and wet-proofing (where we tend to work) through to more ambitious ideas like adaptation by floating and amphibious housing… Some US coastal communities take the approach of raising properties off the ground, or creating floating construction, particularly where hurricanes occur, but that doesn’t feel like the right solution in many cases here… But we have to understand and consider alternative approaches.

There are standards for flood repair – supported by BRE and industry – and there are six standards that fit into this area, which outline approaches to flood risk assessment, planning for FRR, property surveys, design and specification of flood resilient repair, construction work, and maintenance and operation (some require maintenance over time). I’m going to use those standards for an FRR demonstration. We have offices in Watford in a Victorian Terrace, a 30 square metre space where we can test cases – we have done this for energy efficiency before, and have now done so for flooding. This gives us a space to show what can be achieved, what interventions can be made, to help insurers, construction, policy makers see the possibilities. The age of the building means it is a simple construction – concrete floor and brick walls – so nothing fancy here. You can imagine some tests of materials, but there are no standards for construction products for repair and new builds for flood resistance and resilience. It is still challenging to drive adoption though – essentially we have to disrupt normal business and practice to see that change to resistant or resilient building materials.

Q&A

Q1) One of the challenges for construction is that insurance issue of replacing “like for like”…

A1) It is a major challenge. Insurance is renewed every year, and often online rather than through brokers. We are seeing some insurers introducing resilience and resistance but it is not wide-scale yet. Flood resilience grants through DCLG for Local Authorities and individuals are helpful, but there is no guarantee of that continuing. And otherwise we need to make the case to the property owner, but that raises issues of affordability, cost, accessibility. So, a good question really.

Jaap Flikweert – Flood and Coastal Management Advisor, Royal HaskoningDHV – Resilience and adaptation: coastal management for the future

I’m going to give a practitioner’s perspective on ways of responding to climate change. I will particularly talk about adaptation, which tends to cover three different areas/meanings: Protection (reduce likelihood); Resilience (reduce consequence); and Adaptation, which I’m going to bluntly call “Relocation” (move receptors away). And I’ll talk about inland flooding, coastal flooding and coastal erosion… though I tend not to talk as much on coastal erosion because if we focus only on risk we can miss the opportunities. But I will be talking about risk – and I’ll be highlighting some areas for research as I talk.

So, how do we do our planning to manage the risk? I think the UK – England and Wales especially – is in the lead here in terms of Shoreline Management Plans: they take a long term and broad scale view, there is a policy for coastal defence (HtL (Hold the Line)/MR (Managed Realignment)/NAI (No Active Intervention)), and there is strong interaction with other sectors. Scotland is making progress here too. But there is a challenge in being flexible, in thinking about the process of change.

Setting future plans can be challenging – there is a great deal of uncertainty in terms of climate change, in terms of finances. We used to talk about a precautionary approach but I think we need to talk about “Managed-adaptive” approaches with decision pathways. For instance The Thames Barrier is an example of this sort of approach. This isn’t necessarily new work, there is a lot of good research to date about how to do this but it’s much more about mainstreaming that understanding and approach.

When we think about protection we need to think about how we sustain defences in a future with climate change. We will see loading increase (but the extent is uncertain); value at risk will increase (but the extent is uncertain); and coastal squeeze and longshore impacts. We will see our beaches disappear – with both environmental and flood risk implications. An example from the Netherlands shows HtL is feasible and affordable up to about 6m of sea level rise with sandy solutions (which also deal with coastal squeeze), though radical innovation is of vital importance.

We can’t translate that to the UK, it is a different context, but we need to see this as inspirational. In the UK we won’t hold the line for ever… So how do we deal with that? We can think about the structures, and I think there is a research opportunity here about how we justify buying time for adaptation, how we design for a short life (~20 years), and how we develop adaptable solutions. We can’t Hold the Line forever, but some communities are not ready for that change, so we have to work on what we can achieve and how.

In terms of Resilience we need to think about coastal flooding – in principle not different from inland flooding, design to minimise impact, but in practice that is more difficult, with lower chance/higher consequence raising challenges of less awareness, and more catastrophic results if it does happen. New Orleans would be a pertinent example here. And when we see raised buildings – as David mentioned – those aren’t always suitable for the UK; they change how a community looks, which may not be acceptable… Coastal erosion raises its own challenges too.

When we think of Adaptation/Relocation we have to acknowledge that protection is always technically possible, but what if it is unaffordable or unsustainable? For example a disaster in Grantham, Queensland: a major event in January 2011 led to protective measures, but also to the whole community moving inland in December 2011. There wasn’t a delay on funding etc. as this was an emergency; it forced the issue. But how can we do that in a planned way? We have Coastal Change Pathfinders. This approach is very valuable, including actual relocation, awareness, engagement lessons, policy innovation. But the approach is very difficult to mainstream because of funding, awareness, planning policy, local authority capacity. And here too I see research opportunities around making the business case for adaptation/relocation.

To take an example here that a colleague is working on. Fairbourne, Gwynedd, on the West Coast of Wales, is a small community with a few buildings from the 1890s, which has grown to 400 properties and over 800 people. Coastal defences were improved in 1981, and again in 2012. But this is a community which really shouldn’t be in that location in the long term; they are in the middle of flood plains. The Parish Council have adopted an SMP policy with goals across different timescales: in the short term, Hold the Line; medium term, Managed Realignment; and long term, No Active Intervention. There is a need to plan now to look at how we move from one position to another… So it isn’t dissemination that is needed here, it is true communication and engagement with the community, identifying who that community is to ensure that is effective.

So, in closing I think there is research needed around design for short life; consultation and engagement – about useful work done, lessons learned, moving from informing to involving to ownership, defining what a community is; Making the business case for supporting adaptation/relocation – investment in temporary protection to buy time; investment in increasing communities’ adaptive capacity; value of being prepared vs unprepared – damage (to the nation) such as lack of mobility, employability, burden on health and social services. And I’d like to close with the question: should we consider relocation for some inland areas at risk of flooding?

Q&A

Q1) That closing question… I was driving to a settlement in our area which has major flood risk, and is frequently cut off by snow in the winter. There are few jobs there, it is not strategically key although it has a heritage value perhaps. We could be throwing good money after bad to protect a small settlement like that which makes minimal contribution. So I would agree that we should look at relocation of some inland properties. Also, kudos to the parish council of Fairbourne for adopting that plan. We face real challenges as politicians are elected on 5 year terms, and persuading them that they need to get communities to understand the long term risks and impacts is really challenging.

A1) I think no-one would claim that Fairbourne was an easy process. The Council adopted the SMP but who goes to parish meetings? But BBC Wales picked it up, rather misreported the timelines, but that raised interest hugely. But it’s worth noting that a big part of Gwynedd and mid Wales faces these challenges. Understanding what we preserve, where investment goes… How do we live with the idea of people living below sea level. The Dutch manage that but in a very different way and it’s the full nation who are on board, very different in the UK.

Q2) What about adopting Dutch models for managing risk here?

A2) We’ve been looking at various ways that we can learn from Dutch approaches, and how that compares and translates to a UK context.

And now, in a change to plans, we are rejuggling the event to do some reflection on the network – led by Prof. Garry Pender – before lunch. We’ll return with 2 minute presentations after that. Garry is keen that all attending complete the event feedback forms on the network, the role of the network, resources and channels such as the website, webinars, events, etc. I am sure FCERM.net would also welcome comments and feedback by email from those from this community who are not able to attend today. 

Sharing Best Practice – Just 2-minutes – Mini presentations from delegates sharing output, experience and best practice

 

I wasn’t able to take many notes from this session, as I was presenting a 2 minute session from my COBWEB colleague Barry Evans (Aberystwyth University), on our co-design work and research associated with our collaboration with the Tal-y-bont Floodees in Mid-Wales. In fact various requirements to re-schedule the day meant that the afternoon was more interactive but also not really appropriate for real time notation so, from hereon, I’m summarising the day. 

At this point in the day we moved to the Parallel Breakout sessions on Tools for the Future. I am leading Workshop 1 on crowd sourcing so won’t be blogging them, but include their titles here for reference:

  • Workshop 1 – Crowd-Sourcing Data and Citizen Science – An exploration of tools used to source environmental data from the public led by Nicola Osborne CSCS Network with case studies from SEPA. Slides and resources from this session will be available online shortly.
  • Workshop 2 – Multi-event modelling for resilience in urban planning An introduction to tools for simulating multiple storm events with consideration of the impacts on planning in urban environments with case studies from BRE and Scottish Government
  • Workshop 3 – Building Resilient Communities Best-practice guidance on engaging with communities to build resilience, led by Dr Esther Carmen with case studies from the SESAME project

We finished the day with a session on Filling the Gaps– Future Projects:

Breakout time for discussion around future needs and projects

I joined a really interesting Community Engagement breakout session, considering research gaps and challenges. Unsurprisingly much of the discussion centred on what we mean by community and how we might go about identifying and building relationships with communities. In particular there was a focus on engaging with transient communities – thinking particularly about urban and commuter areas where there are frequent changes in the community. 

Final Thoughts from FCERM.net – Prof. Garry Pender 

As the afternoon was running behind Garry closed with thank yous to the speakers and contributors to the day. 

Oct 20 2015
 
Digital Footprint campaign logo

I am involved in organising, and very much looking forward to, two events this week which I think will be of interest to Edinburgh-based readers of this blog. Both are taking place on Thursday and I’ll try to either liveblog or summarise them here.

If you are based at Edinburgh University do consider booking these events or sharing the details with your colleagues or contacts at the University. If you are based further afield you might still be interested in taking a look at these and following up some of the links etc.

Firstly we have the fourth seminar of the new(ish) University of Edinburgh Crowd Sourcing and Citizen Science network:

Citizen Science and the Mass Media

Thursday, 22nd October 2015, 12 – 1.30 pm, Paterson’s Land 1.21, Old Moray House, Holyrood Road, Edinburgh.

“This session will be an opportunity to look at how media and communications can be used to promote a CSCS project and to engage and develop the community around a project.

The kinds of issues that we hope will be covered will include aspects such as understanding the purpose and audience for your project; gaining exposure from a project; communicating these types of projects effectively; engaging the press; expectation management;  practical issues such as timing, use of interviewees and quotes, etc.

We will have two guest presenters, Dave Kilbey from Natural Apptitude Ltd, and Ally Tibbitt from STV, followed by plenty of time for questions and discussion. The session will be chaired by Nicola Osborne (EDINA), drawing on her experience working on the COBWEB project.”

I am really excited about this session as both Dave and Ally have really interesting backgrounds: Dave runs his own app company and has worked on a range of high profile projects, so has some great insights into what makes a project appealing to the media, what makes the difference to that project’s success, etc.; Ally works at STV and has a background in journalism but also in community engagement, particularly around social and environmental projects. I think the combination will make for an excellent lunchtime session. UoE staff and students can register for the event via Eventbrite, here.

On the same day we have our Principal’s Teaching Award Scheme seminar for the Managing Your Digital Footprints project:

Social media, students and digital footprints (PTAS research findings)

Thursday, 22nd October 2015, 2 – 3.30pm, IAD Resources Room, 7 Bristo Square, George Square, Edinburgh.

“This short information and interactive session will present findings from the PTAS Digital Footprint research http://edin.ac/1d1qY4K

In order to understand how students are curating their digital presence, key findings from two student surveys (1457 responses) as well as data from 16 in-depth interviews with six students will be presented. This unique dataset provides an opportunity for us to critically reflect on the changing internet landscape and take stock of how students are currently using social media; how they are presenting themselves online; and what challenges they face, such as cyberbullying, viewing inappropriate content or whether they have the digital skills to successfully navigate in online spaces.

The session will also introduce the next phase of the Digital Footprint research: social media in a learning & teaching context.  There will be an opportunity to discuss e-professionalism and social media guidelines for inclusion in handbooks/VLEs, as well as other areas.”

I am also really excited about this event, at which Louise Connelly, Sian Bayne, and I will be talking about the early findings from our Managing Your Digital Footprints project, and some of the outputs from the research and campaign (find these at: www.ed.ac.uk/iad/digitalfootprint).

Although this event is open to University staff and students only (register via the Online Bookings system, here), we are disseminating this work at a variety of events, publications etc. Our recent ECSM 2015 paper is the best overview of the work to date, but expect to see more here in the near future about how we are taking this work forward. Do also get in touch with Louise or me if you have any questions about the project or would be interested in hearing more about the project, some of the associated training, or the research findings as they emerge.

May 28 2015
 
Image of the first CSCS seminar

This morning I am at the first seminar arranged by the University of Edinburgh Citizen Science and Crowdsourced Data and Evidence Network. The Network brings together those interested in citizen science and crowdsourcing from across the organisation, and this event is also supported by the Academic Networking Fund, IAD. Today’s seminar looks at the Zooniverse crowdsourcing organisation and suite of projects with two guest speakers, and I’ll be taking live notes here. As usual, because these are live notes there may be errors, typos, formatting issues, etc., and corrections are welcomed.

We are starting our day with an introduction by James Stewart on the focus of the network, which will particularly focus on methodological approaches.

Grant Miller (Zooniverse): ‘The Zooniverse – Real Science Online’

About Grant and his talk:

‘The Zooniverse is the world’s largest and most successful citizen science platform. I will discuss what we have learned from building over 40 projects, and where the platform is heading in the future.’ (Website: https://www.zooniverse.org/)

Grant Miller is a recovering astrophysicist who gained his PhD from the University of St Andrews, searching for planets orbiting distant stars. He is now the communications lead for the Zooniverse on-line citizen science platform.

I had kind of a weird introduction into crowdsourcing and citizen science.. But the main thing I will be talking about today is about how we engage the Zooniverse community to participate and enjoy doing that and being part of our community.

Zooniverse all started with Kevin, a student at Oxford who was tasked with looking at thousands of images of the universe to find two sorts of galaxies: elliptical galaxies and spiral galaxies. He had a million to classify. He did 50,000 and then met with his supervisor and made some strong arguments: he didn’t want to spend his whole academic career classifying galaxies, and he argued that it didn’t require his training. So, by show of hands, who thinks this image of a galaxy (we are looking at one of many) is an elliptical, and how many think it is a spiral? The room votes that this is a spiral, and it is indeed a spiral – and that’s basically how Zooniverse works. We show an image, we ask people what it is, and they choose. And people, en masse, really went for this. They went through huge amounts of images very quickly.
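The mechanics Grant describes – pool many independent answers per image and accept the majority once agreement is strong enough – can be sketched roughly as below. This is an illustrative sketch only: the function name and thresholds are made up for the example, and Zooniverse’s real aggregation pipelines are considerably more sophisticated (weighting volunteers, multi-task workflows, etc.).

```python
from collections import Counter

def aggregate_classifications(votes, min_votes=10, threshold=0.8):
    """Return a consensus (label, agreement) once enough volunteers agree.

    votes: list of labels (e.g. "spiral", "elliptical") from volunteers.
    Returns None while the image still needs more votes, or when the
    votes are too split to call.
    """
    if len(votes) < min_votes:
        return None  # keep showing the image to more volunteers
    label, count = Counter(votes).most_common(1)[0]
    agreement = count / len(votes)
    if agreement >= threshold:
        return (label, agreement)  # retire the image with this label
    return None  # ambiguous image: flag for expert review / discussion

# Example: 12 volunteers classify one galaxy image
votes = ["spiral"] * 10 + ["elliptical"] * 2
result = aggregate_classifications(votes)  # consensus: spiral
```

Retiring an image only once agreement crosses a threshold is what lets a handful of redundant lay classifications stand in for a single expert’s judgement.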

Other things started to happen too… The first community around the project was the Galaxy Zoo forum. A participant called Hanny found a thing (a “voorwerp”)… It didn’t look like the galaxies she was classifying. This was a completely new astronomical phenomenon, which was never known about. An amateur had found this through this very simple platform. People aren’t just good at recognising patterns, they also get distracted and find new things. And after discovering and publishing on this phenomenon – a huge cloud of gas associated with a galaxy – a group from the community decided to make a project of looking for more of these in other Galaxy Zoo images. And this is why communities are so brilliant. On another project our community found a whole new worm under the sea. That’s the power of having this community taking part.

So, how do we do this? Well we really simplify the language of the task, make it easy for people to take part. And when Galaxy Zoo took off we found other scientists and researchers approaching us to build new projects, including humanities projects and biological projects. So we set up projects such as Snapshot Serengeti – used to indicate what you can see in images from camera traps on the Serengeti. I was working with a group of computer scientists trying to work out how to automatically identify the objects in those images; I also showed one to my 4 year old nephew… he answered in seconds, and the computer scientists are still looking for a solution.

So at this point in time we now have 42 projects in the Zooniverse. Old Weather in 2010 was our first humanities project. It started as a climatology project, but because it was using historic ship logs, and those include so many other types of data, we found humanities researchers and historians coming on board, so it has had a second life. We have other humanities projects, cancer research projects, etc. Of those projects about 30-35 are currently live. We think this will expand rapidly soon, but I’ll come back to that. And last year we passed the 1 million volunteer mark – that’s registered volunteers. Mostly those are in Western Europe and North America, but we have participants in 200 countries (only 7 countries have none).

The community is expanding, the projects are expanding… But there is a lot of potential out there, a huge cognitive surplus we could be using. For instance Clay Shirky notes that 200 billion hours are spent watching TV by adults in the UK, while it took only 100 million hours to create Wikipedia. We are only beginning to tap that potential. On January 7th last year we relaunched a project called Space Warps, and we had over a million classifications an hour – after Prof Brian Cox and Dara Ó Briain asked the public to take part on live TV. That meant that overnight we had discovered an object it can take astronomers years to discover. It’s good but it’s no 200 billion hours… Imagine what you could do with that much time. Every hour there are 16 years’ worth of human effort spent playing Angry Birds… How do we get that effort into citizen science?

So, is gamification the way to go? For those working in citizen science you could probably run a week long conference just on whether you should or should not do gamification. We have decided not to, but some of the most successful projects – Foldit and Eyewire – do use it. Those projects gave huge thought to how to reward participants’ efforts in the right way so that people don’t just game the system. For us, we are worried that that won’t work: we are not convinced we would be good enough at building a game, and we would end up with something neither game nor citizen science. But some of our projects have tried gamification and we have studied this. On Galaxy Zoo we used a leader board to start with but that caused some tension: those in the lead were doing hundreds of thousands of classifications and people felt the leaders might have cheated, while others felt that they could never get there so just left. On Old Weather we enabled participants who focused on a particular ship’s log to become captain – but it put off as many people as it attracted. And those who became captain had nowhere to go.

This comes back to motivation for taking part. When we ask our volunteers, it frequently comes down to wanting to contribute to research. So, for instance, The Andromeda Project involved images that weren't that exciting… Participants were asked to circle star clusters. The task is simple, and they feel they are really contributing… They finished the task in a week. This time, when we had finished, we put up a message thanking participants for their contribution, saying that we had enough for the paper but they were welcome to carry on… And that shows a rapid fall to zero participation – they were only interested while the task at hand was useful. That pattern reminds us not to mess with our community: they give up precious spare time and they want to be doing something useful and meaningful.

Planet Hunters is a project we use to detect planets in the data. People don't take part just to discover planets; it is because they really are interested in the science. Some of our really active participants choose to download the data, write their own code, do work at PhD level as volunteers, and send results back… The planets discovered in that project are rare and weird – things we didn't spot with algorithms – the first one found had 4 suns. And recently we found a seven-planet solar system, the largest other than our own.

Volunteers are keen to go further, so we have a discussion area – called Talk – for all of our projects. That means you can post short, Twitter-style comments, or you can use old-style discussion boards for long-form discussions. Those areas are also used by the scientists, the researchers, the technical teams and developers, and the community can interact with them there – the most productive findings often come from that interaction between volunteers and scientists. The talk areas of our community are really important. In fact, in a network diagram of our community we can see some of our most active participants – one huge green blob on this diagram is a wonderful woman called Elizabeth who posts, comments, moderates, and helps fellow volunteers come along. And we are looking at those networks, at who those lynchpins are, etc.

I said that people write their own code and do their own analysis… So can we get that onto the site? We have been playing with a tools area, which we've tried for Galaxy Zoo and for Snapshot Serengeti. We've been funded to build a broader set of tools, to map data, etc. from the website itself.

One of the other big things we are trying to do is to translate the site. For instance, here is Galaxy Zoo in traditional Chinese characters. And we are doing this through crowdsourcing: you pick your site, and it shows words or sections for users to translate.

I talked about understanding the community and their interests and motivation. You also need to understand how we allocate images. We have done it based on seen/not seen, but have been toying with the idea of shaping what images you see based on what you have seen, particularly like, or are good at identifying. We tried that – shaping images to suit interested folk, on Snapshot Serengeti – and it wasn't that successful, and we realised we hadn't been showing those users blank images… So we looked at usage data to see to what extent seeing blank images affects classifying behaviour. It seems that the more blank images a user sees, the more they classify; users who classify a few/lots in one go leave the site sooner. But we aren't sure of the psychology here yet – classifying a blank image is one click, that's quick… But also, what is the reward for that image – is it just as rewarding to classify a blank image? There seems to be a sweet spot here… The same team trying to automatically spot a zebra has also been looking at detecting whether anything is in the image at all… But filtering blanks out that way may mean people leave the site sooner, so we could be shooting ourselves in the foot…
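To make the allocation idea concrete, here is a small hypothetical sketch (my illustration, not Zooniverse's actual code) of a subject queue that deliberately mixes likely-blank images in at a target rate, rather than filtering them out entirely – the "sweet spot" idea described above:

```python
import random

def build_queue(interesting, likely_blank, blank_fraction=0.3,
                length=20, seed=None):
    """Build a classification queue that mixes likely-blank subjects in
    at a chosen rate instead of removing them all.

    interesting / likely_blank: lists of subject ids (both hypothetical).
    blank_fraction: target share of blanks in the queue.
    """
    rng = random.Random(seed)
    n_blank = round(length * blank_fraction)
    n_interesting = length - n_blank
    # Sample without replacement from each pool, then interleave randomly
    # so volunteers cannot predict which images will be blank.
    queue = (rng.sample(interesting, n_interesting) +
             rng.sample(likely_blank, n_blank))
    rng.shuffle(queue)
    return queue
```

Tuning `blank_fraction` per user would be one way to experiment with the retention effect the team describes, under the assumption that some classifier can pre-flag likely blanks.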

So, we've been thinking about who should see what. And as part of that we have been trying, with some of the space image projects, putting some simulated images into the mix to rank/detect expertise – and comparing that to each volunteer's experience/expert level within the system. We want to see if there is a smarter way to run a Zooniverse project.
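The simulated-image approach amounts to scoring each volunteer against subjects whose answer is known in advance. A minimal sketch of that idea (hypothetical names and data shapes – not how Zooniverse actually implements it):

```python
def expertise_score(responses, gold_answers):
    """Estimate a volunteer's skill as their accuracy on seeded
    gold-standard (simulated) subjects mixed into their stream.

    responses:    {subject_id: volunteer's answer} for everything they saw
    gold_answers: {subject_id: known correct answer} for seeded subjects
    Returns a fraction in [0, 1], or None if they saw no seeded subjects.
    """
    seen = [sid for sid in responses if sid in gold_answers]
    if not seen:
        return None
    correct = sum(1 for sid in seen if responses[sid] == gold_answers[sid])
    return correct / len(seen)
```

A score like this could then be compared against a volunteer's experience level in the system, as the talk suggests, to see whether the two measures agree.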

The other thing that can happen is fear – a sort of classification anxiety. For instance, with cancer images, people can be quite scared to click the button and contribute to the research. So we are toying with showing volunteers how the consensus clustering works – showing people that their marking counts but that they are backed up by the wisdom of the crowds – and we think that may help them trust themselves. At the moment we just blog about this stuff, but how can we show it on the site?
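The reassuring property of consensus is that no single volunteer's mistake can move the result once enough people have classified a subject. In its simplest form (a hypothetical majority-vote sketch, not Zooniverse's actual aggregation pipeline) that looks like:

```python
from collections import Counter

def consensus(labels, threshold=0.5):
    """Combine many volunteers' labels for one subject by majority vote.

    labels: list of answers from different volunteers.
    Returns (winning_label, agreement_fraction); the label is None when
    agreement falls below the threshold, i.e. the crowd hasn't decided yet.
    """
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    fraction = votes / len(labels)
    return (label, fraction) if fraction >= threshold else (None, fraction)
```

Showing an anxious volunteer a figure like "your mark agreed with 80% of the crowd" is exactly the kind of feedback the talk is contemplating.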

Panoptes is our new infrastructure platform, which we've been building for the last year with 2 million dollars of funding from Google. The first project using it appeared on Stargazing Live this year, looking for supernovae – we discovered five supernovae during the week-long run of that programme. Panoptes is infrastructure we will be building projects on, but anyone can run projects on this site. You can build your own project with a name, introduction, research case, workflows – mark an ellipse, answer a question, etc. Then you upload your subjects/data as images. In our trials, scientists were building in half an hour things that would have taken our developer six months. We will be launching our beta today, and launching fully over the next two weeks… There are still only two types of workflow at launch: tree logic, and classifying. There are so many other kinds of question and task, and we hope to add facilities for them later on, notably humanities/transcription (consensus being the main problem there), audio, and video. We have tried audio and video before but they won't be in the first iteration of Panoptes. And we still have to answer the question of whether audio or video can work for citizen science – they are not that popular in our experience, but maybe that is about the projects, not the format… There are still lots of questions to answer.
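A "tree logic" workflow of the kind mentioned above is essentially a decision tree: each answer determines the next question. Here is a small hypothetical sketch of that structure (illustrative only – not the real Panoptes schema or API):

```python
# A hypothetical question-tree workflow: each task names a question and
# maps each possible answer to the next task (None ends the workflow).
workflow = {
    "smooth_or_features": {
        "question": "Is the galaxy smooth, or does it have features?",
        "answers": {"smooth": "round_question", "features": "spiral_question"},
    },
    "round_question": {
        "question": "How rounded is it?",
        "answers": {"completely": None, "in_between": None, "cigar": None},
    },
    "spiral_question": {
        "question": "Is there a spiral pattern?",
        "answers": {"yes": None, "no": None},
    },
}

def run_tree(workflow, choices, start="smooth_or_features"):
    """Walk the tree, taking the given answer at each step; returns the
    sequence of task keys the volunteer was shown."""
    path, task = [], start
    while task is not None:
        path.append(task)
        task = workflow[task]["answers"][choices[task]]
    return path
```

Letting researchers assemble a structure like this through a web form, rather than writing code, is what makes the "built in half an hour" claim plausible.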

Q&A

Q1) Can you say more about social motivation here? And what about subjectivity and objectivity – how much opportunity is there to learn, to become more able to identify things that have previously been ambiguous? Your predecessor talked about people popping on for a few minutes, not gaining…

A1) For citizen science, crowdsourcing and volunteering generally, the majority of people do just pop in briefly. The learning is often through the discussion areas. But we do see that people who do more classifications become better at it… And we see that the more comments people post in discussion, the more technical detail or terminology they include. But we are also trying to actively teach our volunteers. When I came in we started looking at ways to go further than the data processing – I wanted to create an educational course for Planet Hunters, maybe a 25-slide course offered through an invitation to take part every 10 classifications. People did opt in to that… And we thought that would improve classifications and keep volunteers in the system, as well as supporting them to learn. But we are still looking at ways to educate through the site.

Q2) Can you say more about who decides which projects are made live? There are so many research communities in the world – who's using the data? Also, is there any communication between the volunteers and the scientists?

A2) The process, until now, was that we got grant money to build citizen science projects and we put out calls for proposals. People would come to us with a case, and we would decide in-house as a team which seemed worth doing, were buildable, might be interesting to try. Research output was always put first – they had to have a good research case. We would get 50-100 proposals and build 5 per year. That has led to the new infrastructure. There is huge demand for citizen science, and all areas of science have huge amounts of data… But to some extent the problem still exists… I could put up 100,000 pictures when this platform goes live, so we will still be reviewing and filtering projects before they can become official Zooniverse projects. So you can use the platform to build private projects etc., but before they can be on the homepage they will be filtered, tested in beta, rated by the crowd, etc. On the communication front – that's mainly on the discussion boards. And each participant has a suitable label – you can tell who the researchers are. So when Hanny made her discovery, that was discussion boards, and researchers following up and discussing it. But some of our volunteers and science teams do their own thing with Google Groups, Hangouts, etc.

Q3) I'm interested in your use of the word "discovery" and what that might mean. That end point is easy to attribute, but how do you credit all the prior work?

A3) The first authors for a Planet Hunters paper are that research team, then us, then those who classified the planets. We try to attribute credit there. We are trying to work out how to credit everyone who has ever taken part – on the website, not on the papers – but it is now more complex. Even just in science it is complex – there are 30 people on that paper discovering a new planet… It becomes properly collaborative and hard to credit. We try to recognise anyone we think…

Q4) In general, but particularly thinking about the new platform, how are you handling the moderation of images, data and discussion? There seems to be potential for really problematic trolling/inappropriate activity here, but also legitimate but inappropriate images.

A4) We looked at various sites where you can upload images, and we liked Flickr's privacy policy. We can't review all the images or monitor all those projects, especially the private ones, so we rely on removing things if we do find them, and on sharing our ideals… And there is a grey area where people might share adult material within a legitimate research project – that will be case by case. In terms of comments etc., we do have moderators who can flag or delete comments, or can talk to volunteers about that. And we will keep those powers for people who moderate or have admin rights.

Q5) What do you mean by private projects?

A5) You will be able to create a project and share it only with those you send a link to. So we won't be able to review them all. Hopefully they will be built by those genuinely trying to run a research project, but we know people could use or abuse that facility, so we will state our policy, delete anything we need to, and report to the authorities if needed.

Q6) Researchers can already pay to use crowdsourcing – e.g. Crowd Power, Mechanical Turk. Is that something you will be doing?

A6) In theory someone could offer financial rewards for a project running on the platform; we won't facilitate that in the infrastructure, and we will be sharing our ideals and policies. I have no problem with financial incentives as long as that is above board, but that's not our model and not what we are offering. And there are serious citizen science questions about data quality where people are working for financial rewards. But it will be interesting to see what happens over the coming months.

Q7) Will all projects stay there forever?

A7) We already review our own projects – we do not want to waste people's time – and we will impress this on those using the new platform. We will also make it possible for people to share the final products – papers etc. – of those projects. Right now we have archive sites for our projects, and we link to a GitHub site for retired projects, data, etc.

Q8) Looking at loyalty across different projects: presumably you have a small number of people doing large amounts of work… Does that pattern of loyalty carry across different projects, or do people only get very loyal to one project?

A8) In the past we deliberately separated our projects – we didn't make great efforts to encourage volunteers to work across projects, which made it hard to switch between them. We've been thinking a lot about this: as we think about delivering the right data to the right user, we are also thinking about letting volunteers know about the projects that will be of interest to them.

Image showing consensus classifications in Galaxy Zoo

Grant shows an image annotated with consensus classifications in Galaxy Zoo

Mark Hartswood (Oxford University & CSCS Data and Evidence network founder): ‘Intervening in Citizen Science: From incentives to value co-creation’

About Mark and his talk:

‘This talk reflects upon a collaboration between SmartSociety, an EU project exploring how to architect effective collectives of people and machines, and the Zooniverse,  a leading on-line citizen science platform.

Our collaboration tackled the question of how to increase engagement of Zooniverse volunteers. In the talk I will chart how our thinking has progressed from framing volunteering in terms of motivation and incentives, and how it moved towards a much richer conceptualisation of multiple participating groups engaging in complicated relationships of value co-creation.’

Mark Hartswood is a Social Informatician whose main employer is Oxford University and currently working in the area of Responsible Research and Innovation.

I am going to start with an answer to one of Grant's questions… volunteers find it fun to see a surprising image – building up hope and tension for an exciting image… I'd taken this slide out of my slides but I thought I'd add it back in…

Grant: Isn’t it great when you see the same answer in two different places!

Mark: In my talk proper I'll be talking about motivations for participation, and I will be looking at several projects here: SOCIAM, Smart Society (which I work on) and Zooniverse, with acknowledgements to my colleagues on the study I will be talking about.

Our colleagues at Ben-Gurion University of the Negev have been looking at incentive schemes for crowdsourcing, and Zooniverse offered us an opportunity to try this out with a group of real volunteers…

Our study in a nutshell was:

  • Auto-ethnography – exploring Zooniverse as a volunteer
  • Survey of Zooniverse participants, looking at motivation, anxiety, engagement, disengagement. Targeted at volunteers active in the last three months
  • Develop an intervention to re-engage volunteers (essentially an email)
  • Intervention successful…

But that’s not the story I want to tell today. I want to talk about conceptualising citizen science as co-creation of value, looking at the literature and moving to a co-creation of value approach.

Literature-wise: peer production has been posed as a problem for economists in terms of understanding motivation (Benkler). Motivation for citizen science is important but seems hard to properly explain. Raddick et al found motivations were multiple and compound – from appreciating the scale and beauty of the universe, to supporting the scientific process, to a personal connection to the project. There can be a real mix, and participants give complex narratives. Motivations are also shown to be dynamic: they change, evolve, wax or wane (Rotman et al). And motivation is non-exhaustive in explaining participation – Eveleigh et al show that people may be highly motivated but not have the time or ability to participate in practice.

Coupled with motive are issues of reward and incentives. Often in the literature, motives are coupled with the idea that the right motives can lead to the use of rewards or incentives. Incentives are seen to generate interest, sustain engagement and improve quality in citizen science according to Prestopnik et al – or to exert a form of leverage, or to "program" participation (Maggi et al?).

So Dickinson et al (2012) looked at incentives and rewards. But there is a confusing combination of badges and certificates as incentives, discussion as a social incentive, and other incentives. Building community and recognising effort are also part of the mix. There is a real mix of individualised approaches and more social processes.

There are some real problem areas here. Kittur et al argue that motivation must be there first – incentives should just align motives to desired behaviour. Gamification can produce ambivalent results in citizen science (Darch; Preist et al). And incentives can create perverse outcomes (Sneddon et al).

We want not to ask what motivates people, but to ask how participation creates value for participants and for others. So what is co-creation of value? It has its origins in commerce: in the past, the idea was that value is created in the factory and delivered to the consumer. Currently the customer is active in creating the value of the product or service – promoting the product, designing new products, aiding diffusion. Value flows to the business, to the customer, and to other customers – see for instance WetSeal, which enables customers to combine garments into collections, to share those, to share images of themselves in garments, etc.

So, in citizen science too we can see co-creation of value. In a mature platform like Zooniverse there are complex types of value shared, and different forms of value are created by different participants. There are diverse reasons to participate and very varied levels of participation by individuals. There is a difference between value made collectively (e.g. casual users who make only a few classifications each) and value made individually (the few who make many classifications). We see conversations on forums – on, say, anomalies – and scientists' responses to those add value to the community and become resources for it; scientist blog posts also add to that, and help acknowledge the role of volunteers. Participants also build social capital via social media, which in turn promotes the platform. And through contributed data and project outputs we see materials like star catalogues becoming available for individuals to use in their own research.

So there are complex forms of value, and those values interact. Changes in incentives can therefore change dynamics in this web of value.

Looking at a scientist blog post – "There's a green one and a pink one and a blue one and a yellow one" – it begins with an image visualising all the contributions of a community, from super-active participants through to those making just a few classifications each. The text of the post speaks to the delicacy of talking about participants in a project with those dynamics, acknowledging contributions of all forms and emphasising that volume is not the only measure. The post is artfully written to achieve a number of delicate balances: each part of the crowd has to be acknowledged as valuable. It would be easy to praise the highly active participants and dismiss casual participants, and this post carefully avoids any sense of jealousy, unfairness, etc.

If we have complex dynamics in these webs of value and co-creation, what happens when incentives explicitly value one type of contribution over another? That brings us back to the effects of gamification. So, looking at Old Weather, where contributions enabled you to rise to the rank of captain… The leaderboard explicitly values volume of contribution. For non-gamers, game elements can be demotivating, and the heights of the leaderboard look inaccessible (see Darch). Leaderboards can also set a normative standard for contribution that demotivates the long tail (Preist et al). So, we think a co-creation model enables us to better understand the impact of changing these dynamics through incentives.

This takes us back to the interventions we looked at in our study… and to comments from Zooniverse participants. In terms of how volunteers became disengaged, that was about boredom, forgetting about the project, or distractions from work or home, and people said that an email when they haven't logged in might motivate them. So we looked at an email to remind volunteers about Zooniverse.

But there were other reasons too: ideas about achieving a level of mastery, and the feeling that if you are not reaching that it isn't valuable; or fear of classifying in case of mistakes. There we think an effective intervention might be reassurance about classification anxiety.

We also saw volunteers unaware of other projects being available to participate in – which can be resolved through signposting to other projects.

So, benefits of a co-creation perspective…

  • A more symmetrical idea – rather than motives being things the volunteer holds and incentives being things you do to the volunteer
  • Less individualistic – explains more complex relationships and dynamics between participating individuals and groups
  • Doesn't reject incentives or motivations – but puts them in a broader, non-individualistic framework
  • Opens up a broader framework for design, e.g. around diagnosing and repairing problems where participants fail to realise value for themselves or each other
  • Provides access to thinking about value, values and ethical dilemmas in participatory citizen science, based on principles of mutuality and equitability
  • Much of this is half-articulated in the citizen science literature – but moving away from the language and logic of incentives and motives helps realise it more fully.

Q1) I think you've both given brilliant talks. The motivations of students in learning environments – that's my area, and educators have been looking at this for some time, with intrinsic and extrinsic motivations. Is that something you are looking at?

A1) Is there a whole area of literature here then?

Q1) Betty Collis comes to mind on the issue of co-creation. But yes, there is a literature there in education.

A1) It would be interesting to make those connections there…

Comment) I think that you are also talking about the psychology of learning, and there are really different motivations there, some quite instrumental… Do you have any thoughts on that based on what you have seen in Zooniverse?

A2) I am certainly still exploring this area. But I think the idea that motivations are a priori has to be challenged. Zooniverse creates a space for volunteers to be challenged by things they may have never thought of before.

Q2) And what incentives would you recommend for an online learning forum?

A2) There is that diversity… and it is quite healthy. We don't necessarily want to convert this sort of person into that sort of person. Zooniverse is pretty successful in creating lots of different sorts of rooms – ways to participate in different sorts of ways. Catering to that diversity, and accepting it, is actually quite important.

Comment) A lot of the crowdsourcing systems in commercial and academic fields started very naively – with an individualised collective intelligence idea, realising the wisdom of the crowd – but then saw the community collaborating and changing things… So now we see the discovery of a world of people, normal social dynamics… But new things are brought to that space too – mutual new ideas that can help fields think about social organisation and motivation…

Comment) You are seeking to do something different to us (educators), but you are similarly trying to avoid negative experiences through cliques – you also don't want to create that.

Grant) We had a Zooniverse discussion board with many early super-users… They were quite cliquey. They were not hostile, but it was almost too much too soon for someone new coming in. They were using technical language, showing their knowledge, perhaps feeling or behaving in quite entitled ways. So we do think about how we get people to form a healthy community… and it's not something we have solved…

Comment) And you haven’t written that up, as that would be divisive.

Grant) Indeed, but we have been looking at new ways to tackle that potential issue – breaking down walls between projects is part of that – by relaunching Talk. We find commenters wanting a count of how many comments they have made – and we don't want to convey authority in that way. It is common in forums but we don't want to do it.

Comment) But people do invest time and knowledge… So levelling everyone to the same can diminish contribution.

Grant) I like that blog post Mark highlighted for its approach to acknowledging contributions of all types. We have to think about how to reward everyone without alienating the other types of contributors.

Mark) It’s not so much about levelling, but about emergent politics about values. And being thoughtful of those dynamics.

Comment) But to some extent you'll never understand the reasons for participation. There was a US project with two users who were way ahead… It proved to be a guy and his father-in-law competing!

Grant) There are a whole bunch of compound motivations – some may be petty, some may be…

Mark) We had some really lovely motivations and some really sad ones – terminally ill people wanting to make a contribution for instance. But there were also motivations that were total turn offs – some wanted to look at alien worlds, some found that disturbing or frightening. People had really individual perspectives.

Comment) You've talked about people sharing what they do to social media accounts – bypassing a lack of gamification by sharing in that way!

Grant) That is implemented more for sharing a lovely image – it's not about numbers but about sharing something interesting. We have talked about the idea – and have some new funding – to build a native Facebook app for four of our projects… But that sort of issue may arise there: whether personal announcement is motivating or not.

Comment) More open platforms do enable more entrepreneurship and different approaches… It becomes a game perhaps… There could be other things to search for… Scrapbooking the loveliest images, new ways into projects.

Grant) We are wary of gamification – it can create motivation for some, but it is kind of treacherous. We have also seen volunteers make their own games out of ungamified projects – tracking how many animals or types of galaxies etc. they have seen. And there are some who like the idea of a gamified Zooniverse project.

Q) How representative do you think the Zooniverse volunteers are? They are very heavily studied as a group, and the literature looks at a few niche groups, but how do they compare to that big pool of untapped talent – that 200 billion hours?

A) Demographically it was a very flat age range – very level participation across age ranges. Participants tended to be quite highly educated. So a lot of the untapped reserves would be among less educated people, perhaps.

Grant) As we indicated in our funding bid, we do have that flat age range, but we also have Facebook likes, and that lets us see a detailed demographic age range. We saw a massive discrepancy there, with loads of young people – those under 25 – who were interested on Facebook but didn't participate in the Zooniverse projects.

Mark: Under 18s weren’t in our study for ethical reasons…

Grant: But even looking just at 18-25 year olds that discrepancy between the Facebook likes and the participation applied.

Comment) Just on that gamification front – it does work, but why it works is really the issue.

And with that we are closing the session… This event has really shown the value of bringing very different people together in the room – that breadth of interests. I think that bodes well for our network as a whole, and will hopefully add real value to our events in the future.

 

Aug 29 2013
 

Following my post earlier this month on the Cabaret of Dangerous Ideas event which my colleagues Addy and Ben and I ran on Fieldtrip GB, I am delighted to have some additional follow-up.

The main reason for this follow-up is that Eccentronic have created a fantastic and, in their words, "quite bizarre" video using the map of public toilets we created specially for our event using Fieldtrip GB. Watch it in all of its glory here:

YouTube Preview Image

Do let Eccentronic know what you think – comment or like the video over on YouTube, tweet them @eccentronic, or leave comments below.

Now, this is definitely the most creative response I've seen to Fieldtrip GB… so far! I'm hoping it's the start of many weird and wonderful uses of the app! On that note, do share your own thoughts on our key question from the event:

If you could map anything in your community, what would it be and why?

here, in the comments below.

And the other goodies to share…

The Edinburgh Beltane Network – who were coordinating the Cabaret of Dangerous Ideas events with Susan Morrison – have now set most of the images from the very varied Cabaret of Dangerous Ideas events live here on Flickr.

Addy Pope speaking to the small audience at the Cabaret of Dangerous Ideas

Addy Pope speaking at our Cabaret of Dangerous Ideas event.

This leaves only the audio file from our event. If it sounds reasonable we will share it via an update to this post, so bookmark this post and keep an eye out!

:: Update: The Audio from our Fringe show is now live. You can download or play it via this MP3. ::

 August 29, 2013  Posted at 5:29 pm  Week In the Life  No Responses »