Jun 30, 2017

Today I’m at ReCon 2017, giving a presentation later today (flying the flag for the unconference sessions!) but also looking forward to a day full of interesting presentations on publishing for early career researchers.

I’ll be liveblogging (except for my session) and, as usual, comments, additions, corrections, etc. are welcomed. 

Jo Young, Director of the Scientific Editing Company, is introducing the day and thanking the various ReCon sponsors. She notes: ReCon started about five years ago (with a slightly different name). We’ve had really successful events – and you can explore them all online. We have had a really stellar list of speakers over the years! And on that note…

Graham Steel: We wanted to cover publishing at all stages, from preparing for publication, submission, journals, open journals, metrics, alt metrics, etc. So our first speakers are really from the mid point in that process.

SESSION ONE: Publishing’s future: Disruption and Evolution within the Industry

100% Open Access by 2020 or disrupting the present scholarly comms landscape: you can’t have both? A mid-way update – Pablo De Castro, Open Access Advocacy Librarian, University of Strathclyde

It is an honour to be at this well attended event today. Thank you for the invitation. It’s a long title but I will be talking about how things are progressing towards this goal of full open access by 2020, and to what extent institutions, funders, etc. are able to introduce disruption into the industry…

So, a quick introduction to me. I am currently at the University of Strathclyde library, having joined in January. It’s quite an old university (founded 1796) and a medium-sized one. Prior to that I was working in The Hague on the EC FP7 Post-Grant Open Access Pilot (OpenAIRE), providing funding to cover OA publishing fees for publications arising from completed FP7 projects. Maybe not the most popular topic in the UK right now but… The main point of explaining my context is that this EU work gave me more of a funder’s perspective, and now I’m able to compare that to more of an institutional perspective. As a result of this pilot there was a report commissioned by a British consultant: “Towards a competitive and sustainable open access publishing market in Europe”.

One key element in this open access EU pilot was the OA policy guidelines, which acted as key drivers and made eligibility criteria very clear. Notable here: publications in hybrid journals would not be funded, only fully open access journals; and there was a cap of no more than €2000 for research articles, €6000 for monographs. That was an attempt to shape costs and ensure accessibility of research publications.

So, now I’m back at the institutional open access coalface. Lots had changed in two years. And it’s great to be back in this space. It is allowing me to explore ways to better align institutional and funder positions on open access.

So, why open access? Well, in part this is about more exposure for your work, higher citation rates, and compliance with grant rules. But it’s also about use and reuse: researchers in developing countries, practitioners who can apply your work, policy makers, and the public and taxpayers can all access your work. In terms of the wider open access picture in Europe, there was a meeting in Brussels last May where European leaders called for immediate open access to all scientific papers by 2020. It’s not easy to achieve that but it does provide a major driver… However, we have EU member states with different levels of open access. The UK, Netherlands, Sweden and others prefer “gold” access, whilst Belgium, Cyprus, Denmark, Greece, etc. prefer “green” access, partly because the cost of gold open access is prohibitive.

Funders’ policies are a really significant driver towards open access. Funders include Arthritis Research UK, Bloodwise, Cancer Research UK, Breast Cancer Now, British Heart Foundation, Parkinson’s UK, Wellcome Trust, Research Councils UK, HEFCE, the European Commission, etc. Most support green and gold, and will pay APCs (Article Processing Charges), but it’s fair to say that early career researchers are not always at the front of the queue for getting those paid. HEFCE in particular have a green open access policy, requiring research outputs from any part of the university to be made open access; otherwise they will not be eligible for the REF (Research Excellence Framework). As a result, compliance levels are high – probably top of Europe at the moment. The European Commission supports green and gold open access, but typically green as this is more affordable.

So, there is a need for quick progress at the same time as ongoing pressure on library budgets – we pay both for subscriptions and for APCs. Offsetting agreements, which discount subscriptions by APC charges, could be a good solution. There are pros and cons here. In principle they allow quicker progress towards OA goals, but they disproportionately benefit legacy publishers. They bring publishers into APC reporting – right now APCs are sometimes invisible to the library because they are paid directly by researchers, so this is a shift and a challenge. It’s supposed to be a temporary stage towards full open access. And it’s a very expensive intermediate stage: not every country can or will afford it.

So how can disruption happen? Well, one way to deal with this would be through policies – declining to fund hybrid journals (as done in OpenAIRE). And disruption is happening (legal or otherwise), as we can see in Sci-Hub usage, which comes from all around the world, not just developing countries. Legal routes are possible in licensing negotiations. In Germany, Projekt DEAL is being negotiated. And this follows similar negotiations by openaccess.nl. At the moment Elsevier is the only publisher not willing to include open access journals.

In terms of tools… The EU has just announced plans to launch its own platform for funded research to be published. And the Wellcome Trust already has a space like this.

So, some conclusions… Open access is unstoppable now, but it still needs to generate sustainable and competitive implementation mechanisms. It is getting more complex and difficult to disseminate to researchers – that’s a serious risk. Open access will happen via a combination of strategies and routes – internal fights just aren’t useful (e.g. green vs gold). The temporary stage towards full open access needs to benefit library budgets sooner rather than later. And the power here really lies with researchers, whom OA advocates aren’t always able to keep informed. It is important that you know which journals are open and which are hybrid, and why that matters. And we need to ask whether informing authors about where it would make economic sense to publish lies beyond the remit of institutional libraries.

To finish, some recommended reading:

  • “Early Career Researchers: the Harbingers of Change” – Final report from Ciber, August 2016
  • “My Top 9 Reasons to Publish Open Access” – a great set of slides.

Q&A

Q1) It was interesting to hear about offsetting. Are those agreements one-off? continuous? renewed?

A1) At the moment they are one-off and intended to be a temporary measure. But they will probably mostly get renewed… National governments and consortia want to understand how useful they are, how they work.

Q2) Can you explain green open access and gold open access and the difference?

A2) In gold open access, the author pays to make the paper open on the journal website. If that’s a hybrid – i.e. subscription – journal you essentially pay twice: once to subscribe, once to make the work open. Green open access means that your article goes into your repository (after any embargo), into the worldwide repository landscape (see: https://www.jisc.ac.uk/guides/an-introduction-to-open-access).

Q3) As much as I agree that choices of where to publish are for researchers, there are other factors. The REF pressures you to publish in particular ways. Where can you find more on the relationships between different types of open access and impact? I think that can help?

A3) There are quite a number of studies. For instance, on whether APC level is related to impact factor – several studies look at that. In terms of the REF, funders like Wellcome are desperate to move away from the impact factor. It is hard but evolving.

Inputs, Outputs and emergent properties: The new Scientometrics – Phill Jones, Director of Publishing Innovation, Digital Science

Scientometrics is essentially the study of science metrics and evaluation of these. As Graham mentioned in his introduction, there is a whole complicated lifecycle and process of publishing. And what I will talk about spans that whole process.

But, to start, a bit about me and Digital Science. We were founded in 2011 and we are wholly owned by the Holtzbrinck Publishing Group, who own the Nature group. Being privately funded we are able to invest in innovation by researchers, for researchers, trying to create change from the ground up. Things like Labguru – a lab notebook (like RSpace); Altmetric; Figshare; ReadCube; Peerwith; Transcriptic – an IoT company, etc.

So, I’m going to introduce a concept: the Evaluation Gap. This is the difference between the metrics and indicators currently or traditionally available, and the information that those evaluating your research might actually want to know. Funders; tenure, hiring and promotion panels; universities – your institution, your office of research management; government and policy organisations – all want to achieve something with your research…

So, how do we close the evaluation gap? Introducing altmetrics. These add other types of societal impact to academic impact – policy documents, grey literature, mentions in blogs, peer review mentions, social media, etc. What else can you look at? Well, you can look at grants being awarded… When you see a grant awarded for a new idea, the grantee then publishes… someone else picks that up and publishes… That can take a long time, so grants can tell us things before publications do. You can also look at patents – a measure of commercialisation and potential economic impact further down the line.

So you see an idea germinate in one place, work with collaborators at the institution, spreading out to researchers at other institutions, and gradually out into the big wide world… As that idea travels outward it gathers more metadata, more impact, more associated materials, ideas, etc.

And at Digital Science we have innovators working across that landscape, along that scholarly lifecycle… But there is no point having that much data if you can’t understand and analyse it. You have to classify that data first… Historically that was done by subject area, but increasingly research is interdisciplinary; it crosses different fields. So single tags/subjects are not useful – you need a proper taxonomy to apply here. And there are various ways to do that. You need keywords and semantic modelling, and you can choose to:

  1. Use an existing one if available, e.g. MeSH (Medical Subject Headings).
  2. Consult with subject matter experts (the traditional way to do this, could be editors, researchers, faculty, librarians who you’d just ask “what are the keywords that describe computational social science”).
  3. Text mine abstracts or full-text articles (using the content to create a list from your corpus with bag-of-words/word-frequency approaches, for instance, to help you cluster and find the ideas, with a taxonomy emerging – see the sketch below).
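
As a rough illustration of option 3, here is a minimal sketch of that bag-of-words clustering idea using scikit-learn – the tiny corpus and cluster count are invented for illustration, not Digital Science’s actual pipeline:

```python
# Minimal sketch: cluster abstracts by word frequency so topics can emerge.
# The corpus and cluster count are invented; a real pipeline would use
# thousands of abstracts and tune both.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "Gravitational wave detection with laser interferometry",
    "Bayesian inference for social network analysis",
    "Laser cooling of trapped ions for quantum computing",
    "Agent-based models of opinion dynamics in social media",
]

# Bag-of-words (TF-IDF weighted) representation of each abstract.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

# Cluster the documents; each cluster approximates an emergent topic.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The most characteristic terms per cluster become candidate taxonomy labels.
terms = vectorizer.get_feature_names_out()
for i, centroid in enumerate(km.cluster_centers_):
    top = centroid.argsort()[::-1][:3]
    print(f"cluster {i}:", [terms[t] for t in top])
```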

Now, we are starting to take that text mining approach. But to be of use that data needs to be cleaned and curated. So we hand-curated a list of institutions to go into GRID, the Global Research Identifier Database, to understand organisations and their relationships. Once you have that all mapped you can look at ISNI, CrossRef databases, etc. And when you have that organisational information you can include georeferences to visualise where organisations are…
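
As a small sketch of that georeferencing step: assuming a local GRID-style CSV with name/latitude/longitude columns (GRID’s public releases included coordinates; the filename and column names here are assumptions, not a documented schema), plotting organisations is a few lines:

```python
# Minimal sketch: plot georeferenced organisations from a GRID-style CSV.
# The filename and column names ("name", "lat", "lng") are assumptions
# about the release format, not a documented schema.
import pandas as pd
import matplotlib.pyplot as plt

orgs = pd.read_csv("grid.csv").dropna(subset=["lat", "lng"])

plt.scatter(orgs["lng"], orgs["lat"], s=4, alpha=0.4)
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.title("Research organisations, georeferenced")
plt.show()
```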

An example that we built for HEFCE was the Digital Science BrainScan. The UK has a dual funding model where there is both direct funding and block funding, with the latter awarded by HEFCE and distributed according to the most impactful research as understood by the REF. So, for our BrainScan, we mapped research areas, connectors, etc. to visualise subject areas, their impact, and clusters of strong collaboration, to see where there are good opportunities for funding…

Similarly, we visualised text-mined impact statements across the whole corpus. Each impact is captured as a coloured dot. Clusters show similarity; where things are far apart, there is less similarity. And that can highlight where there is a lot of work on, for instance, management of rivers and waterways – clusters that weren’t obvious because the work crosses disciplines…

Q&A

Q1) Who do you think benefits the most from this kind of information?

A1) In the consultancy we have clients across the spectrum. In the past we have mainly worked for funders and policy makers to track effectiveness. Increasingly we are talking to institutions wanting to understand strengths, to predict trends… And by publishers wanting to understand if journals should be split, consolidated, are there opportunities we are missing… Each can benefit enormously. And it makes the whole system more efficient.

Against capital – Stuart Lawson, Birkbeck University of London

So, my talk will be a bit different. The arguments I will be making are not in opposition to any of the other speakers here, but are about critically addressing the current ways we are working, and how publishing works. I have chosen to speak on this topic today as I think it is important to make visible the political positions that underlie our assumptions and the systems we have in place today. There are calls to become more efficient but I disagree… Ownership and governance matter at least as much as the outcome.

I am an advocate for open access and I am currently undertaking a PhD looking at open access and how our discourse around it has been co-opted by neoliberal capitalism. And I believe these issues aren’t technical but social, and reflect inequalities in our society; any organisation claiming to benefit society while operating as a commercial company should raise questions for us.

Neoliberalism is a political project to reshape all social relations to conform to the logic of capital (this is the only slide; apparently a written and referenced copy will be posted on Stuart’s blog). This system turns us all into capital, entrepreneurs of ourselves – quantification and metricisation, whether through tuition fees that put a price on education and turn students into consumers selecting based on rational indicators of future income, or through pitting universities against each other rather than encouraging collaboration. It isn’t just overtly commercial, but about applying ideas of the market in all elements of our work – high impact factor journals, metrics, etc. in the service of proving our worth. If we do need metrics, they should be open and nuanced, but if we only do metrics for people’s own careers and perform for careers and promotion, then these play into neoliberal ideas of control. I fully understand the pressure – it is hard to live and do research without engaging and playing the game. It is easier to choose not to play if you are in a position of privilege, and that reflects and maintains inequalities in our organisations.

Since power relations are often about labour and worth, this is inevitably part of work, and the value of labour. When we hear about disruption in the context of Uber, it is about disrupting the rights of workers and labour unions; it ignores the needs of the people who do the work; it is a neoliberal idea. I would recommend seeing Audrey Watters’ recent presentation for the University of Edinburgh on the “Uberisation of Education”.

The power of capital in scholarly publishing, and neoliberal values in our scholarly processes… When disruptors align with the political forces that need to be dismantled, I don’t see that as useful or properly disruptive. Open access is a good thing in terms of access. But there are two main strands of policy… Research Councils have given researchers over £80m to pay APCs. Publishing open access does not require payment of fees – there are OA journals funded in other ways. But if you want the high-end visible journals they are often hybrid journals, and 80% of that RCUK spend has been on hybrid journals. So work is being made open access, but right now this money flows from public funds to a small group of publishers – who take a 30-40% profit – and that system was set up to continue benefiting publishers. Or you can share and publish to repositories… Those are free to deposit and use. A concern with OA policy is the connection to the REF: it constrains where you can publish, and outputs must always be measured within this restricted structure. It can be seen as compliance rather than a progressive movement toward social justice. But open access is having a really positive impact on the accessibility of research.

If you are angry at Elsevier, then you should also be angry at Oxford University and Cambridge University, and others for their relationships to the power elite. Harvard made a loud statement about journal pricing… It sounded good, and they have a progressive open access policy… But it is also bullshit – they have huge amounts of money… There are huge inequalities here in academia and in relationship to publishing.

And I would recommend strongly reading some history on the inequalities, and the racism and capitalism that was inherent to the founding of higher education so that we can critically reflect on what type of system we really want to discover and share scholarly work. Things have evolved over time – somewhat inevitably – but we need to be more deliberative so that universities are more accountable in their work.

To end on a more positive note, technology is enabling all sorts of new and inexpensive ways to publish and share. But we don’t need to depend on venture capital. Collective and cooperative running of organisations in these spaces – such as cooperative centres for research – exists in small-scale examples that show the principles, and that this can work. Writing, reviewing and editing is already being done by the academic community; let’s build governance and process models to continue that, to make it work, to ensure work is rewarded but that the driver isn’t commercial.

Q&A

Comment) That was awesome. A lot of us here are here to learn how to play the game. But the game sucks. I am a professor, I get to do a lot of fun things now, because I played the game… We need a way to have people able to do their work without that game. But we need something more specific than socialism… Libraries used to publish academic data… Lots of these metrics are there and useful… And I work with them… But I am conscious that we will be fucked by them. We need a way to react to that.

Redesigning Science for the Internet Generation – Gemma Milne, Co-Founder, Science Disrupt

Science Disrupt run regular podcasts, events, and a Slack channel for scientists, start-ups, VCs, etc. Check out our website. We talk about five focus areas of science. Today I wanted to talk about redesigning science for the internet age. My day job is in journalism, and I think a lot about start-ups, about how we can influence academia, and about how success manifests itself in the internet age.

So, what am I talking about? Things like Pavegen – power generating paving stones. They are all over the news! The press love them! BUT the science does not work, the physics does not work…

I don’t know if you heard about Theranos, which promised all sorts of medical testing from one drop of blood, attracted millions in investment, and then it all fell apart. But they too had tons of coverage…

I really like science start-ups, I like talking about science in a different way… But how can I convince the press, the wider audience, what is good stuff, and what is just hype, not real… One of the problems we face is that if you are not engaged in research you either can’t access the science, or can’t read it even when you can access it… This problem is really big and it influences where money goes and what sort of stuff gets done!

So, how can we change this? There are amazing tools to help (Authorea, Overleaf, protocols.io, Figshare, Publons, LabWorm) and this is great and exciting. But I feel it is very short term… Trying to change something that doesn’t work anyway… Doing collaborative lab notes a bit better, publishing a bit faster… OK… But is it good for sharing science? Thinking about journalists and corporates: they don’t care about academic publishing, it’s not where they go for scientific information. How do we rethink that… What if we were to rethink how we share science?

AirBnB and Amazon are on my slide here to make the point of the difference between incremental change vs. real change. AirBnB addressed issues with hotels, issues of hotels being samey… They didn’t build a hotel; instead they thought about what people want when they travel, what mattered to them… Similarly, Amazon didn’t try to incrementally improve supermarkets… They did something different. They dug to the bottom of why something exists and rethought it…

Imagine science was “invented” today (ignore all the realities of why that’s impossible). But imagine we think of this thing, we have to design it… How do we start? How will I ask questions, find others who ask questions…

So, a bit of a thought experiment here… Maybe I’d post a question on reddit, set up my own sub-reddit. I’d ask questions, ask why they are interested… Create a big thread. And if I have a lot of people, maybe I’ll have a Slack with various channels about all the facets around a question, invite people in… Use the group to project manage this project… OK, I have a team… Maybe I create a Meet Up Group for that same question… Get people to join… Maybe 200 people are now gathered and interested… You gather all these folk into one place. Now we want to analyse ideas. Maybe I share my question and initial code on GitHub, find collaborators… And share the code, make it open… Maybe it can be reused… It has been collaborative at every stage of the journey… Then maybe I want to build a microscope or something… I’d find the right people, I’d ask them to join my Autodesk 360 to collaboratively build engineering drawings for fabrication… So maybe we’ve answered our initial question… So maybe I blog that, and then I tweet that…

The point I’m trying to make is, there are so many tools out there for collaboration, for sharing… Why aren’t more researchers using these tools that are already there? Rather than designing new tools… These are all ways to engage and share what you do, rather than just publishing those articles in those journals…

So, maybe publishing isn’t the way at all? I get the “game”, but I am frustrated about how we properly engage, and really get your work out there; about getting industry to understand what is going on. There are lots of people inventing in new ways… You can use stuff in papers that isn’t being picked up… But see what else you can do!

So, what now? I know people are starved for time… But if you want to really make the impact that you think your work merits… I understand there is a concern around scooping… But there are ways to deal with that… And if you want to know about all these tools, do come talk to me!

Q&A

Q1) I think you are spot on with vision. We want faster more collaborative production. But what is missing from those tools is that they are not designed for researchers, they are not designed for publishing. Those systems are ephemeral… They don’t have DOIs and they aren’t persistent. For me it’s a bench to web pipeline…

A1) Then why not create a persistent archived URI – a webpage where all of a project’s content is shared. 50% of all academic papers are only read by the person that published them… These stumbling blocks in the way of sharing… It is crazy… We shouldn’t just stop and not share.

Q2) Thank you, that has given me a lot of food for thought. The issue of work not being read, I’ve been told that by funders so very relevant to me. So, how do we influence the professors… As a PhD student I haven’t heard about many of those online things…

A2) My co-founder of Science Disrupt is a computational biologist and PhD student… My response would be about not asking, just doing… Find networks, find people doing what you want. Benefit from collaboration. Sign an NDA if needed. Find the opportunity, then come back…

Q3) I had a comment and a question. Code repositories like GitHub are persistent and you can find a great list of code repositories and meta-articles around those on the Journal of Open Research Software. My question was about AirBnB and Amazon… Those have made huge changes but I think the narrative they use now is different from where they started – and they started more as incremental change… And they stumbled on bigger things, which looks a lot like research… So… How do you make that case for the potential long term impact of your work in a really engaging way?

A3) It is the golden question. Need to find case studies, to find interesting examples… a way to showcase similar examples… and how that led to things… Forget big pictures, jump the hurdles… Show that bigger picture that’s there but reduce the friction of those hurdles. Sure those companies were somewhat incremental but I think there is genuinely a really different mindset there that matters.

And we now move to lunch. Coming up…

UNCONFERENCE SESSION 1: Best Footprint Forward – Nicola Osborne, EDINA

This will be me – talking about managing a digital footprint and how robust web links are part of that lasting digital legacy – so no post from me, but you can view my slides on Managing Your Digital Footprint and our Reference Rot in Theses: A HiberActive Pilot here.

SESSION TWO: The Early Career Researcher Perspective: Publishing & Research Communication

Getting recognition for all your research outputs – Michael Markie, F1000

I’m going to talk about things you do as researchers that you should get credit for, not just traditional publications. This week in fact there was a very interesting article on the history of science publishing: “Is the staggeringly profitable business of scientific publishing bad for science?”. Publishers came out of that poorly… And I think others are at fault here too, including the research community… But we do have to take some blame.

There’s no getting away from the fact that the journal is the coin of the realm for career progression, institutional reporting, and grant applications. For the REF, will there be impact factors? REF says maybe not, but institutions will be tempted to use them to prioritise. Publishing is still being judged by impact factor…

And it’s not just where you publish. There are other things that you do in your work for which you should get more credit. Data; software/code – in bioinformatics there are new software and tools that are part of the research; are they getting the recognition they should?; all results – not just the successes but also the negative results… Publishers want cool and sexy stuff, but realistically we are funded for this work and we should be able to publish and be recognised for it; peer review – there is no credit for it, yet peer reviews often improve articles and warrant credit; expertise – all the authors who added expertise, including non-research staff: everyone should know who contributed what…

So I see research as being more than a journal article. Right now we just package it all up into one tidy thing, but we should be fitting into that bigger picture. So, I’m suggesting that we need to disrupt it a bit more and publish in a different way… Publishing introduces delays of up to a year. Journals don’t really care about data… That’s a real issue for reproducibility. And there is bias involved in publishing; there is a real lack of transparency in publishing decisions. All of the above means there is real research waste. At the same time there is demand for results, for quicker action, for wider access to work.

So, at F1000 we have been working on ways to address these issues. We launched Wellcome Open Research, and after launching that the Bill & Melinda Gates Foundation contacted us to build a similar platform. And we have also built an open research model for UCL Child Health (at Great Ormond Street).

The process involves sending a paper in, and checking that there is no plagiarism and that ethics are appropriate. But no other filtering. That can take up to 7 days. Then we ask for your data – no data, no publication. Then, once the publication and data deposition are made, the work is published and an open peer review and user commenting process begins; reviewers are named and credited, and they contribute to improving the article and to the article revision. Those reviewers have three options: approved, approved with reservations, or not approved as it stands. To get into PMC and indexed in PubMed you need either two “approved” statuses, or two “approved with reservations” plus one “approved”.
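
That indexing threshold is simple enough to state as code. Here is a minimal sketch of the rule as described above – a hypothetical helper, not F1000’s actual implementation:

```python
# Hypothetical check of the indexing threshold described in the talk:
# eligible with two "approved" reviews, or one "approved" plus two
# "approved with reservations". Not F1000's actual code.
def eligible_for_indexing(statuses):
    approved = statuses.count("approved")
    reservations = statuses.count("approved with reservations")
    return approved >= 2 or (approved >= 1 and reservations >= 2)

print(eligible_for_indexing(["approved", "approved"]))          # True
print(eligible_for_indexing(["approved",
                             "approved with reservations",
                             "approved with reservations"]))    # True
print(eligible_for_indexing(["approved", "not approved"]))      # False
```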

So this connects to lots of stuff… For data that’s DataCite, Figshare, Plotly, and the Resource Identification Initiative. For software/code we work with Code Ocean, Zenodo, GitHub. For all results we work with PubMed, and you can publish other formats… etc.

Why are funders doing this? The Wellcome Trust spent £7m on APCs last year… So this platform is partly a service to stakeholders, with a complementary capacity for all research findings. We are testing a new approach to improve science and its impact – to accelerate access and sharing of findings and data; efficiency to reduce waste and support reproducibility; an alternative OA model, etc.

Make an impact, know your impact, show your impact – Anna Ritchie, Mendeley, Elsevier

A theme across the day is that there is increasing pressure and challenges for researchers. It’s never been easier to get your work out – new technology, media, platforms. And yet, it’s never been harder to get your work seen: more researchers, producing more outputs, dealing with competition. So how do you ensure you and your work make an impact? Options mean opportunities, but also choices. Traditional publishing is still important – but not enough. And there are both older and newer ways to help make your research stand out.

The Publishing Campus is a big thing here. These are free resources to support you in publishing: online lectures, interactive training courses, and expert advice. And things happen – live webinars, online lectures (e.g. Top 10 Tips for Writing a Really Terrible Journal Article!), interactive courses. There are suites of materials around publishing and around developing your profile.

At some point you will want to look at choosing a journal. Metrics may be part of what you use to choose a journal – but use both quantitative and qualitative (e.g. ask colleagues and experts). You can also use Elsevier Journal Finder – you can search for your title and abstract and subject areas to suggest journals to target. But always check the journal guidance before submitting.

There is also the opportunity for article enrichments which will be part of your research story – 2D radiological data viewer, R code Viewer, Virtual Microscope, Genome Viewer, Audioslides, etc.

There are also less traditional journals: Heliyon covers all disciplines, so you can report your original and technically sound results of primary research, regardless of perceived impact. MethodsX is entirely about methods work. Data in Brief allows you to describe your data to facilitate reproducibility, make it easier to cite, etc. And an alternative to a data article is to add datasets on Mendeley.

And you can also use Mendeley to understand your impact through Mendeley Stats. There is a very detailed dashboard for each publication – this is powered by Scopus so works for all articles indexed in Scopus. Stats like Mendeley users with that article in their library, citations, related works… And you can see how your article is being shared. You can also show your impact on Mendeley, with a research profile that is as comprehensive as possible – not just your publications but wider impacts, press mentions… And it enables you to connect to other researchers, to other articles and opportunities. This is what we are trying to do to make Mendeley help you build your online profile as a researcher. We intend to grow those profiles to give a more comprehensive picture of you as a researcher.

And we want to hear from you. Every journal, platform, and product is co-developed with ongoing community input. So do get in touch!

How to share science with hard to reach groups and why you should bother – Becky Douglas

My background is physics – high energy physics, gravitational waves, etc. As I was doing my PhD I got very involved in science engagement. Hopefully most of you think of science communication and public outreach as a good thing. It does seem to be something that arises in job interviews and performance reviews. I’m not convinced that everyone should do this – not everyone enjoys or is good at it – but there is huge potential if you are enthusiastic. And there is more expectation on scientists to do this to gain recognition, to help bring trust back to scientists, and to right some misunderstandings. And by the way, talks and teaching don’t count here.

And not everyone goes to science festivals. It is up to us to provide alternative and interesting things for those people. There are a few people who won’t be interested in science… But there are many more people who don’t have time or don’t see the appeal to them. These people deserve access to new research… And there are many ways to communicate that research. New ideas are always worth doing, and can attract new people and get dialogue you’d never expect.

So, article writing is a great way to reach out… Not just in science magazines (or on personal blogs). Newspapers and magazines will often print science articles – reach out to them. And you can pitch other places too – Cosmo prints science. Mainstream publications are desperate for people who understand science to write about it in engaging ways – sometimes you’ll be paid for your work as well.

Schools are obvious, but they are great ways to access people from all backgrounds. You’ll do extra well if you can connect it to the current curriculum! Put the effort in to build a memorable activity or event. Send them home with something fun and you may well reach parents as well…

More unusual events would be things like theatre, for instance Lady Scientists Stitch and Bitch. Stitch and Bitch is an international thing where you get together and sew and craft and chat. So this show was a play about travelling back in time to gather all the key lady scientists, who sit down to discuss science over some knitting and sewing. Because it was theatre it drew an extremely diverse group, not people who usually go to science events. When you work with non-scientists you get access to a whole new crowd.

Something a bit more unusual… Soapbox Science, which I brought to Glasgow in 2015. It’s science busking where you talk about your cutting-edge research. Often attached to science festivals but out in public, to draw a crowd from those shopping, or visiting museums, etc. It’s highly interactive. Most attendees had not been to a science event before – they didn’t go out to see science, but they enjoyed it…

And finally, interact with local communities. The WI (Women’s Institute) have science events; Scouts and Guides; meet-up groups… You can just contact and reach out to those groups. They come with their own questions. It allows you to speak to really interesting groups, but it does require lots of time. I was based in Glasgow, now in Falkirk, and I’ve just done some of this with schools in the Gorbals, where we knew that the kids rarely go on to science subjects…

So, this is really worth doing. Your work, if it is taxpayer funded, should be accessible to the public. Some people don’t think they have an interest in science – some are right, but others just remember dusty chalkboards and bland textbooks. You have to show them it’s something more than that.

What helps or hinders science communication by early career researchers? – Lewis MacKenzie

I’m a postdoc at the University of Leeds. I’m a keen science communicator and I try to get out there as much as possible… I want to talk about what helps or hinders science communication by early career researchers.

So, who are early career researchers? Well, undergraduates are a huge pool of early career researchers and scientists which tends to be untapped; also PhD students; also postdocs. There are some shared barriers here: travel costs, time… That is especially the case in inaccessible parts of Scotland. There is a real issue in whether science communication counts as work (or training). And not all supervisors have a positive attitude to science communication. As well as all the other barriers to careers in science, of course.

Let’s start with science communication training. I’ve been through the system as an undergraduate, PhD student and postdoc. A lot of training is (rightly) targeted at PhD students, often around writing, conferences, elevator pitches, etc. But issues/barriers for ECRs include… Proactive sci comm is often not formally recognised as training/CPD/workload – especially at evenings and weekends. Undergraduate sci comm modules are minimal/non-existent. There are dedicated sci comm masters now, with lots to explore, but relatively poor sci comm training opportunities for postdocs. And across the board, media skills training is pretty much absent – how do you make YouTube videos, podcasts, web comics, or write for a magazine? That’s where a lot of science communication takes place!

Sci comm in schools includes some great stuff. STEMNET is an excellent route for ECRs, industry, retirees, etc. to volunteer, with some basic training, background checks, and a contact hub connecting schools and volunteers. However, the school system (especially in England) and its curricula are confusing. How do you do age-appropriate communication? And just getting to the schools can be tricky – most PhD students and sci comm people won’t have a car. It’s basic but important as a barrier.

Science communication competitions are quite widespread. They tend to be aimed at PhD students, the incentives being experience, training and prizes. But there are issues/barriers for ECRs: the often conventional “stand and talk” format; they are not usually collaborative – even though team work can be brilliant, and the big famous science communicators work with a team to put their shows together; and the intense pressure of competitions can be off-putting… Some alternative formats would help with that.

Conferences… Now there was a tweet earlier this week from @LizyLowe suggesting that every conference should have a public engagement strand – how good would that be?!

Research grant “impact plans”: major funders now require “impact plans” revolving around science communication. That makes time and money for science communication, which is great. But there are issues. The grant writer often designates activities before ECRs are recruited. These prescriptive impact plans aren’t very inspiring for ECRs. Money may be inefficiently spent on things like expensive web design. I think we need a more agile approach that includes input from ECRs once recruited.

Finally, I wanted to finish with science communication fellowships, such as the Wellcome Trust Engagement Fellowships and the STFC’s equivalents. These are for the Olympic gold medallists of sci comm. But they are not great for ECRs. The dates are annual and inflexible, and the process takes over 6 months – a slow decision-making process. And they are intensely competitive, so not very ECR friendly, which is a shame as many sci comm people are ECRs. So perhaps more institutions or agencies should offer sci comm fellowships? And a continuous application process with shorter spells?

To sum up… ECRs at different career stages require different training and organisational support to enable science communication. And science communication needs to be recognised as formal work/training/education – not an out of hours hobby! There are good initiatives out there but there could be many more.

PANEL DISCUSSION – Michael Markie, F1000 (MM); Anna Ritchie, Mendeley, Elsevier (AR); Becky Douglas (BD); Lewis MacKenzie (LM) – chaired by Joanna Young (JY)

Q1 (JY): Picking up on what you said about Pathways to Impact statements… What advice would you give to ECRs if they are completing one of these? What should they do?

A1 (LM): It’s quite a weird thing to do… There are two strands: this research will make loads of money and be commercialised; and the science communication strand. It’s easier to say you’ll do a science festival event, harder to say you’ll do a press release… You can say you will blog your work once a month, or tweet a day in the lab… You can do that. In my fellowship application I proposed a podcast on biophysics that I’d like to do. You can be creative with your science communication… But there is a danger that people aren’t imaginative and make it a box-ticking thing. Just doing a science festival event and a webpage isn’t that exciting. And those plans are written once… But projects run for maybe three years… Things change, skills change, people on the team change…

A1 (BD): As an ECR you can ask for help – ask supervisors, peers, ask online, ask colleagues… You can always ask for advice!

A1 (MM): I would echo that you should ask experienced people for help. And think tactically as different funders have their own priorities and areas of interest here too.

Q2: I totally agree with the importance of communicating your science… But showing impact of that is hard. And not all research is of interest to the public – playing devil’s advocate – so what do you do? Do you broaden it? Do you find another way in?

A2 (LM): Taking a step back and talking about broader areas is good… I talk a fair bit about undergraduates as science communicators… They have really good broad knowledge and interest, and they can be excellent. And this is where things like Soapbox Science can be so effective. There are other formats too… Things like Bright Club, which communicates research through comedy… That’s really different.

A2 (BD): I would agree with all of that. I would add that if you want to measure impact then you have to think about it from the outset – will you count people, use some sort of voting, or questionnaires? You have to plan this stuff in. The other thing is that you have to pitch things carefully to your audience. If I run events on gravitational waves I will talk about space and black holes… Whereas with a 5 year old I ask about gravity and we jump up and down so they understand what is relevant to them in their lives.

A2 (LM): In terms of metrics for science communication… This was a major theme at the British Science Association conference a few years back… Becky mentioned getting kids to post notes in boxes at sessions… Professional science communicators think a great deal about this… Maybe we “Sunday fun run” types not as much – but we should engage more.

Comment (AR): When you prepare an impact statement are you asked for metrics?

A2 (LM): Not usually… They want impact but don’t ask about that…

A2 (BD): Whether or not you are asked for details of how something went you do want to know how you did… And even if you just ask “Did you learn something new today?” that can be really helpful for understanding how it went.

Q3: I think there are too many metrics… As a microbiologist… which ones should I worry about? Should there be a module at the beginning of my PhD to tell me?

A3 (AR): There is no one metric… We don’t want a single number to sum us up. There are so many metrics because one number isn’t enough… There is experimentation going on with what works and what works for you… So be part of the conversation, and be part of the change.

A3 (MM): I think there are too many metrics too… We are experimenting. Altmetrics are indicators, there are citations, that’s tangible… We just have to live with a lot of them all at once at the moment!

UNCONFERENCE SESSION 2: Preprints: A journey through time – Graham Steel

This will be a quick talk plus plenty of discussion space… From the onset of thinking about this conference I was very keen to talk about preprints…

So, who knows what a preprint is? There are plenty of different definitions out there – see Neylon et al 2017. But we’ll take the Wikipedia definition for now. I thought preprints dated to the 1990s. But I found a paper that referenced a preprint from 1922!

Let’s start there… Preprints were ticking along fine… But then a fightback began: in 1966 preprints were made outlaws, when Nature wanted to take “lethal steps” to end them. In 1969 we had a thing called the “Ingelfinger Rule” – we’ll come back to that later… Technology-wise, things ticked along… In 1989 Tim Berners-Lee came along; in 1991 the web was set up at CERN, and arXiv was set up and grew swiftly… As of 2016 about 8k preprints are uploaded to arXiv each month. Then, in 2007-12, we had Nature Precedings…

But in 2007, the fightback began… In 2012 the Ingelfinger rule was creating stress… There are almost 35k journals, only 37 still use the Ingelfinger rule… But they include key journals like Cell.

But we also saw the launch of bioRxiv in 2013. And we’ve had an explosion of preprints since then… Also in 2013 the Center for Open Science was set up, with around $5m of funding. It offers a central space for preprints, with over 2m preprints so far. There are now a LOT of new …Xiv preprint sites. In 2015 we saw the launch of the ASAPbio movement.

Earlier this year Mark Zuckerberg (via the Chan Zuckerberg Initiative) invested in bioRxiv… But everything comes at a price…

Scotland spends on average £11m per year to access research through journals. The best average figure for APCs I could find is $906; per preprint it’s around $10. If you want to post a preprint you have to check the terms of your journal – usually these are extremely clear. Best to check in SHERPA/RoMEO.

If you want to find out more about preprints there is a great Twitter list, also some recommended preprints reading. Find these slides: slideshare.net/steelgraham and osf.io/zjps6/.

Q&A

Q1: I found Sherpa/Romeo by accident…. But really useful. Who runs it?

A1: It’s funded by Jisc

Q2: How about findability…

A2: ArXiv usually points to where this work has been submitted. And you can go back and add the DOI once published.

Q2: It’s acting as a static archive then? To hold the green copy

A2: And there is collaborative activity across that… And there is work to make those findable, to share them, they are shared on PubMed…

Q2: One of the problems I see is purely discoverability… Getting it easy to find on Google. And integration into knowledgebases, can be found in libraries, in portals… Hard for a researcher looking for a piece of research… They look for a subject, a topic, to search an aggregated platform and link out to it… To find the repository… So people know they have legal access to preprint copies.

A2: You have CORE at the Open University, which aggregates preprints and suggests additional items when you search. There is ongoing work to integrate with CRIS systems, which are frequently commercial, so there are interoperability questions here.

Comment: arXiv is still the place for high energy physics, so it is worth researchers going there directly too…

Q3: Can I ask about preprints and research evaluation in the US?

A3: It’s an important way to get the work out… But the lack of peer review is an issue there so emerging stuff there…

GS: My last paper was taking forever to come out, we thought it wasn’t going to happen… We posted to PeerJ but discovered that that journal did use the Inglefinger Rule which scuppered us…

Comment: There are some publishers that want to put preprints on their own platform, so everything stays within their space… How does that sit/conflict with what libraries do…

GS: It’s a bit “us! us! us!”

Comment: You could see all submitted to that journal, which is interesting… Maybe not health… What happens if not accepted… Do you get to pull it out? Do you see what else has been rejected? Could get dodgy… Some potential conflict…

Comment: I believe it is positioned as a separate entity, but with a path of least resistance… It’s a question… The thing is, if we want preprints to sit more in academia as opposed to with publishers… that means academia has to have the infrastructure to do that, to make connected repositories discoverable and aggregated… It’s a potentially competitive relationship… Interesting to see how it plays out…

Comment: For Scopus and Web of Science… Those won’t take preprints… Takes ages… And do you want to give up more rights to the journals… ?

Comment: Can see why people would want multiple copies held… That seems healthy… My fear is it requires a lot of community based organisation to be a sustainable and competitive workflow…

Comment: Worth noting the radical “platinum” open access… There are lots of preprints out there… Why not get authors to submit them, and organise them into a free, open journal without a publisher… That’s Tim Gowers’ thing… It’s not hard to put together a team to peer review thematically and put out issues of a journal with no charges…

GS: That’s very similar to the Open Library of Humanities… And the Wellcome Trust & Gates Foundation stuff, and the big EU platform. The Gates one could be huge. Wellcome Trust is relatively small so far… But an EU-wide platform will have major ramifications…

Comment: Platinum is more about overlay journals… Also like Scope3 and they do metrics on citations etc. to compare use…

GS: In open access we know about green and gold; with platinum it’s free to both author and reader… But the words are used differently in different contexts…

Q4: What do you think the future is for pre-prints?

A4 – GS: There is a huge boom… There’s currently some duplication across central open preprint platforms. But information on use is clear and uptake is on the rise… It will plateau at some point, like PLoS ONE. They launched in 2006 and probably plateaued around 2015. But PLoS ONE is still number 2 in the charts of mega-journals, behind Scientific Reports. They increased APCs (around $1450) and that didn’t help (especially as they were profitable)…

SESSION THREE: Raising your research profile: online engagement & metrics

Green, Gold, and Getting out there: How your choice of publisher services can affect your research profile and engagement – Laura Henderson, Editorial Program Manager, Frontiers

We are based in Lausanne in Switzerland. We are a fully digital, fully open access publisher. All 58 of our journals are published under CC-BY licenses. And the organisation was set up by scientists who wanted to change the landscape. So I wanted to talk today about how this can change your work.

What is traditional academic publishing?

Typically readers pay – journal subscriptions via institution/library or pay-per-view. Given the costs and number of articles it is expensive: $14B of journal revenue in 2014, across roughly 2 million articles, works out at around $7k per article. It’s slow too: the journal rejection cascade can take 6 months to a year each time. Up to 1 million papers – valid papers – are rejected every year. And access to research is limited: around 80% of research papers are behind subscription paywalls. So knowledge gets out very slowly and inaccessibly.

By comparison, open access… Well, green OA allows you to publish and then self-archive your paper in a repository where it can be accessed for free. You can use an institutional or central repository, or I’d suggest both. And there can be a delay due to an embargo. Gold OA makes research output immediately available from the publisher, and you retain the copyright, so no embargoes. It is fully discoverable via indexing and professional promotion services to relevant readers. There is no subscription fee to the reader, but it usually involves APCs paid by the institution.

How does open access publishing compare? Well, it inverts the funding – the institution/grant funder supports authors directly, rather than paying huge subscription fees for packages dictated by publishers. It’s cheaper – green OA is usually free, and the average gold OA fee is c. $1000-$3000, around half what is paid per article under subscription publishing. We do see projections of open access overtaking subscription publishing by 2020.

So, what benefits does open access bring? Well there is peer-review; scalable publishing platforms; impact metrics; author discoverability and reputation.

And I’d now like to show you what you should look for from any publisher – open access or others.

Firstly, you should expect basic services: quality assurance and indexing. Peter Suber suggests checking the DOAJ – Directory of Open Access Journals. You can also see if the publisher is part of OASPA, which excludes publishers who fail to meet its standards. What else? Look for peer review and good editors – see the joint COPE/OASPA/DOAJ Principles of Transparency and Best Practice in Scholarly Publishing. So you need clear peer review processes. And you need a governing board and editors.

At Frontiers we have an impact-neutral peer review process. We don’t screen for the papers with highest impact. Authors, reviewers and the handling Associate Editor interact directly with each other in the online forum. Names of editors and reviewers are published on the final version of the paper. And this leads to an average of 89 days from submission to acceptance – an industry-leading time, and one that won an ALPSP Innovation Award.

So, what are the extraordinary services a top OA publisher can provide? Well, altmetrics are more readily available now. Digital articles are accessible and trackable. At Frontiers our metrics are built into every paper… You can see views, downloads, and reader demographics. That’s post-publication analytics that doesn’t rely on impact factor. And it is community-led impact – your peers decide the impact and importance.

How discoverable are you? We launched a bespoke built-in networking profile for every author and user: Loop. It scrapes all major index databases to find your work – constantly updating. It’s linked to ORCID and is included in the peer review process. When people look at your profile you can truly see your impact in the world.

In terms of how peers find your work, we have article alerts going to 1 million people, and a newsletter that goes to 300k readers. And our articles have 250 million article views and downloads, with hotspots in Mountain View, California, and in Shenzhen, and areas of development in the “Global South”.

So when you look for a publisher, look for a publisher with global impact.

What are all these dots and what can linking them tell me? – Rachel Lammey, Crossref

Crossref are a not-for-profit organisation. So… We have articles out there, datasets, blogs, tweets, Wikipedia pages… We are really interested to understand these links. We are doing that through Crossref Event Data, tracking the conversation, mainly around objects with a DOI. The main way we use and mention publications is in the citations of articles. That’s the traditional way to discuss research and understand news. But research is being used in lots of different ways now – Twitter and Reddit…

So, where does Crossref fit in? It is the DOI registration agency for scholarly content. Publishers register their content with us. URLs change and break… And that means you need something more persistent so work can still be found and used in research… Last year at ReCon we tried to find DOI gaps in reference lists – hard to do. Even within journals, publications move around… and switch publishers… The DOI fixes that reference. We are sort of a switchboard for that information.
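
To make the “switchboard” idea concrete, here is a minimal sketch of resolving a DOI to its current metadata via the Crossref REST API (the DOI shown is Crossref’s well-known test DOI, just an example):

```python
# Minimal sketch: look up a DOI's current metadata via the Crossref REST
# API, so a reference keeps working even if the article's URL changes.
import requests

doi = "10.5555/12345678"  # example/test DOI; swap in a real one
resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
resp.raise_for_status()

work = resp.json()["message"]
print(work.get("title"), "|", work.get("publisher"), "|", work.get("URL"))
```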

I talked about citations and references… Now we are looking beyond that. It is about capturing data and relationships so that understanding and new services (by others) can be built… As such it’s an API (Application Programming Interface) – it’s lots of data rather than an interface. So it captures subject, relation, object: a tweet mentions a paper, etc. We are generating this data (as of yesterday we’ve seen 14m events), but we are not doing anything with it ourselves, so this is a clean set of data to do further work on.
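
For a sense of what querying those subject-relation-object events looks like, here is a minimal sketch against the Crossref Event Data API – the endpoint and parameter names are as I understand them from the public documentation, so treat them as assumptions to verify:

```python
# Minimal sketch: fetch events mentioning one DOI from Crossref Event Data.
# Endpoint and parameters per the public docs; the DOI is just an example.
import requests

params = {
    "obj-id": "https://doi.org/10.5555/12345678",  # example DOI
    "rows": 5,
}
resp = requests.get("https://api.eventdata.crossref.org/v1/events",
                    params=params, timeout=30)
resp.raise_for_status()

for event in resp.json()["message"]["events"]:
    # Each event is a subject-relation-object triple, e.g. a tweet
    # (subject) "discusses" (relation) a paper (object).
    print(event["subj_id"], event["relation_type_id"], event["obj_id"])
```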

We’ve been doing work with NISO Working Group on altmetrics, but again, providing the data not the analysis. So, what can this data show? We see citation rings/friends gaming the machine; potential peer review scams; citation patterns. How can you use this data? Almost any way. Come talk to us about Linked Data; Article Level Metrics; general discoverability, etc.

We’ve done some work ourselves… For instant the Live Data from all sources – including Wikipedia citing various pages… We have lots of members in Korea, and started looking just at citations on Korean Wikipedia. It’s free under a CC0 license. If you are interested, go make something cool… Come ask me questions… And we have a beta testing group and we welcome you feedback and experiments with our data!

The wonderful world of altmetrics: why researchers’ voices matter – Jean Liu, Product Development Manager, Altmetric

I’m actually five years out of graduate school, so I have some empathy with PhD students and ECRs. I really want to go through what Altmetrics is and what measures there are. It’s not controversial to say that altmetrics have been experiencing a meteoric rise over the last few years… That is partly because we have so much more to draw upon than the traditional journal impact factors, citation counts, etc.

So, who are Altmetric? We have about 20 employees, were founded in 2011, and are all based in London. And we’ve started to see that people are receptive to altmetrics, partly because of the (near) instant feedback… We tune into the Twitter firehose – that phrase is apt! Altmetrics also showcase many “flavours” of attention and impact that research can have – and not just articles. And the signals we track are highly varied: policy documents, news, blogs, Twitter, post-publication peer review, Facebook, Wikipedia, LinkedIn, Reddit, etc.
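
As an illustration of pulling those flavours of attention together for a single paper, here is a sketch using what I understand to be Altmetric’s free public details endpoint. This is my own example, not one from the talk, and the endpoint and field names are assumptions from the public API documentation, so check before relying on them:

```python
import requests

# Sketch: fetch the attention summary for one paper from Altmetric's
# public endpoint (rate-limited; heavier use needs an API key).
doi = "10.1038/nature12373"  # an arbitrary example DOI
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)

if resp.status_code == 404:
    print("No attention tracked for this DOI yet.")
else:
    resp.raise_for_status()
    data = resp.json()
    # A few of the "flavours" of attention mentioned above; field names
    # may differ, hence the defensive .get() calls.
    print("Altmetric score:", data.get("score"))
    print("Tweeters:", data.get("cited_by_tweeters_count"))
    print("News stories:", data.get("cited_by_msm_count"))
    print("Policy documents:", data.get("cited_by_policies_count"))
```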

Altmetrics also have limitations. They are not a replacement for peer review or citation-based metrics. They can be gamed – but data providers have measures in place to guard against this. We’ve seen interesting attempts at gaming – but they are often caught…

Researchers are not only the ones who receive attention in altmetrics; they are also the ones generating the attention that makes up altmetrics – but not all attention is high quality or trustworthy. We don’t want to suggest that researchers should be judged just on altmetrics…

Meanwhile universities are asking interesting questions: how can our researchers change policy? Which conferences are the most useful ones to send people to? Etc.

So, let’s take the topic of “diabetic neuropathy”. Looking around we can see a blog, an NHS/NICE guidance document, and a piece in The Conversation. A whole range of items here. And you can track attention over time… by volume, but you can also look at influencers across e.g. news outlets, policy outlets, blogs and tweeters. And you can understand where researcher voices feature (all are blogs). And I can then compare news and policy and see the difference. The profiles for news and blogs are quite different…

How can researchers’ voices be heard? Well, you can write for a different audience, you can raise the profile of your work… You can become that “go-to” person. You also want to be really effective when you are active – altmetrics can help you to understand where your audience is and how they respond, to understand what is working well.

And you can find out more by trying the Altmetric bookmarklet in your browser, by exploring these tools on publishing platforms (where available), or by just taking a look around.

How to help more people find and understand your work – Charlie Rapple, Kudos

I’m sorry to be the last person on the agenda, you’ll all be overwhelmed as there has been so much information!

I’m one of the founders of Kudos and we are an organisation dedicated to helping you increase the reach and impact of your work. There is such competition for funding, a huge growth in outputs, there is a huge fight for visibility and usage, a drive for accountability and a real cult of impact. You are expected to find and broaden the audience for your work, to engage with the public. And that is the context in which we set up Kudos. We want to help you navigate this new world.

Part of the challenge is knowing where to engage. We did a survey last year with around 3,000 participants asking how they share their work – conferences, academic networking and conversations with colleagues all ranked highly, whilst YouTube, SlideShare, etc. are less used.

Impact is built on readership – impact crosses a variety of areas… But essentially it comes down to getting people to find and read your work. So, for me it starts with making sure you increase the number of people reaching and engaging with your work. Hence the publication is at the centre – for now. That may well change as other material is shared.

We’ve talked a lot about metrics, there are very different ones and some will matter more to you than others. Citations have high value, but so do mentions, clicks, shares, downloads… Do take the time to think about these. And think about how your own actions and behaviours contribute back to those metrics… So if you email people about your work, track that to see if it works… Make those connections… Everyone has their own way and, as Nicola was saying in the Digital Footprint session, communities exist already, you have to get work out there… And your metrics have to be about correlating what happens – readership and citations. Kudos is a management tool for that.

In terms of justifying the time: communications do increase impact. We have been building up data on how that takes place. A team from Nanyang Technological University did a study of our data in 2016, and they saw that researchers using the Kudos tools to promote their work had 23% higher growth in downloads of the full text on publisher sites. And that really shows the value of doing that engagement. It will actually lead to meaningful results.

So, a quick look at how Kudos works… It’s free for researchers (www.growkudos.com) and it takes about 15 minutes to set up, and about 10 minutes each time you publish something new. You can find a publication, and you can use your ORCID if you have one… It’s easy to find your publication, and once you have, you have a page for it where you can create a plain language explanation of your work and why it is important – that is grounded in talking to researchers about what they need. For example: http://bit.ly/plantsdance. That plain text is separate from the abstract. It’s that first quick overview. The advantage of this is that it is easier for people within your field to skim and scan your work; people in academia but outside your field can get past the terminology and understand what you’ve said. It also helps people outside academia to get a handle on research and apply it in non-academic ways. People can actually access your work and actually understand it. There is a lot of research to back that up.

Also on the publication page you can add all the resources around your work – code, data, videos, interviews, etc. So for instance Claudia Sick does work on baboons and why they groom where they groom – her page brings the article and all of that press coverage together. The publication page gives you a URL, and you can post to social media from within Kudos. You can copy the trackable link and paste it wherever you like. The advantage of doing this in Kudos is that we can connect that up to all of your metrics and your work. You get them all in one place, and can map them against what you have done to communicate. And we map those actions to show which communications are more effective for sharing… You can really start to refine your efforts… You might have built networks in one space but the value might all be in another space.
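
The trackable-link idea is simple enough to sketch generically. The snippet below is purely illustrative – it is not Kudos’s actual implementation, and the URL and parameter name are made up – but it shows the underlying trick: tag each channel’s copy of the same link so that later clicks can be attributed to the action that produced them:

```python
from urllib.parse import urlencode

# Generic illustration (not Kudos's real scheme): tag one landing page
# with a per-channel parameter so click counts can be attributed later.
LANDING_PAGE = "https://example.org/my-paper"  # hypothetical URL

def trackable_link(channel: str) -> str:
    """Return the landing-page URL tagged with the sharing channel."""
    return f"{LANDING_PAGE}?{urlencode({'src': channel})}"

for channel in ("twitter", "facebook", "email", "conference-slide"):
    print(channel, "->", trackable_link(channel))
```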

Sign up now – we are about to launch a game on building up your profile and impact, which scores your research impact and lets you compare yourself to others.

PANEL DISCUSSION – Laura Henderson, Editorial Program Manager, Frontiers (LH); Rachel Lammey, Crossref (RL); Jean Liu, Product Development Manager, Altmetric (JL); Charlie Rapple, Kudos (CR). 

Q1: Really interesting but how will the community decide which spaces we should use?

A1 (CR): Yes, in the Nanyang work we found that most work was shared on Facebook, but more links were engaged with on Twitter. There is more to be done, and more to filter through… But we have to keep building up the data…

A1 (LH): We are coming from the same sort of place as Jean there, altmetrics are built into Frontiers, connected to ORCID, Loop built to connect to institutional plugins (totally open plugin). But it is such a challenge… Facebook, Twitter, LinkedIn, SnapChat… Usually personal choice really, we just want to make it easier…

A1 (JL): It’s about interoperability. We are all working in it together. You will find certain stats on certain pages…

A1 (RL): It’s personal choice, it’s interoperability… But it is about options. Part of the issue with impact factor is the issue of being judged by something you don’t have any choice or impact upon… And I think that we need to give new tools, ways to select what is right for them.

Q2: These seem like great tools, but how do we persuade funders?

A2 (JL): We have found funders being interested independently, particularly in the US. There is this feeling across the scholarly community that things have to change… And funders want to look at what might work, they are already interested.

A2 (LH): We have an office in Brussels which lobbies the European Commission; we are trying to get our voice for Open Science heard, to make a difference to policies and mandates… The impact factor has been convenient, it’s well embedded, it was designed by an institutional librarian – so we are out lobbying for change.

A2 (CR): Convenience is key. Nothing has changed because nothing has been convenient enough to replace the impact factor. There is a lot of work and innovation in this area, and it is not only on researchers to make that change happen, it’s on all of us to make that change happen now.

Jo Young (JY): To finish, a few thank yous… Thank you all for coming along today, to all of our speakers, and a huge thank you to Peter and Radic (our cameramen), to Anders, Graham and Jan for their work in planning this. And to Nicola and Amy who have been liveblogging, and to all who have been tweeting. Huge thanks to CrossRef, Frontiers, F1000, JYMedia, and PLoS.

And with that we are done. Thanks to all for a really interesting and busy day!

 

Nov 232013
 
[Image: photo of book bag]

Today I have been liveblogging – by invitation no less – at the Society of Young Publishers Conference 2013 in Oxford, and EDINA is proud to sponsor the event through my participation. The event is entitled “Life in Publishing: It’s more than just books (and Tumblr)“, its theme being the future of publishing in the digital world (and the inspiration for the name coming from this Tumblr blog).

My notes from the day can be found over on the SYP blog, Press Forward: http://thesyp.org.uk/syp-conference-2013-liveblog/

You can also view tweets from the event on #SYPC13.

Anyone interested in data, app development, digital publishing or disruption should find something of interest in there… and for me it has been a fun and informative day! And if you have been at the conference and are interested in what EDINA does around publishing and publishers I would recommend taking a look at the UK LOCKSS Alliance, CLOCKSS,  The Keepers Registry, and The UK Access Management Federation.

May 022013
 

Today I am blogging from the University of Edinburgh Digital Scholarship Day of Ideas 2, a day long look at research in the digital humanities and social sciences. You can find out more on the event on the Digital HSS website. As usual these are live blog posts so apologies for any spelling errors, typos, etc. And please do leave your comments and corrections here.

Professor Dorothy Miell, Head of the College of Humanities and Social Sciences, is introducing the day. Last year we shaped the day around external speakers, but we are well aware that there is such a wealth of work taking place here in Edinburgh, so this year we have reshaped the event to include more input from researchers here in Edinburgh, with break out sessions and discussion time. The event is part of a programme of events in the Digital HSS thread, led by Sian Bayne. The programme includes workshops and a range of other events. Just yesterday a group of us were discussing how to take forward this work, how to help groups gather around applications for grants etc., developing fora for postgraduates etc. If you have any ideas please do contact Sian and let her know.

Our first speaker is Tara McPherson who is based in the School of Cinematic Arts at USC in Los Angeles. She is a researcher on cinema and gender. Her new media research concentrates on computation, gender and race as well as new paradigms of publishing and authorship.

Scholarship across scales: humanities research in a networked world – Dr Tara McPherson, School of Cinematic Arts, University of Southern California

We are often told we are living in an era of big data, of large digital data sets and the speed of their expansion. And so much of this work is created by citizens – “vernacular archives” such as Flickr and YouTube. And those spaces are the data for emerging scholars. And we are already further along in how big data and linked data can support scholarship. There is a project called DataONE – Data Observation Network for Earth – a grant-funded project for scientists, the grand archive of knowledge. This is the sort of data aggregation Foucault warned us about! But it’s not just the scientists. In the humanities we also have huge data sets; the Holocaust testimony video collection is an example of that – we can use it as visual evidence in a way that was previously unavailable to us. Study of expression, of memory, of visual aspects can be explored alongside more traditional ways of exploring those testimonies. And we can begin to ask ourselves what happens when we begin to visualise big data in new ways. If communication is increasingly in forms like video, what are the opportunities for scholarship to take advantage of that new material, the vernaculars, and what does it mean that we can now have interpretation presented in parallel to evidence? Whilst many humanities scholars have been sceptical about the combination of human and machine interpretations, there are rich possibilities for thinking about these not as alternative forms but as a continuum. And we will see shifts in how we collaborate, in sharing the outcomes of our knowledge. Rather than thinking of our outputs as texts, as publications, we also need to think about data sets, about software. Stuff that exists at multiple levels, from bite-size records – metadata that records our work, for instance – to book size, to bigger. And we need to think about how we credit work, how we recognise effort, how we assess that work. How do we reward and assess innovation – how do we do that for research that may not lead to immediate articles but be much longer, much bigger in scale?

Going back to DataONE, there is a sub-project called eBird, a tool to allow birdwatchers to gather data on birds. They are somewhat ahead of the game in thinking about crowdsourced science. Colleagues at Dartmouth are starting to look at crowdsourcing data. My son plays a game that lets you fold proteins and contributes to scientific research. There are examples from Wikipedia, to protein folding, to metadata games, etc., which also challenge traditional publishing. The Shakespeare Quarterly challenges peer review with an open process – an often challenging form of peer review. Gary Hall and colleagues at Goldsmiths are also innovating with open journals. But we also see a move away from academic knowledge as something which should be locked away, a move away from the book as fetish object etc. In the UK we saw JISC fund livingbooksaboutlife.org – drawing on open access science but curated by humanists and scientists.

And we see information that can be discovered and represented in many ways. We can get hung up on Google or library catalogue search dynamics, but actually searches can be quite different. So for something like Textmap we get an idea of different modes of discovering, browsing and searching the archive – opportunities for academics to reinterpret and reuse data. The opportunity to manipulate and reuse data gives our archive much more fluidity. We can engage on many different registers. You can imagine the Shoah Foundation archive which I showed earlier having a K-12 interface, as well as interfaces for researchers, for publishers etc. Some may be functional interfaces but some may be much more playful, more experimental.

Humanities scholars and artists are helping to design some of these spaces. The tools will not take the form that we need them to as particular humanities scholars unless we are part of that process. We often don’t think of ourselves as having that role, but we have to shape those ways to communicate our data, to visualise it etc. Humanities scholars have spent years interpreting text, visual aspects, emotion, embodiment; we are extremely well placed to contribute, to help build better tools, better visualisations etc. There is no natural fit between the design of the database and the work of the humanities researcher. Data can have inconsistencies, nuances, multiple interpretations; they don’t easily fit into a database, but databases can be designed to accommodate that. Mukurtu (www.mukurtu.org) is an ethnographic database and exploration space; the researcher has worked with the World Intellectual Property Organization and indigenous groups to record and access data according to their knowledge protocols, reflecting kinship relations and codings of trust. We also have much to learn from experimental interactive design. The Open Ended Group (openendedgroup.com) do large scale digitisation. They have digitised a huge closed Detroit factory, and used 3D visualisation. It’s for an experimental art space, not a science museum. It’s a powerful piece to experience and inhabit, and it explores the grammars of visuality. It’s not about literal reinterpretation but creative and immersive explorations.

Another example: Sharon Daniel – a database-driven documentary from IV drug users in a needle exchange programme in San Francisco. 100 hours of audio to be explored through the interface, a work in Vectors. Vectors is a journal I edit, an experiment on the boundary of humanities research, visual interpretation and screen culture. Can you play an argument like a video game? Can you be immersed in an argument like a film? Another example here is an audio exploration of one of the largest women’s prisons in California, curated to make an argument about our complicity in the rhetoric of imprisonment by the state. The piece has a tree-based structure which allows exploration based on where you have been. You can navigate the piece through a variety of themes. You can follow one woman’s story through the archive in a variety of ways, and explore incarceration and the paradigms on which it depends. The piece is quite different to a typical journal article – it will be different every time. Which raises interesting questions for the assessment of scholarship. It’s fairly typical of what else is in the archive. We pair scholars with minimal or no programming experience with design and programming staff in the lab. A fantastic co-creative process, but not scalable, especially as many of these pieces are in Flash. But we have identified many research questions and areas for exploration here.

I work in a cinema school, looking at visual cultures. We found we needed tools; we didn’t want to build tools, but the scholarly interpretation needed by our scholars does not fit into existing rigid structures. Since we began to work in this area we’ve moved to thinking about the potential around vernacular knowledge, collaboration with the Shoah Foundation, temporal and geographical maps from Hypercities that let you explore materials in space and time. And from those partnerships we have formed a group, the Alliance for Networking Visual Culture (scalar.usc.edu/anvc), funded by Carnegie Mellon(?), with partners from the Internet Archive, the Shoah Foundation, traditional humanities research centres, design partners, and 8 university presses, to explore non-traditional scholarly publications – and those presses have committed to publishing these born-digital scholarly materials. And you can begin to think about scholarship across scales, with new combinations, ways to draw in the archives. Traditionally humanities scholars have had a vampiric relationship with the archive! We can imagine that in the world of Linked Data the round-tripping of our scholarly knowledge back to the archive might become quicker and more effective. So we’ve been building a prototype… this is a born-digital book about YouTube by a media scholar, which takes the form of YouTube. It’s an open access book but peer reviewed in the same way as any other. So we have built a platform called “Scalar”, a publishing platform for scholars who use visual materials. Anyone can log in, play with the software, try to create and engage with it. It’s connected to archives – partners, YouTube, Vimeo, etc. – and particularly to Critical Commons, an archive that includes some commercial materials (under US copyright law) and also links to the metadata around that material. And it lets you create different structures that allow you to take multiple paths through materials, through data – more like a scholarly form, but not necessarily in linear routes. So, for example, “We are all children of Algeria” by Nicholas Mirzoeff. He had a book coming out in print, but when it was submitted the Arab Spring took place and was very relevant to the book, so he created a companion piece. As you build a piece in Scalar a number of visualisations are generated on the fly to show you data on the content of the book: a visual table of contents, metadata, the paths, etc. Another recent project, “The Nicest Kids in Town” – on American Bandstand – includes video that couldn’t be in the book. Also Diana Taylor and the Hemispheric Institute.

Henry Jenkins and colleagues have an interactive book on digital cultures. Third World Majority is an activist archive with scholarly expert pathways through it, blurring the boundary between edited collection and archival collection. And the Knotted Line blurs public humanities and public curation. It explores incarceration in the US and is built on the Scalar API with its own interface, which is quite tactile.

These tools allow us to explore the outputs of scholarly research in different ways, the relationship to evidence, but also to think about teaching differently. See the programme in humanities and media studies, at the intersection of theory and practice, where students must “make” a dissertation rather than write a dissertation. See also Rethinking Learning – a series of cards and materials from which students could create peer-to-peer learning. It is also a dissertation. The author, Jeff Watson, will be in a tenure-track role in Canada in the fall. Susana Ruiz has created a dissertation prototype which is a model of learning around games and video archives. Both of these projects look at new possibilities for teaching and learning.

We are building tools here for humanities scholars, not “digital” humanities scholars. We build upon rich traditions of scholarly citation and annotation. Our evidence can live side by side with the analysis, which increases the potential rigour of scholarship; the reader has far more opportunity to question or assess those arguments. And the user/reader has an opportunity to remix. This isn’t about watering down our scholarship or making it ritzy; rather it is about making our scholarship flexible to an ever changing world and accessible in new ways.

Q&A

Q1 – Richard Coyne, Architecture & ECA) You raised the question of citation and academic and scholarly practices. Visual materials can be difficult to cite in that way.

A) We tried stuff out here. A Flash project is really hard to quote; accessing a specific audio file in Sharon Daniel’s work is really challenging. But in Scalar each object has a unique identifier and URI, you can export as XML and PDF, and you can use the API. It’s a traditional relational database with quite an idiosyncratic semantic layer on top. So you can build interesting stuff because of that combination.

Q2) You talked about emotion. There can be excitement around this sort of material, but for some there is a sense of fear around knowing how to engage, particularly when incorporating it into our own curricula and research. We can be quite traditional when we return to our desks. Any simple on-ramps to get through the fear barrier?

A2) It’s been a slog, even at USC. Dealing with visual rhetorics and argument. We have an institute in visual literacy for practice based PhD and interactive undergraduate and postgraduate programmes. We have guidelines and rubrics developed there for multimedia work and assessment and those have been useful rubrics for other schools in the university. At university level for tenures and promotion committee we have created criteria for assessing digital scholarship, the different ways to evaluate that work. The issue is less the form of the work but actually assessing the contribution of such a wide range of collaborators with very different skills. We have borrowed from the sciences but that’s not a simple mapping, there are issues. We have had only four digital media PhDs completed so far but all have gone on to good things. Visual temporality have traditions that it can draw upon… it will be an unevenly distributed move for next 10 years or so at least.

Q3 – Clara O’Shea, School of Education) the engagement with living archive, and the role of the scholar in that – what are the ethical implications? And what ways are your work changing the way scholars assess their own work?

A3) I’m just starting to look at assessing the role of the digital archive and the radical shift in purpose than the traditional archive. The library is about access, the archive to preserve. Digitally that split isn’t as relevant. Ethically it is very tricky though. The Shoah Foundation recorded materials long before the web, this was set up by Stephen Spielberg. Now they did sign away their rights to materials but we have been working with the board of the Shoah Foundation around what is and is not appropriate to do with the materials. There are projects for kids to remix video – so we have developed an ethical editing guideline for those students. At Dartmouth with that metadata game there has been a need to really think about the ethical and quality implications – exploring by layer, the difference of “expert” and crowdsourced, is a way that has been handled. In terms of scholars it changes the relationship to evidence and to scholars own work. So back to the Shoah material they have a policy of not providing transcripts as they want researchers to actually watch the video, to understand hesitancy and emotion. They have had scholars who have gotten students to make transcripts for them, analysed that and the Shoah foundation queries the analysis and whether scholars had seen the films. When those scholars actually watched the films their experience and analysis was quite different.

I was trained as a feminist film scholar at a time when it was hard to find the films. I had read about the films before seeing them, often long before, and you could be left wondering if the scholarship you had read was based on the same thing. Having the evidence there changes that, gives you a more direct relationship. Also writing small sections of arguments, writing more modularly – that is what you start to do rather than the long-form structures we are used to, and that can be really appropriate for humanities scholars in some areas.

And now many thank yous and on to breakouts. I am going to Breakout 2, chaired by Professor Robin Williams:

I will be talking about a project from the last three years looking at electronic literature as a model for creative innovation and practice. It’s mainly about networked communities of data analysts and practitioners. I was looking at ideas, concepts and new ontologies – of creativity in particular – and focusing on co-creation and collaboration. I say that is novel but really it isn’t: co-creation and collaboration pre-date the digital era, pre-date publishing, in craftsmanship traditions. I was looking at both amateur and professional artists and practitioners, in transnational, transcultural contexts. How do we use the internet to create, say, art? So this is about exploring process, creativity, community, these sorts of aspects.

We came across the idea of creativity as a social ontology. Creativity as “an activity of exchange that enables (creates) people and communities” (Simon Biggs). You need interaction in the making process of this sort of ontology. In the communities I engaged with, creativity was a subsequent activity of the collaborative community. They were interested in the making process rather than the objects of the making. Ethnographically I took a post-modern, multi-sited approach as a framework: follow the community; follow the artefact; follow the metaphor; follow the story; follow the life; follow the conflict; and I added the idea of follow the line (follow the rhizome). The communities are dynamic, changing, they move in different directions. The same with the voices – how many are there within those communities… The fieldwork was very nomadic, both offline and online. I started following one community, then found many others connected. I followed online but also offline (within Europe). I looked at a network physically based in London; other communities started in New Zealand and moved to Germany, Italy, etc., and online presences moved beyond this.

I was looking at the idea of a “creative land” sat between place, artefact and practice. The practices are connected through a community of bodies that make these assemblages happen. I looked at the theoretical approach by (?) of creative lands. I didn’t just look at the creation of objects but also the creation of communities, looking at the creativity of synergy and assemblage. So I looked at Furtherfield.org, probably the largest digital arts community in Europe. They have an offline gallery in London where I undertook fieldwork in January 2011, and this is still ongoing. The name comes from the idea of being further than the leftfield; their basis is political, rooted in the politics of the late 1970s, but also in criticism of the commercialism of the Young British Artists and Saatchi’s influence on the arts. I looked at the daily activities and how they communicated their activities, and it is very equally distributed, not hierarchical. For example one co-founder, Marc Garrett, talked of the community as “the medium” for this work. The artists involved could come from sound, to network, to cyberperformance – quite an open approach by Furtherfield. They have created the idea of DIWO – Do-It-With-Others – the making of art and artistic practice. This is defined on their website and clearly requires social interaction and collaboration as part of this work; it is about heterarchy. The DIWO ethos is about contemporary forms of collaboration, an open and political praxis, about peer-to-peer processes for learning, sharing knowledge and making knowledge. And the idea of media art ecologies – based on Bucht(?), who believes in a continuum of humans and environment, and on Gregory Bateson, who talked about ecologies of mind, as multifunctional and different ideas and cultures coming together to make an assemblage.

The particular projects using digital platforms tend to focus on social change, particularly environmental change. And there is a movement called “make-shift”. Two groups, one around the world, one in Exeter. They have cyberperformances. And they have an open source “App Space” performance space for video, for materials, tweets, etc. This is one kind of process, of use of ideas. The artists have particular materials for performance, including facilities to allow multiple audiences, multiple mixing, multiple points of access to be part of the performance. Another performance brings in comments from Facebook, as well as one performer’s belongings from the last 5 years, juxtaposing these with other forms of collection.

Another project: Read/Write Reality and their work Art is Open Source. Their idea is creating academies of knowledge. They share the knowledge of how to use open source tools to make art. So one project of Art is Open Source uses ubiquitous realities movies with WordPress. Their work is about co-creation and collaboration. I am also looking at AOS: Ubiquitous Pompeii through autoethnographic processes. This works with high school children in Pompeii, looking at designing and imagining possibilities to see the city in different ways, and co-creating and remixing material with schools – using ubiquitous technology to co-create cities. It is still about peer-to-peer processes, about co-design… We are seeing the process of working together. The largest and best known project of Art is Open Source is La Cura – the call for a cure for a brain tumour, sharing medical information, scans etc. openly on the web.
Q&A

Q1) We have a project on open source and film – how do people engaging in these works actually make money from them?

A1) Furtherfield use crowdfunding, education projects etc. to keep running. Art is Open Source runs educational and other projects and provides funding to make some of these projects happen.

Q2) You write in scholarly journals etc. Did the keynote give you thoughts about how the projects you look at might be written up in new ways?

A2) Yes, I think one thing that is interesting is the idea of being open source, but I would also like to see collaborative writing. The monograph is all about me. But I would like to see multi-voice texts and would like to look at this for sure.

Copyright, authorship and ownership in digital co-creative practices – Dr Smita Kheria

My work arose from Penny’s previous project. Some of the participants will be common to Penny’s presentation just now. My research interest is in exploring the norms of collaborative practices so far as copyright and ownership are concerned. I am a copyright lawyer and I am interest in how authors relate to copyright law in their practices. Copyright law poses 2 problems. Firstly how it conceives authorship and how that author is credited; and the second problem is how collaborative authors are perceived and how that works in practice, and particularly in emerging collaborative processes online.

So, just to ensure we are all in the same place: copyright protects the work, and it must be an original work. There must be some originality – some effort, skill and judgement. Usually the author is the first owner; they are the copyright holder and have the economic rights. In collaborative work there are particular assumptions. In co-authorship – for example distinct chapters in a book – each author has the rights for their contribution. Where joint authorship is perceived – a collaborative authorship – then all contributors have rights. But there are no distinctions within the concept of a joint author, and that has implications for the perception of authorship.

Last year Penny and I worked on a six-month AHRC project looking at the creation and publication of the “Digital Manual”, and looking at authority, authorship and voice, explored through interviews and focus groups. Participants were working with open source mechanisms. We asked participants – and creators – what the role and meaning of collaborative authorship was for them, what they felt about this, rules of attribution etc. And we found no set rules here, just some ideas of how they should perceive authorship. There were some commonalities across all four communities – which included make-shift (from UpStage) and Art is Open Source. What they created was built in real time, changing regularly, grounded heavily in collaboration. In the first case study, on Art is Open Source, we saw a very hands-off approach to authorship and ownership. They are a network; they provide open source platforms and software, and also ran a fake competition in the project we were looking at. They were clear about the ownership of the platform and the software – open source and GPL licensed. But as authors they wanted to disappear; they don’t want control, and do not mind what others do with the material they have created. So for instance a book which came out of the project was discussed: they felt forced by the publisher to be named on the cover. They did take responsibility for the process but didn’t want to engage in what was made with what they made available. They felt attribution was generally important, but they were not concerned about attribution of their own work.

This was very different to Sauti ya Wakulima. This is a collaborative knowledge base set up by a group of farmers in Tanzania who share materials gathered via smartphone. There is an ongoing community around farming practices, climate change, etc. The person who set up this project took a very active role in the content created and in the platform. He spoke to the farmers about the licensing of content etc., and it was made available under Creative Commons. His own perception of authorship was different: he did see himself as the author of the software, although he talks about using others’ materials and code. He was the author, but “not everything came from my own mind”.

Looking at UpStage, from make-shift: the platform is totally open. But what about the performances? Well, they left that to the performers. There was no licence fee payment option within the platform, for instance. Performance organisers used the term “brokers” of collaborative performances in the space but, when asked about the performance – the capture of the performance, for instance – they conceived of themselves as authors. They wanted to disassociate themselves from notions of authorship, but that was very much their own perception. And there was ambiguity about contributed images around performances as well.

And the final case study was FLOSS Manuals – a collection of manuals on free and open source software. It is entirely open and editable, a collaborative publishing platform, with a lot of manuals there. When editing the videos we had taken in this work I actually used one of their manuals for my own work. The platform is open, but what about the content? The platform takes a very active role in the content. They have clear licensing, using the GPL. Anyone can publish, sell, or reuse content. Within the community creating the manuals there was no consensus; it was imposed by the platform owners. And the creative community here radically expanded attribution – anyone who had done anything at all (a single letter, a font face, etc.) was credited. There was some uncertainty when we spoke to them, as the community was unsure about attribution and licensing.

This was a small study but it is clear that collaboration and co-creating has huge implications for perceptions of authorship and huge relevance for copyright law.

Q&A

Q1 – Ewan Klein, Informatics) A comment more than a question: GPL does not let you do what you like. But do you think that Creative Commons would have provided a trail of attribution in the right way?

A1) Yes, Creative Commons would allow that, but not all of those we spoke to had the same feeling about attribution – about how work should be attributed and whether it should be attributed at all. And under the law some contributions may not be a copyright work (e.g. one line in a manual). Here attribution and copyright ownership would be split. Do you attribute the collective or the individuals? The farmers went for collaborative attribution… that solves the problem but not the issue of who should be attributed.

Q2 – Chris Speed) There is something here to do with reciprocity. In terms of the commons, in common land… there are implicit models of not grazing all your sheep… could that translate to copyright?

A2) Reciprocity did come up as a suggestion for a basis on which attribution could be made. But how do you assess reciprocity? This comes back to Robin’s question about funding. All of these projects were started with grants, and thereafter funded by second jobs, projects, PhDs, and voluntary contributors. So if someone comes in voluntarily, is attribution the least you can do (e.g. FLOSS)? Or if they get a performance out of it, is that reciprocity enough? Now, these were very different projects and that needs bearing in mind, but those differences were interesting.

Simon: There is a model of attribution in open source software. In open source films we see this work at first, but it falls apart at the interface between enthusiasm and creation, and longer-term sustainability.

Penny: FLOSS is an interesting one. This is sort of a benevolent dictator model. He was reluctant to be involved. They do not have money, looking in different directions… This open source, almost utopian community have realised that they need funding to continue.

Smita: And they had an issue. They could publish those manuals, but so could anyone else. It would be good to go back in a year’s time to see what has happened.

— And a break whilst I spoke at the Scottish Crucible —

“It’s a computer m’lord”: law and regulation for the digital economy – Prof Burkhard Schafer

I have come in a little late here, but Burkhard is talking about new forms of data, such as monitoring data on older people – for the monitoring of their health, but with potential ethical and legal concerns. What if you use technology to help people with their memory – what if it has legal issues? What if it leads to a criminal investigation? New forms of data collection invalidate traditional metaphors, traditional divisions of law.

I am based at the law school, notoriously the scene of a crime – the body snatchers of Edinburgh. The law tried to manage the supply side, and that led to…

Regulation through Architecture (Larry Lessig) – they restricted access, they built fencing around graves, they patented thick metal coffins that allowed you to view the decomposition before burying, to avoid body snatchers. I call this DRM (Death Risk Management!). But this does relate to the loss of things that are precious. There was a case of a father who gave his daughter, who was dying of cancer, a phone with an unlimited voicemail box. But the phone was in her name, and when she died the messages were deleted. He took legal action, but this is not an easy case.

Whose assets are they? Whose privacy is at stake? What happens to digital artefacts after death? This is complex. This work is part of a multidisciplinary research project, not just informatics and lawyers but anthropologists, sociologists etc. We came up with radical suggestions far from those of the judges. For instance the “Dead Man’s Switch” – a way to wipe your hard drive and remove embarrassing stuff on your death. There were joke companies promising to look after pets in the case of the Rapture, to ensure your pets were taken care of by good atheists. But there are serious questions about such a service… about legal liability when taking action on behalf of a dead person.

What about disintermediation? The body snatchers were banned, so people cut out the middle man – killing for bodies rather than digging them up. But could it happen again? Well, child trafficking and sex abuse sit in some of the same places of preying on the naive. We work on this area, looking at ways to understand the roles of social workers, teachers and police, so that they can extract the information they need to evidence a case without breaching data protection law or compromising privacy. This is one of our more technical projects, around encryption. And this includes consideration of risk to informants – what can be shared and how – to make sure that necessary data is shared without exposing those in responsible roles as informers on their clients or communities.

Robots bring deep-seated problems. They will be something more than machines. They change how we think about or interact with technology. To give an example: is it appropriate legally, ethically… to give someone suffering with Alzheimer’s a robot that speaks like her husband, even if it comforts her? It may be justifiable emotionally, but it is a massive deception. Similarly, is it ethical to have robots looking like people – should that be another law of robotics?

Meanwhile we have SenseCam devices that automatically take images of the wearer’s day. Alzheimer’s patients have been given these to go through their day and work through the images with their support worker – to remember what they have done; this seems to have benefits for retrieval. They use these devices on dogs too (for more fun purposes). Legally… well, in galleries, theatres, movies… photography is banned, but should there be an overriding right to take pictures? In Germany public buildings are copyrighted and images cannot be taken. We let guide dogs go where other dogs cannot; maybe this is a similar justification.

And a final example: David Valentine records his performances “Duellists” and “The Commercial” in public space – and makes demands on the council for the CCTV films of his performances, claiming his performer’s rights. Legally in the UK this is complex!

Q&A

Q1 – Jen Ross, School of Education) In the recent release of Google Glass, some restaurants and businesses banned it, and I’m wondering about the social response to, and impact of, these technologies.

A) Google “St Patrick’s Day Google Glass” for an amusing example. One of the concerns I have… these are being designed in health and medical settings but are being promoted for live blogging. This is sort of a trojan horse for changing privacy laws and expectations. “Private” has its origins in the Latin for robbing something from others; we expect to be able to be alone. It’s fine if we are OK with having images taken etc. But without the ability to be alone – if privacy is a public good, not a private good – then we may not want people to give it up so easily. It becomes very complicated. Lots of frivolous uses are trying to get public acceptance on the back of essentially medical technologies.

Q2) I worked on a project with Charles Raab on data sharing. A thing I found in that context is that once you’ve released data into that space… you’ve talked about the advocacy role of the social worker… but once the data is released, how do you retrench into your social role?

A2) It’s not surprising that in case of child abuse evidence was there but have not been shared. Rules have been changed but it still doesn’t work. People find a way around that. If I don’t trust the recording mechanism I don’t share the data. If I’m concerned about use of my data then I don’t write them down any longer. I don’t think all the evidence we’ve found from the social scientists, the political scientists is that technology doesn’t change that. People respond to requests in our approach, not dumping all their data as they just won’t comply in any manner of creative ways. And it’s a distributed system, rather than centralised for the same reason.

Letting your digits do the walking: on the road with Ben Jonson, 1618 & 2013 – Prof James Loxley and Dr Anna Groundwater

We are at the beginning of our digital journey in comparison to others who have been talking today. I will tell you a bit about the manuscript we are looking at, its significance, and the journey we think it could take us on. In 1618 Ben Jonson walked from London to Edinburgh on foot – an extended walk for which there was little evidence until James Loxley came across an account by a walking companion: a treasure trove of primary evidence for researchers, and a window into life along the Great North Road. So I will talk a bit about how we can recreate that world, understanding it using primary and digital resources.

My experience of digital online resources as a user was as a beginner. I physically dug around in regional and national archives along the Great North Road. Digital catalogues have really helped me to do this; they have allowed me to achieve much more, and in a much more cost-effective manner. Tools like EEBO have helped me speed up the collation of materials online, to gather biographical information alongside literary texts. Most apposite here is EDINA’s Digimap; I’ve been using it on a daily basis, as a way to reinterpret and consider networks and social spaces in early modern Britain.

And the literature allows us to understand social spaces and social practices. We can look at practices of hospitality at that time, the experience Jonson was having. Welbeck Abbey, for instance, is discussed in the manuscript, with specific descriptions of taking over the house from Sir William. There is also mention of Mr Bonner, the Sheriff in Newcastle. Some of this text we have been able to verify. We have been able to use the OED to understand some of the terminology, e.g. hullock, a wine for very important people.

The texts also provide a history of cultural interests, the antiquarianism of tourism and travel: of the places visited, of the castles, buildings and grand houses along the way, and the route taken there – from Belvoir Castle through to Pettifour Well in Kinghorn. Edinburgh Castle, for instance, was one of his stops. We can use art and images of that era to recreate that voyage. We can physically make these journeys, but we can make these journeys digitally too – the digital journey remaking the mental and physical connections of that historical journey.

Over to James: I will touch on the dimensions of the project which have emerged as we have been going along – dimensions of which we have become aware. This was a digital project right from the start. Since we have been talking about the project and the manuscript, many have asked how the manuscript came to light and why this has happened now. The story is a disappointing one. In fact it involved me sitting down to consider the potential of a set of digitised catalogues, done by the National Archives, which are catalogues of archives around the UK, in a project called Access to Archives. This allowed discovery of collections and the structure of collections. I was looking through materials and how they worked, and I was able to find a literary manuscript and where it sat in the collection… it seemed to refer to Ben Jonson, but the spelling was such that no one searching would have found it. There was no rummaging in archive attics. But we have been further exploring digital dimensions.

Because we have a journey here – because it is not like Boswell’s account of Samuel Johnson but is instead a list of people, places, food, etc. – we can see dimensions that are not classically those a literary scholar is looking for: what we see as a quantifiable text, I suppose. For instance, the account gives the time a journey began, the time of arrival, and the locations. From those we can work out a distance of 9.5 miles, a time of 3 hours, and thus what the walking pace was. Jonson seems to be at about 3.17 mph (the modern human average is 3.3 mph). An interesting one, since Jonson in his own notes says he is around 20 stone. Maybe something is not quite right there?

We don’t know who wrote the account, we have candidates but the companion is still anonimous. We can work out the height of the companion using surviving architectural drawings of a venue visited. We can work out that he is 5’5!

We are inevitably working with small data here. We have places, times, distances, speed etc., which allows us to visualise the journey in ways we maybe would not have been able to before – a manifestation beyond the annotated text. We’ve initially been exploring that in terms of a map (see blogs.hss.ed.ac.uk/ben-jonsons-walk). This initial map on our website gives a sense of the places visited (via map pins), and on those pins we include the time they were there and notes, which are growing as metadata (excellent sweet water at York!). This is a starting point to begin to map out the data the walk has presented us with. This is really at “rehearsal” stage. There is a performative aspect to this walk – Jonson is greeted by crowds, by property owners, etc. People have told us that we must re-enact the walk! So we are doing a virtual walk: from 8th July “Ben” will tweet in real time on Twitter, and that will be linked into the map and the information on the blog site – an interaction between those channels. Hopefully Ben will get into conversations as he is on his way; that’s part of what we’d like to do!

We are already thinking about the possibilities of expanding this for future projects. There is an example called Mapping the Lakes: a team at the University of Lancaster tracked Thomas Gray’s and Coleridge’s journeys around the Lakes, created with a GIS to visualise the walks. They have mapped obvious markers but have also tried to map more subjective things such as the mood of the walk. You can look at the walks separately or together. That seems a way of thinking about the literary journey that we would like to develop for ourselves. We would like to think beyond the map we are “performing” this summer… There is clearly an interplay between sites and routes… some are easier to map and work out than others. In some places there was a guide to take them on their way – it is very hard to find the obvious route. We are thinking also about how the mapping of the journey could bring in different possibilities, views, prospects, the meaning of sites, etc. We haven’t represented those on the map, but we would love to, particularly to compare their walk to modern walks. How do different models of the walk undertaken “for the sake of it” compare? And how can we take that walk, preserve that experience, feed in other materials etc.? We hope to be able to approach the AHRC for follow-on funding, and we would love to talk to anyone interested in the spatiality of walking who might be interested in engaging.

Q&A

Q1) A connection: Joseph DeLappe, an artist in the US, recreated Gandhi’s walk using a treadmill hooked up to a Second Life avatar, and reproduced the march there… a possible digital precursor.

A1) An interesting possibility – we could perhaps get gradients in. There are analogues or comparators out there to explore. There is a deepening attention to, and intensifying interest in, the process and practice of walking, and how that carries with it expectations and kinds of appropriate representational modelling. We want to do some justice to spatiality without assuming a single model is all that we need… we need to weave different senses of the spatial within the literary walk.

Q2 – Rocio) A comment on the idea of the walk: you could make it a collective walk – ask people in the surrounding areas to do a bit of it, make it interactive, and have them add their part of the journey… if you can’t do it all yourself.

A) Exactly what we hope to do. We want to bring in local history societies, walking groups etc. on the old roads and feed that in.

Old light on new media: medieval practices in the digital age – Dr Eyal Poleg

We are working on a project called Manuscript Studies in an Interoperable Digital Environment, funded by the Mellon Foundation. We have found interesting parallels between the reading of medieval manuscripts and modern digital practices. Perhaps we can learn from medieval practices to think about developing digital practices. In many ways printed books are an interim step between practices we see across old and new media.

Let’s start with hypertext. Hypertext is very common in medieval manuscripts, particularly in the Bible. The problem with the New Testament is the Gospels: how do you jump from one to another? You can explore a version at the University of Toronto, for instance. In the manuscript era we get the Eusebian canons: in the margins of each episode are the canon numbers, and you use the tables to jump from one gospel to another – very similar to clicking on a link. This starts something new in exploring the text.

In the 12th century there is a beautiful text in France. It is a working manuscript. It has physical cut and paste. It shows the authors wrestling with technology, experimenting with ways of navigating the text, inventing references. And they tie that to the “late medieval Bible” – the Gutenberg Bible is a replication of one of these Bibles. The innovation of these Bibles is evident in the chapter divisions; previously there were no divisions in the text. From 1230 onwards, with the help of Stephen Langton, the Archbishop of Canterbury, we have the chapter divisions, and we begin to get Book and Chapter divisions. This fits into the mindset of Christian exegetes at the time, of the linkages within the Bible. And this linking took off like wildfire – the most efficient way to link and navigate. When we think about hypertext in the medieval period we also have to think of the web of allusions that people had. So when reading a text, for example a psalter, there is an interaction of text, image and sound. For monks, reading the text created a world of allusion. So we can, using digital technology, replicate that to an extent – by adding the musical strata of the text, intricate links that evoke the memory of the men and women who would read these texts.

The wiki is a structure we also see in medieval texts. Even now the interaction one has with a printed book is limited. In the Middle Ages books were different: they were communal objects, even for the monks. Annotations were seen to add value to the text – reading was a communal project. You can read generations of commentators through the margins of the text. The way it took place – and this is worth considering – is by giving ample space to interact, to comment on the text: space deliberately left, interlinear and marginal glosses, spaces for comments and annotation. You can see the different hands, texts and monks reflected in the communal commenting on the text, and you see some commentators responding to each other. In one manuscript in Glasgow an O character has been vandalised; a later reader finds this offensive and erases it for future readers… so how much interaction with, erasure of, and change to the text do we allow readers? That would have been a nice image…

There is also a sort of open code emerging in manuscripts. A printed book is not that open, but looking across copies of the same manuscript text we see differences – some are errors, some changes by the scribe. In the Middle Ages the scribe assumed the text could be faulty and tried to correct it; the text was in flux. Scholars use this to reconstruct the text, and we can also explore connections between one manuscript and another. But of course, what is a text? What is a changed text? What is a fixed text?
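
For a flavour of how such comparison across witnesses can be done computationally, here is a minimal collation sketch using Python's standard difflib. The two "witnesses" are invented readings with typical scribal variants, not transcriptions of any real manuscript.

```python
import difflib

# Two hypothetical witnesses of the same passage, with scribal variants.
witness_a = "in principio creavit deus celum et terram".split()
witness_b = "in principio creavit deus caelum et terra".split()

# SequenceMatcher aligns the two readings word by word and reports where
# they diverge -- the core operation behind a critical apparatus.
matcher = difflib.SequenceMatcher(a=witness_a, b=witness_b)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(tag, ":", witness_a[i1:i2], "vs", witness_b[j1:j2])
# replace : ['celum'] vs ['caelum']
# replace : ['terram'] vs ['terra']
```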

And finally we have non-linear texts, which can also now be created in digital environments: not necessarily beginning, middle and end, and navigation can be very different. For instance, a medieval teaching manual uses images and associated ideas to explore its subject, but these are non-linear – the images point us in directions within the text. And this ties into a late medieval aesthetic vision of allusions: the idea of a network of allusions.

Q&A

Q1) This is a fascinating talk; there are several very orchestrated ways to explore medieval manuscripts that this relates to. You touched on websites reflecting print books, not necessarily taking advantage of the multimodal opportunities of the web.

A1) That was the starting point of the project. Mellon saw that medieval manuscripts were increasingly being digitised but that people were using them as printed texts, and it wanted to look at new ways of working. So, for instance, you can see the Summarium, a prototype that uses TEI to annotate a non-linear version of the texts in a communal way.
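
I haven't seen the Summarium code, but as a toy illustration of the kind of layered TEI annotation described here – multiple readers glossing the same line, much like marginal commentary – a fragment can be parsed with Python's standard library. The markup and annotator names below are invented.

```python
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"

# A toy TEI fragment: two annotators gloss the same line, much as
# medieval readers layered comments in the margins of a shared book.
doc = """
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body>
    <l xml:id="l1">Beatus vir qui non abiit</l>
    <note target="#l1" resp="#annotator-one">Opening of Psalm 1.</note>
    <note target="#l1" resp="#annotator-two">Cf. the gloss in MS B.</note>
  </body></text>
</TEI>
"""

# List every annotation with its target line and the hand responsible.
root = ET.fromstring(doc)
for note in root.iter(f"{TEI_NS}note"):
    print(note.get("target"), note.get("resp"), "->", note.text)
```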

Q2) Is there a connection between the idea of hypertext in medieval texts and the role of the church as an information system? There have been times when the physical church acted as an information system for state information, etc. – I'm not sure if that is true of the medieval era.

A2) In the Middle Ages, unlike the Reformation, this is less about enforcement and more about the reality of texts: you live the texts. Monks especially live and breathe the text and its information. You wake and pray seven times a day, you are surrounded by images, you are embedded within the textuality.

Q3) Do you find any dilution of the texts in transferring them to digital technologies? I am sure that institutions are very careful about this.

A3) This is not an issue for us. The texts are not of interest to religious institutions today. Very early or very late texts might be an issue, but these are not.

Q4) Have you ever come across work on the reception of Roman law in the Middle Ages in codices? I think the author came to similar conclusions, analysing legal texts as hypertext and wikis – a secular model of the same phenomenon.

A4) I wasn't aware of that, but I would be interested to have the references. The manuscript texts were a little behind the legal texts, but it would be very interesting to compare.

And now on to the closing from Sian Bayne, saying that it really has been a day of new ideas, very inspiring. And thank-yous to the audience, the organisers and, of course, all of our speakers.

 

Jun 232011
 

Today I will be liveblogging the ALPSP (Association of Learned and Professional Society Publishers) Making Sense of Social Media Seminar, which is taking place at the British Institute of Radiology in London (where it's crazily sunny today, in stark contrast to Edinburgh yesterday).

Our chair for today is Katie Sayers, SAGE Publications, and the overarching heading for the day is:

“I have a Facebook group for Twitter users that Tweet about podcasters that talk to marketing bloggers”

The programme is looking very much at strategy and more sophisticated ways to weave social media into content and other marketing activities. I’ll be adding my notes to each session as it takes place. The hashtag for today is #ALPSP.

Introduction from Chair – Katie Sayers, SAGE Publications

Katie is welcoming us to the day:

There are lots of speakers from a variety of different publishers. The intent is to take you through various social media strategies and how they have been executed, and we will be finishing with talks on metrics. We have 10 minutes for questions towards the end of the day, and I would encourage you to be as transparent as possible and make the best use of this session.