Today I’m liveblogging from the CIGS Web 2.0 & Metadata Annual Seminar taking place at the National Library of Scotland, Board Room, George IV Bridge, Edinburgh. Usual live blog rules apply – the notes will probably change for corrections, tidying, pictures etc. [update: pictures are on Flickr (and will be added soon)] but should give a sense of the day.
OK we’re starting off with coffee, registration and biscuits and such. Below is the programme for the day – I’ll be filling in the gaps with notes throughout the day. The hashtag for the day is #cigs11.
Graeme Forbes, current chair of CIGS, is introducing this, the fourth CIGS Web 2 & Metadata seminar. Our first speaker is Gillian Hanlon of SLIC.
Ask Scotland / Gillian Hanlon, SLIC
Gillian will be talking about Ask Scotland, a web based service which enables direct contact with a librarian to ask questions, sharing knowledge and collections with a global audience. This service is based in 17 different public libraries throughout Scotland and started as a local history service – it has now expanded but Scotland and Scottish History remain an important area for questions and knowledge sharing. Some specialist libraries have now joined Ask Scotland and the first Further Education library joined the group recently.
Ask Scotland also connects into Scotland’s Information and CAIRNS – these are repurposed through maps and find-a-book icons in Ask Scotland so that these tools can be used by users, not just librarians. The intent is a one stop shop for users.
The service focuses on an email form. This goes to whoever is monitoring the service at a given time (the libraries take turns to do this) and they can forward it on if needed to other libraries. There is also a live chat service which is staffed 9-5 Monday to Friday (again this rotates around the libraries). When the site was re-launched an “Answerbase” was added – a first for sites using the QuestionPoint software. The Answerbase includes all of the answers that have been provided through the service. There are about 150 records live at the moment but this number will expand as things progress. The Answerbase page lets you browse recent answers, subscribe to the latest answers (implemented at a user’s request), and search for previously answered questions. We have had a huge amount of traffic on the site since it was launched and it shows the value of opening up this data. The order of questions is entirely chronological – no scope for sorting or narrowing down other than the keyword search at the moment.
So, this brings us to the point of metadata, so that people can find and access answers as best they can. At the moment we are using OCLC’s QuestionPoint and we’re rather tied to what can and cannot be done with this. There are both required fields and optional fields for questions. The optional fields include keywords, which are a major part of the metadata used in the Answerbase. Most of the required fields are more functional, about the questioner and the question.
The metadata categories on QuestionPoint can be a bit limiting. The geographic code is “Scotland” – clearly a useless level of detail for a Scottish service, which is why we’ve had to take some different approaches. So for example if we search for Dumfries, it brings back a question on “what does the word rig or rigg mean” – to be fair that answer does mention Dumfries. But if you search for Robert Burns you find a fairly random set of results. The mechanisms fall a bit short of delivering on user needs.
Well we looked at Dewey Decimal Classification (DDC) and we looked at Library of Congress Subject Headings (LCSH) and we are currently thinking about which one we should apply.
The DDC is hierarchical, which is helpful for reference enquiries. There is also some consistency, as Scotland’s Information used DDC summaries. But it’s only partially available as linked data (abridged ed. 14).
LCSH is entirely available as open linked data but, crucially, does it have sufficient depth for Scotland?
We’ve had some other suggestions about what we could use. Also the idea of using a folksonomy, or combining these with a folksonomy.
I mentioned the consistency with Scotland’s Information – there is a tag cloud on that service so perhaps we could have that on Ask Scotland as well. One of the other ideas was looking at the Bubl service – basic headings for browsing through answers etc.
Web 2.0 & Ask Scotland
We are tweeting and using Facebook to publish data snippets from the Answerbase. But this is quite one directional and we wanted to explore interactivity more. And we wanted to promote the libraries and library resources that actually support Ask Scotland more. So we are now linking to resources, we are asking questions, we are trying to make them more engaging snippets. For example we asked about Sir John Pringle – we got lots of answers back about crisps. Nope. The audience here suggests knitwear. Also no. He was a military medic (and a cheer for the man behind me who knew that!).
Facebook just republishes the tweets, we haven’t used it in any more detailed ways but it’s a way to reach those not using Twitter.
We use a scheduler for the Tweets. This makes the whole thing much more manageable.
The site is: www.askscotland.org.uk. Twitter is @Ask_Scotland.
Q) How do you monitor the Web 2.0 activity?
A) We monitor our stats but it’s hard to tell sometimes which type of promotion has which impact. But we get questions back through Twitter. Press releases you can track but social media is always there so hard to tell. But a blog or web mention can produce spikes that you can track.
Q) I was interested in what you were saying in the contrast between DDC and LCSH. The one thing about LCSH is that you can of course add to LCSH and I am keen that we increasingly develop LCSH to cover Scotland and Scottish issues. So it has to go through a Library of Congress process (and we’ve had occasional spats in the past about Scotland’s standing in the world I guess) but there are mechanisms to get Scotland into LCSH.
A) Well perhaps this is the sort of service that can demonstrate that need.
Q) Why is the ranking in the Answerbase so odd?
A) There is an issue with the fact that you have a local knowledgebase, and there is a global one. So we have librarians of different knowledge and abilities completing the records and metadata – we have a guidebook but there can still be training issues. Maybe records don’t appear because of how the records were completed. Two issues there. We’re not far enough down the road with that but we want more information about how the ranking and the weighting work. In fact I think that they thought that you could separate your knowledgebase, but for Ask Scotland it became clear that that didn’t actually work. But they have been very responsive so far, so I’m positive that we’ll be able to find a way to do this.
Q) Have OCLC been looking at developing Question Point so that people asking questions can ask as well as tag questions with what they think is relevant?
A) Not that I know about, but it would be consistent with the approach in some of the other OCLC products.
Web 2.0 in action : experiences from the University of Huddersfield / Dave Pattern, Huddersfield University
I’ll be talking mostly about usage data rather than metadata. Slides will be at http://eprints.hud.ac.uk/9629/
I’m not actually a librarian, and I’m not a cataloguer…! Cue “The Joy of AACR2” cover to win us over. Dave prefers to think of himself as a “Shambrarian” – he knows some stuff about MARC and libraries etc.
A few years ago I saw a presentation by Cara Jones, University of Bath, and she asked lots of questions so I’ve pretty much stolen that idea. [here we run through lots of questions, I’d type them but I’ve had to keep my hand in the techie air].
Dave is giving an overview of web 2.0 and what it’s for – for editing and interacting with stuff. For instance to prepare for today I looked up the venue on Google Maps and did my virtual route from the hotel. I had a look at Wikipedia where I discovered that it’s almost the second anniversary of the great pipe disaster! I can follow @natlibscot on Twitter. I can look on Flickr for pictures that have been geotagged with Edinburgh. I can look at images tagged with Edinburgh. These are folksonomies and not formal ontologies. Flickr uses the most popular tags for Edinburgh to show related content too. If I blog today and want a CC licensed image there are loads of tagged available images on Flickr. It’s all second nature now, this web 2.0 stuff.
What happens when web 2.0 meets libraries? My first experience was hearing about John Blyberg at Ann Arbor District Library. Ann Arbor was redoing their website and John was a fan of Web 2 and wanted to add loads of stuff in. The whole site is like a blog, you can comment on anything. Loads of their staff members – including the director – blog on the site. You can, as a patron, post comments to that director, which few libraries enable. The catalogue has book covers, reviews and virtual graffiti on card index images. Great stuff that’s fun, playful, if a tad pointless 😉 So I got very excited about all of this stuff and got pushy with management at Huddersfield. I didn’t get blocked exactly, but traditionally we’d bought products from vendors and done things that way. In 2006 I got together with a lecturer in the school of art and design (who used blogs and wikis) and we introduced the library to web 2.0 and started to get support to try things out.
So back in 2005/6 we didn’t have a spell checker – loads of keyword searches came back with no results. So we plugged in a spell checker and looked at the most popular keyword searches to improve results. We had 10 years of circulation data that we weren’t using. But we thought these could generate recommendations, and as they often come through reading lists it’s not bad, though you get a few random items. That’s been very popular. We’ve tried to do bespoke ones as well – when you log in you get recommendations for what you might want in the future – and we want to work more on that.
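As a rough sketch of how this kind of circulation-driven recommendation can work (the sample data, the function name and the minimum-count threshold below are all invented for illustration – this is not Huddersfield’s actual code):

```python
from collections import Counter, defaultdict

# Invented sample circulation data: (borrower, item) pairs.
LOANS = [
    ("s1", "A"), ("s1", "B"), ("s2", "A"), ("s2", "B"),
    ("s3", "A"), ("s3", "C"), ("s4", "A"), ("s4", "B"),
]

def recommendations(loans, min_count=2):
    """Suggest items borrowed by the same people, ignoring pairings seen
    fewer than min_count times (a threshold like this also helps privacy)."""
    by_borrower = defaultdict(set)
    for borrower, item in loans:
        by_borrower[borrower].add(item)
    pair_counts = Counter()
    for items in by_borrower.values():
        for a in items:
            for b in items:
                if a != b:
                    pair_counts[(a, b)] += 1
    recs = defaultdict(list)
    for (a, b), n in pair_counts.items():
        if n >= min_count:
            recs[a].append((b, n))
    for a in recs:
        recs[a].sort(key=lambda pair: -pair[1])  # most co-borrowed first
    return dict(recs)

# "People who borrowed A also borrowed B"; the rare A/C pairing is filtered out.
```

Because so much borrowing is driven by reading lists, the strongest pairs tend to be course-relevant, which fits Dave’s observation that the results are not bad even if a few random items slip through.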
Often the new books lists are a bit varied, so we thought we could use usage data and loans to come up with bespoke new book lists. So for instance there is a feed for a Journalism and Media course based on previous borrowing – we created a Dewey profile for that course, and that classification combination drives the feed of new books.
We had some dead space at the front of the catalogue so we fired the most popular keyword searches onto the front page. In September that’s a hugely popular way for students to find their first way into the catalogue. The other thing we’ve done is to display common words combined with a keyword – for instance if you search for “law” you can click to combine it with other popular combinations. Again it’s about making the catalogue more useful.
Recently trialling journal level recommendations. Students often don’t realise that journals are on multiple platforms and what is actually available, those recommendations are increasing usage.
Borrowing statistics can now be viewed by staff to get a good idea of usage – whether demand is waning, if an item can be weeded etc. We can also see trends and spikes in the usage of books. So we can compare an item against the expected averages for all items – and look at the differences.
Like most libraries we’ve done blogs. The most popular is the Electronic Resources Blog – it highlights new purchases, downtimes, issues etc. And various other libraries subscribe to the RSS feeds. We also feed that RSS into other places. We have a wiki and can automatically drop a feed on each resource into its wiki page very easily. We also have Summon – a research search tool – and we surface issues from the blog there too. Don’t expect people to find the blog, feed it to them.
We have lots of stuff we’ve tried but maybe not put out there. It’s about trying stuff out in Web 2.0 – you might use it, you might not.
For instance we have Keyword click stream data. So these are keywords used to find a title. We don’t have a use for it but it’s interesting.
We have a colour sorted book cover tool – using the Amazon cover service and a colour selector – so students can find stuff by colour. Useful? probably not…
Cover search. Search for an image, find covers that look like it. So a Renoir image that maps to Renoir book covers. Not much use (Madonna looks like John Sergeant! Charles Darwin looks like a diabetic foot!) but fun!
One of the more useful things we can do though is track usage data and measure impact.
One of the interesting things is the change in borrowing trends – a lot of stuff we started doing in early 2006, looking at the number of unique titles circulating. We have seen a slight increase in the variety of titles – this could be down to some of the serendipity in the catalogue. There is also a slight uptick in the number of items borrowed per student per academic year over the same timeline.
We also had to do a Quality Impact Assessment on the library opening hours. The data we gathered for this surfaced the high number of students not using the library resources at all. So we started looking in more depth at this – at students who do not use the library but we think should be – with a bit of intervention too. We’ve been doing that for about 5 years now, so we can see results and changes.
We also compared library usage and final grade – it’s a question I wish we’d thought of earlier. So we had about 2 years of data in the student records where we could compare grades to item loans, to MetaLib logins, to visits to the library. Interesting. You’d think that those that came in the most might get better grades, but it’s fairly level – in some courses more visits go with better grades, in others the reverse is true. The interesting thing is that those borrowing items, and particularly those using online resources, do seem to get better grades. We’re trying to gain a deeper insight into how students use the library.
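The loans-versus-grades comparison boils down to a correlation over per-student figures; a toy sketch with invented numbers (not Huddersfield’s data):

```python
def pearson(xs, ys):
    """Plain Pearson correlation: +1 means the two measures rise together."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented per-student figures: item loans and final marks.
loans = [2, 5, 9, 14, 20]
grades = [48, 55, 58, 62, 70]
```

Correlation of course says nothing about causation – which is exactly the caveat in the talk, where library visits show no consistent relationship but borrowing and e-resource use do.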
There is this huge pot of usage data and there is a real trend to use usage data (Tim Berners-Lee got his TED crowd shouting “Raw Data Now!”). So in December 2008 we made our circulation and recommendation data available on a CC licence. A few days later a US librarian made a better linked data version of the data. “The coolest thing to do with your data will probably be done by someone else” – Rufus Pollock, Open Knowledge Foundation. Paul Walk also talks about the importance of the state of mind in which you make data available – and that you do make your data available.
Some of the stuff done and visualised with the data is quite diverse – Art and Design students had a go at this and really interesting things emerged, and it also changed students’ perception of the library.
Some of this work has gone into the JISC MOSAIC project, some is also feeding into the Mimas SALT(?) project. The more we can get out there and share our data, the better ideas we can find to use it. The University of Minnesota and the Code4lib events look at this stuff. Seattle Public Library has a “Making Visible the Invisible” data visualisation shown in Seattle [note to Dave here: yes, that is still there!].
ExLibris is working on recommendations. Harvard Law Library is doing something called the LibraryCloud project which will have an API and shared data and tools. There is also a JISC strand on this – #jiscad #lidp – about 10 usage data projects funded by JISC.
Iman, who did that first talk with me in 2006, has his own company now, LemonTree, and he wants to look at libraries and games (library.hud.ac.uk/lemontree @librarygame) – he wants the Farmville or Gowalla of the library through usage data!
Q) Is keyword the only access to the catalogue?
A) No, other access is possible but largely staff and academics use the more advanced searches but students tend to prefer Keywords.
Q again) Well maybe if you just had a title search you’d eliminate some of the work you’re doing making it easier for students. We’re not Google, we’re not as good as Google.
A) I fully understand that, but our students use keywords. We may be banging our heads against walls trying to get them to do otherwise.
Q again) But if you don’t make title search possible you’re guaranteeing that.
A) Search is good if you are looking at keywords, not so good with known items. There is a tension there but I’m not sure of a good resolution yet.
Q again) Google makes it look easy but it gives you lots of help.
A) Yeah, I think much of what we are doing is getting round some of the issues with the catalogue we have which is a late 90s catalogue system. I think VuFind, which is newer, also hides alphabetical search etc.
Q) You can do all the standard searches at Huddersfield too, just to clarify. It’s there underneath the fancy stuff. What I did want to ask is that students using the catalogue are always a bit horrified by the idea that recommendations on tiny courses might expose individuals’ privacy.
A) I am very concerned about hiding anything that could identify an individual. We tend to ditch stuff that is unusual to help make people unrecognisable. Also some borrowing will be for personal reasons – it won’t be relevant to your course and you wouldn’t want it exposed. We strip that data. I think 5 or 6 people need to have borrowed the same thing before a recommendation is generated. It’s been very very popular. We wanted to steal that Apple coverflow idea a bit – we haven’t gone that fancy but book covers and visual searching have been hugely popular.
Comment) Are we making students lazier?
A) no, I’m happy if they have a bit more pub time!
Q) Maintenance issues with this?
A) No, not too bad. Our catalogue is quite hard to break
Q) What do you put into the catalogue that generates data for impact?
A) we try to shoehorn all sorts of stuff into the catalogue. We try to have the catalogue as the monograph collection. We removed most of the electronic resources (except ebooks) as hard to maintain and sort through. We don’t have a cataloguer at Summon but we found out recently that we had been miscataloguing content for years there anyway – a result of a UKMARC -> MARC21 conversion error. We have played with VuFind but still have a lot to do to make something like that work. My colleague jokes that since we don’t have a cataloguer our catalogue works better [sharp intake of breath here – in a friendly way].
Q) Users often want article level access to materials and want to see if the library has that item.
A) What we realised early on is that a lot of people went direct to their journals or were looking for something quite specific. If you search Summon for a journal the title shows up as a dummy link quite high up in the results, that helps a bit. We did that through creating automatic dummy records from the publishers/platform.
Comment) “We want something with an orange cover” – half the room has had that query!
Q) You mentioned increased borrowing – any idea why?
A) We have made it easier to take books out – we implemented RFID in 2005/6. I hope exposing students to a wider array of suggestions and titles may be helping. We should try to get to the bottom of it and nail down the causes a bit. We have seen that the percentage of students who are active borrowers has been reducing. But active students borrow more. No idea of the cause of that combined trend though.
Q) Are you relying on external downloading from databases now that you don’t have a cataloguer? How do you check it?
A) For a number of years we’ve used shelf-ready stock and records from Dawsons. We had cataloguers checking those but actually the quality is good enough. Not perfect, but good enough. Librarians with cataloguing skills can correct errors but we don’t routinely check the records.
Q) One of the things we found when implementing AquaBrowser was that it exposes catalogue issues – we found things that were perhaps not as rich as expected. And that generated new queries from users expecting to find something. Are you having similar responses?
A) I think most of the things I’ve found that look wrong were from the UKMARC to MARC21 converted records. We took our eye off some of the bigger picture stuff when we did that, but some records were done well. Copy cataloguing has been done before as well, which can be helpful. But I can’t really answer the question about the supplier cataloguer records. There is an initiative where Open Library and another organisation have created something called, I think, biblios.net – an attempt to create a Wikipedia for cataloguing records, an attempt to ensure there is a basic record for everything.
Comment) Was that not generated out of difficulties with OCLC records etc?
A) I think so. I haven’t really kept up with that issue – is it ongoing?
Comment) Yes. OCLC tried to introduce draconian rules on reuse of data but the membership reacted. I think they are taking a less stringent approach.
A) that’s good, there is such a push for open data and it’s worrying when people like OCLC try to shut down data.
Using crowdsourcing to create the UK SoundMap / Richard Ranft, British Library
Richard is head of Sound and Moving Image collections at the British Library and he is also responsible for other visual materials like photographs. In 2001 we worked on “Listen to Nature” maps – bird and nature sounds with geodata so we wanted people to be able to browse these by area. (bl.uk/listentonature). This is all handcoded static web 1 stuff. Clicking the map launches an audio player where you can listen to the sounds.
A few years later we launched “Sounds Familiar” – a British dialects map (bl.uk/soundsfamiliar). We plotted these materials on a map and it is very useful for those researching dialects and speech, even though the recordings were not originally made for that purpose. A hugely popular part of the website. People could submit a CD of their own dialect to the library. Not very interactive in an electronic way with the contributors.
Then in 2007, with support from JISC, we put up the Archival Sound Recordings map (bl.uk/sounds) and these collections were made available in various ways, including a map. This time it was database driven but users couldn’t add to it. We do add to it regularly though. So we were interested in using crowdsourcing to add to these collections in an easy way. These collections are professionally recorded, high quality data etc. How could we get good enough research data from the crowd?
Crowdsourcing has been used to enrich existing metadata, create metadata, and correct OCR (e.g. the National Library of Australia or reCAPTCHA). The Oxford English Dictionary, in its very first edition, was also an exercise in crowdsourcing.
Increasingly people create their own sound maps by geotagging recordings, e.g. Radio Aporee. There are lots of examples of sound maps of the world now.
Noise Futures Network, funded by EPSRC – the group is architects and people interested in sound pollution. They wanted to find a cost effective way to record sounds. We wanted to explore the potential of a web mash-up for digital scholarship, and wanted to map the evolution of national soundscapes by aggregating everyday sounds from around the UK for 12 months – hard to do, as people don’t usually record the ordinary, they normally try to record the unusual. We wanted to test a low cost, effective way to do this.
This came around through the release of the Audioboo app – a very simple iPhone (and now also Android) application that launched in March 2009 – a ready made solution for our problem. It’s packed with integration to social networks and lets you record 5 minutes of sound. There are about 80k contributors, 280k recordings, 11m listens a day. Users are in the UK, US and Germany in particular. A map showing contributions shows loads in the US and UK, scatterings elsewhere [mainly English-language areas].
Each boo has a webpage. There is automatic and user generated metadata. The phone grabs the time, date and GPS position and that automatically plots to a map (you can switch that off for security). You can add tags, phrases and words to the post. You can upload an image with the sound. And as usual you can then embed or link to the boo or share it on various sites. And each contributor has an RSS feed and you can follow them. Each keyword also has an RSS feed – you can build all sorts of feeds and interactions. So here is a free, ready to use application. And it works on such tiny devices (previously a pro recording required a big backpack and a boom mic). There are more phones than people in the UK – entirely ubiquitous. They aren’t just phones, they are sound recording devices, cameras etc. Ripe for taking advantage of.
We used a “Pro” Audioboo account – this lets you add invisible tags, and that has an RSS feed which then feeds into our website. Piloted in Sheffield in July 2010. We didn’t want too many contributions, nor too few. We promoted the pilot to the press – people recorded disappearing noises as well as the everyday. Social networks were really heavily used to promote this work – people contributing will be in these spaces and we wanted their attention in spaces they already use. Went UK-wide in August 2010.
How it works: the user captures audio and adds metadata (auto and manual); the tag for the project is #uksm. This goes to an Amazon cloud service and comes out as MP3 plus metadata. Then we moderate the contribution, add the magic tag, and then it publishes to the map. We had long discussions on moderation. It adds a layer of remove in the process but it was important to ensure we didn’t publish inappropriate content.
We also grab the original source file and metadata and the magic-tagged version and archive those at the British Library. When the project ends this will be presented through the library’s catalogue – and we’ll probably continue the map service.
You can also take part via a web browser that works in the same way. But not as immediate or as easy as the phone app. Can upload better quality sound though.
Some of the biggest challenges were the legal/reputational risks of user generated content – deliberate or inadvertent capture of third party rights, so prominent music etc. is filtered out; defamatory remarks need to be removed, and all recordings therefore have to be heard from beginning to end; similarly we have to protect against invasions of privacy or compromised confidentiality.
These were mitigated by moderation, take down policies, clear selection criteria, and the fact that people are signing up to those terms when they contribute.
Technical challenges – lowering the bar? We wouldn’t have dreamt of doing this a few years ago. The iPhone creates a 22kHz, 16-bit, mono FLAC file – quite good quality. Other phones vary wildly. The omni mic on the phone does pick up wind noise – we have a process for rejecting, or suggesting a better quality recording, where wind noise is too great.
We also tested the iPhone white noise spectrum – it doesn’t cut out the noise that pro kit would, and there is some aliasing at higher frequencies. Android is very uneven. We know about this though, so we are prepared for it.
Many thought this was a wonderful project to get involved in (though not everyone did). Winner of innovation in public sector use of social media award as well.
We got more recordings when there was a press push, but some hardcore contributors are hard at it still. About 1600 recordings were added from July 2010 to Feb 2011, from about 260 contributors (an average of 4 per contributor). 82% were made with mobiles, the rest with audio recorders. All sorts of noises – voices, actions, machinery, animals etc.
Reasons for rejection – mostly due to copyright of music, broadcast or performance. We don’t republish anything – even street performance or buskers – on our website. Poor quality is also a high rejection criterion; no geodata is a rejection criterion – though we’ll ask for it, not all provide it; then obscenities, time wasters, advertising etc. (though a great footie match recording fitted into the obscenities category). And some were recorded outside the UK.
So looking at the UK SoundMap you can search for a keyword/text, e.g. “rain”.
Locations of recordings – many on the street, some in shops, etc.
So here is an example – the time and tide bell at Bosta (Outer Hebrides, Scotland) – this is part of collective memory there but won’t be there for ever. [sounds good and bell ish]
And another example – the noise of buying bananas – and the automatic till! Now you may ask what is the point of collecting that material – many of our researchers would love to hear the sounds of Victorian streets; we can be sure these noises will change so it’s important to collect them, and they may be of interest in the future. Our researchers note the ubiquity of female voices as the voice of authority in supermarkets, train stations etc. – 50 years ago it would have been a male voice.
And another example – Castle Market pensioners’ banter – people chatting away. Sounds of the environment, dialects etc.
The Sound map project runs until June, we’ll add a timeline to the map to compare the months of the year then.
The project we are now involved in is capturing dialects through the Evolving Englishes map. We have lots of recordings of people’s lives that linguists find very useful. One of these is people reading the book Mr Tickle by Roger Hargreaves. Linguists say that book covers most phonetic parts of the English language, so it’s a great example for comparing dialects. It also encourages relaxed delivery. We wanted to make it easy, so we offered a really easy version too. So there are two options – you can read 6 words (controversy, garage, neither, scone, schedule, attitude) or read Mr Tickle. You also submit your age and where you spent your formative years. This contributes to existing data sets but is collected by the general public. It accompanies an exhibition at the British Library that shows how un-static the English language really is, running alongside it from November 2010 to April 2011. There are booths in the exhibition. We’ve had about 600 contributions so far. Spikes of activity follow social media chatter about the project – those spikes are a bit scary for moderation though. We launched with a press release, on the BBC website and the Today programme, but a blogger posted last weekend and 300 submissions came in!
Map Your Voice is a project for the whole world. The map shows upload location, not the dialect recorded. Caused user frustration but we don’t mind. Default zoom level shows lots of UK and US recordings.
So, an example – a lady in Houston, Texas – 1982, female – reading Mr Tickle. This recording was made on a mobile phone, out of context, contributing to a research project. It’s a valuable way to gather data though. Now an example from Madrid, Spain – 1987, male. Not high quality sound but very useful data.
Selection is still a manual process with moderation. This is the most time consuming part.
Low quality – the metadata is minimal and the audio quality is borderline acceptable, but it’s a form of mass observation.
Simple to implement – RSS feeds to Google map mashups. Super easy. For our map we’ve now got tables to make sure the map doesn’t crash but still a simple process.
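The RSS-to-map step really is that simple; here is a minimal sketch of pulling each item’s title and GeoRSS point out of a feed ready for plotting (the sample feed below is invented for illustration, not a real UK SoundMap feed):

```python
import xml.etree.ElementTree as ET

# Invented sample feed using the GeoRSS-Simple "lat lon" convention.
FEED = """<rss xmlns:georss="http://www.georss.org/georss"><channel>
<item><title>Time and tide bell, Bosta</title><georss:point>58.23 -6.89</georss:point></item>
<item><title>Supermarket till</title><georss:point>53.38 -1.47</georss:point></item>
</channel></rss>"""

GEORSS_POINT = "{http://www.georss.org/georss}point"

def map_points(feed_xml):
    """Return (title, lat, lon) tuples ready to plot as map markers."""
    points = []
    for item in ET.fromstring(feed_xml).iter("item"):
        lat, lon = item.findtext(GEORSS_POINT).split()
        points.append((item.findtext("title"), float(lat), float(lon)))
    return points
```

Each tuple can then be handed to a mapping API as a marker, which is essentially the mashup described in the talk.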
Cost effective way to rapidly collect large amounts of data
Augments existing research collections
Engages the wider public with the institution, democratises the curator’s role, and has surfaced interested, prolific contributors.
Q) can you do keyword searches on Audioboo?
A) Yes, that’s on there. The timestamp tells you the upload time but can be misleading if recorded earlier. The geolocation tag is also subject to the same issue – you might be out of range when you record. People could fake an accent of course for the dialect map; you can’t guarantee against that, but with lots of data the quality should even out a bit.
Q) Do you accept all of say the supermarket noises or how do you pick which one to keep?
A) Currently we accept them all but if they uploaded the same sound every day we might! We will do some publicity in the spring and we may mention some key sounds – which will likely encourage people to do likewise. Depends what the Noise Futures Network want too. But we didn’t initially want to prompt or encourage people to do specific things at first.
Q) Will these materials be made available for people to reuse in other ways later on – noises that are hard to recreate but may be useful for TV recordings etc.? Will you be able to make money for the project in the future this way?
Q) Can you implement a map of where people come from rather than upload location?
A) Yes, when we archive and complete the project we probably will do that as the default map interface. The thing is that people aren’t that specific about the area they grew up in when compared with geotagging.
And with that we take a break for Lunch – and a little unplanned fixing of the projector!
UX2: usability and contemporary user experience in digital libraries / Boon Low, Edinburgh University
Boon is introducing us to the project, which concludes soon. He and Lorraine Paterson are the members of the team, and the project is about usability and contemporary user experience in digital libraries. EDINA [Boon is showing a screenshot of our website] is a good example of a digital library: there is a user interface, and all sorts of other systems and catalogues behind the scenes.
A definition here: “A resource representing the intellectual substance and services of a traditional library in digital form” – there are human contexts and broader roles; it is not just a place with content. Can digital libraries be a place for solving complex problems, a place for social interactions, a “sharium”, an active workspace? Facebook, Second Life, etc. all come into definitions of digital libraries. There are many opportunities for libraries to improve the user experience.
Web 2.0 gives lots of possibility for user-centric and user generated content, for social networking and for creation of folksonomies; there are new technologies and UI interactions through AJAX and rich application UI; there are virtual 2D and 3D environments – there are web based applications that look like desktop tools, there are virtual worlds.
User experience is a very broad term. It is more than human-computer interaction, encompassing all aspects of the user’s interactions, and it’s about experience beyond what users actually articulate – how a user feels about using a system, including overall perceptions. Jakob Nielsen talked about user experience embracing the holistic aspects of interaction, beyond technology and function. It’s about finding an objective and functional experience.
Example: World Digital Library – a huge world map and also a slider for it through time so highlighted content changes through time. NOT a typical web 1 interface. You can browse images from here – very rich interactions.
What are the usability challenges? We want a new UI, so what do we need to present?
What are the usefulness challenges? Often new technology is wonderful but is it what is needed?
The UX2 Project attempts to answer some of these questions with respect to digital libraries through a range of user research and usability studies, and through UI prototype development.
Usefulness is about measuring/evaluating user experience (sum of interactions): usability, usefulness, appeal (Usability Week 2009); Usable, Useful and Desirable (Jon Kolko, 2010) – very subjective terms in use here that talk about emotion of the user.
We now have another view onto the data in the world map – a series of shelves of areas of the world. This site provides several views in but we undertook a heuristic study and that was interesting to understand how people navigate and understand information – this large grid of images here was found in the study to be confusing and rather overloaded.
Comment from the audience that actually using something this confusing would put her off the World Digital Library website. Boon responds that there are alternative searches and you’d probably follow different views in rather than selecting this.
Another example – this time of faceted search – is Europeana. Faceted search is common on shopping sites, for instance. On Europeana, a search for cultural materials offers facets including the list of countries from which materials originate. We have looked at the Othello digital library and an open source system that provides these types of facets. Facets work well for users, and toggling buttons gives the user good control, but it is not clear to the user that you can select multiple facets – and in the breadcrumb trail on this site it’s hard to understand which facets are in play. Lots of question marks for the user. The other issue is that the facets are all closed when you search – you have to click the heading to access the facets, then click each facet to see the options. These are extra steps that really affect usability.
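[My aside: the facet mechanics Boon is describing – per-field counts, plus multi-select filtering – reduce to a few lines. A minimal sketch; the records and field names here are invented for illustration and this is not Europeana's implementation:]

```python
from collections import Counter

# Toy records standing in for a digital library index (invented data).
RECORDS = [
    {"title": "Map of Leith", "country": "UK", "type": "image"},
    {"title": "Don Quixote", "country": "Spain", "type": "text"},
    {"title": "Edinburgh Castle photo", "country": "UK", "type": "image"},
]

def facet_counts(records, field):
    """Count how many records carry each value of a facet field."""
    return Counter(r[field] for r in records)

def apply_facets(records, selections):
    """Keep only records matching every selected facet (multi-select AND)."""
    return [r for r in records
            if all(r[field] == value for field, value in selections.items())]

print(facet_counts(RECORDS, "country"))   # e.g. UK: 2, Spain: 1
uk_images = apply_facets(RECORDS, {"country": "UK", "type": "image"})
print([r["title"] for r in uk_images])
```

[The usability questions above are all about how this logic is surfaced – whether the counts are visible without clicking, and whether the breadcrumb makes the current `selections` obvious – rather than about the filtering itself.]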
Looking back at the World Digital Library, the facets are a bit clearer – shown without clicking. But there are usability issues here too. The time periods are shown not in chronological order but in order of matches – confusing to navigate, especially if you expect the user to do so quickly.
Looking at Aquabrowser at the University of Edinburgh, the publication date facet is more logical, as is the breadcrumb trail. A very popular layout is to have the facets on the right (as here), which clears space on the left-hand side for a tag cloud feature that is animated, coloured etc. The idea is that you get suggestions of words to combine, spelling variations and so on. It’s a very contentious feature for users: there has been lots of experimentation with these tag clouds, but the advice from others is not to use these sorts of developments on mainstream websites. No right or wrong answer here, but highly contentious.
So now a video of a user using Aquabrowser. During the test we asked the user to look for a book by Jakob Nielsen on web usability. She starts with the search box and lots of results come back – she spells things wrongly, and although the tag cloud suggests the author and alternative spellings, she doesn’t follow those hints or links. She never finds the book! Hopefully that video gives you an idea of how the user works. The tag cloud is really prominent on the left-hand side to try to take advantage of the fact that users read the screen in an “F” formation – but this user doesn’t look at it at all.
Comment – there are so many incorrect results; in Google the user would have found it.
Indeed, that is a weakness of Aquabrowser – it will suggest a corrected spelling, but only if there are no results at all, so these searches didn’t trigger it.
So for user research and testing Boon suggests surveys, interviews and focus groups – and make sure you evaluate prototypes, whether low-fi (wireframe) or working ones. The wireframe can be the best way to evaluate an idea: users can be distracted by colour and look-and-feel, so focusing just on function can be very helpful. Usability testing can be formative and can be more agile – “guerrilla testing”! Field studies can be very valuable – you can undertake contextual inquiry, related to ethnographic methods, and see what actually happens when your system is used in real user settings. Broader understandings of behaviours, usage patterns etc. are also useful.
We have conducted user testing in the University of Edinburgh main library. There are library machines in the foyer which you use quickly, standing at the terminals. We observed 17 AquaBrowser sessions in this setting. Most users look for books on a specific subject, some look for books on a reading list, and some were only looking up the shelfmark – Boon asked users why they didn’t just look at the shelf, since all the books would be there on the shelf too!
The main activities are keyword search, reviewing results (metadata, TOC), and making notes on paper – but why don’t libraries provide an online version?
Comment) Paper used so you can take your shelfmark to the shelf.
Yes, so online notes are not as accessible for that [my comment: unless zapped to a mobile phone!]
In the AquaBrowser study about 7 people had straightforward searches, looking for single known items. 4 people used term tactics – repeating modified search terms after reviewing results, or following suggested terms (up to 4 times). Some used information structure tactics – books found via the shelfmark link of a known book. Some roll through Boolean operators. Many look at the references of items, or search on Google and then feed the information into the catalogue.
But… no tag cloud usage. 3 users collapsed the tag cloud UI – 2 of them confused the button with something else. None reopened it; they continued their search tasks without it.
Users are aware of the tag cloud even though they don’t use it – they think of it as related words.
There is usage of facets, though some users use them in unexpected ways. If you had relevant facets they could be useful, but many are too generic, so users just click one facet for their subject area.
Another usage issue was terminal hopping. A majority of users tried links near the top of the UI to begin a search. All were greeted with a system error message (repeatedly), and they hopped onto other kiosks instead of fixing the problem. When asked why they clicked on “Library Online” or “Classic Catalogue”, they said they wanted to clear the current search and get a fresh session! That isn’t what those links do, and this is an issue (not only because users can’t have a clean starting page, but also because they cannot wipe their session post-search). Because the library has both Voyager and AquaBrowser competing, it is hard for users to understand.
Comment) Users don’t always look for items in a straight line. The idea of AquaBrowser and similar tools is to allow serendipitous discovery.
In this study people going to terminals are not using that sort of discovery path but these machines are about quickly checking for items etc. But there have been times we’ve seen some browsing by users via facets but most behaviour here is quite straightforward.
Some quick findings
– Usefulness evidence for faceted search/navigation
– tag cloud not useful at library terminals; usability issues – the collapse button was mistaken for a back button
– there ought to be a start over button in AquaBrowser
– all links must work!
If you are interested in this work on facets do find Boon at the end or look at the UX 2 website [link coming soon!]
Tales of one city / Graham Mainds, Edinburgh City Libraries
Graham is going to be talking about Edinburgh City Libraries’ work with web 2.0. We have used the Tales of One City badging since 2009 to gather blogs, Facebook, Twitter etc.
Why do they do this? Well, as Ken Chad says, the problem is in not engaging – citizens expect their councils to use the tools that they use, and otherwise you risk being sidelined. You need to find the conversations and join in. People will talk about Edinburgh City Libraries – or about topics we are interested in – regardless of whether we are there or not. We want to join in. The beauty of social media is that we can go out to people, not just expect them to come to us.
The aim of this project is engaging people and encouraging activity. We want more members, more issues, more visits to libraries, more use of all of our services. Keeping that in mind makes it easier when you’re writing content – you need to think about why you are doing it. It’s not about the number of followers or likers. We want people who are service users to find us in these places.
So how do we find our users?
Twitter has an advanced search. You can search for words, in a location, etc. and then feed that into a feed reader – we are using Google Reader for this. You can do that for all sorts of events; you can set up all sorts of feeds for this stuff and you just need to check them regularly. These are things we’d otherwise miss.
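[My aside: the monitoring workflow Graham describes boils down to keyword matching over incoming feed entries. A minimal sketch – the keywords and entries here are invented for illustration; at the time, Twitter search results could be consumed as a feed in a reader like Google Reader:]

```python
# Hypothetical watch-list for a library service's social media monitoring.
KEYWORDS = ["edinburgh city libraries", "central library", "#edlibraries"]

def mentions_service(entry_text, keywords=KEYWORDS):
    """True if a feed entry looks like a conversation worth joining."""
    text = entry_text.lower()
    return any(k in text for k in keywords)

# Invented example entries, standing in for items pulled from a search feed.
entries = [
    "Anyone know the opening hours of Central Library?",
    "Lovely weather in Leith today",
]
for e in entries:
    if mentions_service(e):
        print("worth a reply:", e)
```

[In practice the reader does the fetching and the human does the filtering, but the principle is the same: a standing search, checked regularly, surfaces conversations you would otherwise miss.]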
Another example: if you set up social media channels, your users will expect you to respond when they ask about your services. Sometimes they send queries to the wrong accounts. Usually people are thankful for responses. Some things you just have to ignore, though – random shouts or rude comments.
So we respond to queries, reply with URLs etc. We checked back on someone we helped a week later and they were terribly excited to announce their library card had arrived!
Another person noted a book the library should stock; they were based in Edinburgh and were delighted that we could reply and say it was now on order.
I’ve been showing Twitter but actually this happens on blogs too. We are blogged about on sites like the Guardian Edinburgh page – it is worth cultivating relationships with bloggers, journalists etc. Often they have a bigger readership and if they champion us it’s really useful.
Here’s another thing that can work. We had a vintage fashion event; a blogger wrote up the event with our image used in her post. That can drive traffic in too. For this event we also joined a Flickr group (Fabulous Vintage Fashion) and our Flickr stats went wild!
We have another collection called Mystery Photographs – we have thousands of pictures and we don’t know where they were taken. We put them on our Flickr page to ask for help. We also know there is a group called Guess Where Edinburgh – they love the pictures we’ve posted. People put in Street View links and give details, and that’s fantastic. And just the other week the mystery photos appeared on STV’s Edinburgh West site as well.
It’s all about finding conversations and people, and also about turning conversations round to what you want to talk about. Last year a film called The Illusionist came out; it was set here in the 1950s, so we put some of the amazing 1950s pictures we have up on Flickr to connect to that. We got quite good value out of it, since we put it out when the film was shown at EIFF, then on its release, then when it was up for the Oscar.
How do we measure our audience?
Most social media platforms have their own stats facilities. Flickr gives you stats for images and also rates images for “interestingness”. YouTube has a Google Analytics-like set of stats. Facebook gives you demographic information on the people that like your page, and we have a really wide reach: most of our Facebook fans are between 25 and 45 years old. For us, Facebook is not for teenagers, and it’s good to know that.
We use WordPress, which is great – the statistics tell you who is referring to your blog. We found that a blogger (the Metablog) was writing about our blog; she reviewed it and made suggestions and recommendations, and we’ve implemented some of these and improved our blog.
Another tool we use is polls. We did a simple one on who is/is not a member of the library. That’s useful to know about our readers – usage of library cards will be relevant, for instance.
The first issue is that getting staff access permissions can be tricky – local authorities are understandably nervous about social media. Sometimes we raise expectations we are unable to meet. For instance, on Facebook we had a consultation period on proposed Management Rules. Someone commented, that comment was taken back to committees etc., and we could only respond after two weeks. But the expectation on social media is a fast response. The person here was fine with it, but people can feel ignored. Similarly, someone who complains through Twitter may expect an instant response in 140 characters.
What counts as a complaint? Does a comment that “most libraries in Edinburgh seem to double as tramp shelters” count as one? They didn’t come to us… but it’s out there. How do we deal with these?
– training (or making it up as you go along?) – there is no right or wrong way; it’s what works best for you. We tried lots of stuff to get to this point and learned a lot along the way
– keeping up and learning from best practice – social media is great for finding out what other organisations and library services are doing and what you can learn from them.
– who’s involved? how much do they need to know? – do you have one page for a whole service or one per site for instance?
– is it worth it? I think it is.
Q) The public can access this but can staff?
A) There is special access set up for some staff. All should be able to access the blog but not the other sites. We use a kind of write-once-read-many system so information feeds into other spaces.
Q) Do some libraries have their own Facebook pages?
A) Yes, we are encouraging enthusiastic staff but not forcing anyone. We are trialling a Facebook page for a single library site, and if that goes well we’ll roll it out.
Q) Have you seen changes in usage data as a result of this work?
A) We think so and that’s certainly the aim of this work.
Commenter 1) Following on from what Graham was saying about using social media to publicise your existence: we have an internal Scottish Government equivalent of Facebook. Previously I felt that Facebook was like shouting out a window down the street, but a few times members of staff have posted to that internal network with requests for information, and we’ve used that to make staff aware of the internal library service and that it is available to them.
Graeme Forbes, NLS) There’s an organisational Twitter-like tool called Yammer – not used by everyone, but certainly interesting; it lets staff share information on other projects and things. Even in a fairly small organisation people work on projects that overlap, and knowing what’s going on in the organisation is really useful. We have publications which give directorate information, but you want ground-level information too. You do have to buy in and use it for it to be useful, though. When we started Yammer off we had a lot of chat about lunch and the like, but we’ve since found a fairly professional level that can be really interesting.
Commenter 1 again) Yes, it’s Yammer that we have too. It’s been marketed as a professional communication tool and we’ve had little noise on it. It’s been a professional space and can spark really interesting knowledge of projects taking place.
Commenter 2) I came to Twitter late but I found it so useful – it’s like a daily magazine of interesting professional stuff brought to your attention. I just think it’s fantastic.
Commenter 3) It’s wrong to think of Facebook or Twitter or blogs as advertising. You want a real sense of conversation or word of mouth; things that seem like advertising are a real turn-off.
Graeme Forbes, NLS) There was a mention of an event on the blog and usage peaking – Richard, you mentioned that. The impact of a blogger can be huge. It goes to show that it’s not just the formal channels, which we spend a lot of money on, but also informal channels that have a huge effect.
Richard R, BL) At the weekend someone with a million followers tweeted about a page on our website that’s been there for years – that page was suddenly insanely busy as a result!
Graeme Forbes, NLS) So we need A list celebrities chatting
Gordon Dunsire, SDLC) Marshall McLuhan predicted much of this – gossip and buzz etc. We use global communication channels in very much a human mode. I urge people to go back and read him. There’s nothing wrong with gossip and communication (aside from the known downsides). None of it is new; the context is very new indeed. This is one place where libraries are equal with their users.
Graeme Forbes, NLS) I look at who I follow. I followed people at first, but increasingly I follow organisations for news and updates, professional bodies etc. The balance has changed – I think a lot of us follow more organisations than people.
Comment 4) Is this about libraries finding us? People give information on Facebook – what they want, what they are interested in. We’ve been saying for years that library catalogues need to be like Google, but maybe they need to be like Facebook – it knows about users and can deliver them useful information.
Graeme Forbes, NLS) But it’s not homogenous. Facebook for teenagers doesn’t work for us – we can put stuff there but we miss a whole sector of society that maybe wants to use our services.
Comment 5) If libraries and organisations are on Facebook, teenagers maybe don’t want to be.
Graeme Forbes, NLS) Is that because that sector isn’t interested in those services, or because Facebook doesn’t appeal to them?
[Cue discussions on Facebook etc. And what teenagers use – we think mobile and other spaces are where it’s at. Facebook was never for teenagers – it started for university students].
And with that analysis of teen habits we close.