Jul 10 2014

Today I am at the European Conference on Social Media (#ECSM2014) at the University of Brighton. I will be presenting my paper, “Learning from others’ mistakes: how social media etiquette distorts informal learning online” this afternoon but until then I will be blogging the talks I attend. As usual this is a live blog so please let me know if you spot any errors or omissions and I’ll be happy to fix them.

After a welcome to the event from Sue Nugus of ACPI, we are now hearing from Bruce Brown, Pro Vice Chancellor of Research at University of Brighton, welcoming us and stating that everything is up for grabs right now, a really important historic moment, making this a really important conference which we are delighted to be hosting! Over 35 countries are represented here today, welcome! We are a post-92 university here but we have had a lot of success in research, and have a really exciting research agenda, particularly around arts and humanities. If I mention “impact” to UK colleagues here I can see a bit of a dark cloud looming… I chair the main panel for Arts and Humanities nationally, in which I have a group on Arts and Society and Commerce who met in Edinburgh yesterday, and I think you will be pleasantly surprised by just how much impact there is in these fields. So, I wish you well for a great conference.

Asher Rospigliosi, University of Brighton

Myself and Sue Greener, who’ll join me in a moment, have been working together for the last 12 years or so. Although we are located in a business school we have focused on e-learning, on the impact of the internet on everyday life. We were therefore very keen to look beyond the business world, to the wider range of how social media is impacting on life. We deliberately start with Farida Vis, who we are delighted to have here speaking about big data, for that reason. We also wanted to recognise the impact on business and changing business practices which is why we are delighted that our second keynote comes from David Gurteen.

Dr Sue Greener, University of Brighton

And the other side of what we are looking at today is learning, because we learn through social media all the time. So learn, discuss… and read about what happened at yesterday’s Social Media Showcase.

Farida Vis – The Evolution of Research on Social Media

As has already been said this is a really important moment, and something of a crunch point, with academia, industry and government really coming together around social media. Social media research is becoming mainstream and visible across research and across sectors in different ways.

So, a few provocations…

Increasingly social media is becoming synonymous with big data. The tracks and traces we leave online mean that social media research is increasingly needing to engage with or at least acknowledge this big data. And real time analytics are an important part of this. What do they mean for academia and the time frames we are used to? How quickly can we produce findings, and findings which are robust… there are ways in which our work is being broken up and being challenged.

I was pleased to see the word cloud of keywords for papers and note lots of mentions of Facebook and LinkedIn and not so much Twitter. That would be good to see… in the literature we are seeing a real focus on particular platforms… Twitter seems to be a dominant platform there but social media is not Twitter, we have to be careful how we extrapolate from one platform to others… I think this is partly to do with attention and real time aspects. Other platforms that get researched a lot less have a very different dynamic. A site like Pinterest isn’t as concerned with real time, it works quite differently. We have to be careful how we build this field collectively.

So, where are the research questions, when we talk about social media? And big data? Often we are data driven – working from what is available to us rather than from a series of critical research questions that lead to data, to tools. And social media research, at least in the early days, was a lot about how to get a handle on this data, how to deal with it… but we are now moving to a phase where we need to think about the theory. We can no longer get away with being theory-light.

And some other issues that come up time and time again, not least in relation to the Facebook contagion study, are issues around research ethics – do we need new ethical frameworks, do we need more agile ethics, how do we apply traditional ethics in a new research space? There are questions of methods. There are issues of sampling. And something I think we still haven’t really grappled with is data sharing… when you deal with social media data it is data you cannot share with other researchers, and that has real implications… For instance Twitter are really homing in on data use. Since they went public, Twitter have become very much concerned with selling data, which is their business plan. That means for us as researchers there are real challenges with sharing proprietary data sets. And real issues with regards to open data and transparency, and with the funding councils. Making applications for research funding you are expected to talk about data sharing, and that means proprietary data is a real problem.

It’s brilliant to see so much research on social media… but less good to see a lack of funding for social media research. Both the AHRC and ESRC talked about funding a research centre last year, but for various reasons that funding never made it to a call… the funding calls could do a lot more to fund specific social media research. The ESRC are moving into their third phase of Big Data funding, but none specifically for social media, despite it being a major big data topic.

So, what is the future for this research field? In some ways we have this tension: there is huge enthusiasm and interest, a lot of excitement and innovation happening, but that has to be underpinned by a funding and training framework.

I just want to talk for a while about where I have come from in this research field, and where I see this going… and how some of the future of social media may be going. I got involved in social media fairly early on. I did a PhD on the Israel-Palestine conflict, focusing on the representation of victims. And in 2005, when Hurricane Katrina happened, I was struck by the media representation of victims there, particularly two press pieces representing black people as thieves and white people as victims. There was a real backlash from the blogosphere and I found that community, that voice online, really fascinating and exciting, providing a voice for those not being represented.

Similarly in 2008 the Fitna YouTube controversy sparked a response from a community that was not getting its voice heard elsewhere. Again this was very interesting, and I was moving through the platforms. And in 2011 the London Riots were being blamed on social media, particularly Twitter, and I became involved in work investigating those claims, the Reading the Riots project.

So, my research was becoming about data, big data sets, and that meant requiring new tools, new approaches, collaborations with others. When I looked at Flickr in 2005 the scale was several hundred images, doable by hand, small scale. By Fitna there were 1413 videos and 700 individuals. You cannot collect all of those. And in social media there is this beguiling idea that because you can see the data, it will be easy to capture that data. So for YouTube I had to work with computer scientists to get at that data.

And by 2011 we were asked by the Guardian to look at the riots tweets – a data set of 2.6 million tweets – and that meant a whole lot of computer science. So over that period we were really moving into needing far more fire power, more computing power, and computer science input.

So, coming back to Reading the Riots… the Guardian were given this data set. Twitter, as a brand, were uncomfortable being linked to the riots, particularly before the Olympics. They were happy to be linked to the Arab Spring, but not those riots. But the Guardian didn’t know what to do with that data, and this work was in the context of a parliamentary enquiry… We formed a multidisciplinary team, led by Rob Procter, and that was work with real and immediate relevance.

Something very personal to add here… I feel that I am something of a “border runner”, working within academia, with government, with industry. In my own time I sit on a World Economic Forum Council on Social Media. What is interesting in this moment is trying to have these discussions across these sectors, bringing perspectives from academia to industry… and I think that border running is really important.

So, back to big data. Gartner (in Sicular, 2013) defines big data as being about volume, velocity and variety. And there is a huge industry built around “social data” and “listening platforms”, but many of these are black box systems, not suitable for academic work where you want to understand what takes place beyond the screen. So there is a great set of provocations and challenges to big data from boyd and Crawford: about the mythology that big data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with an aura of truth and authenticity based on scale. They highlight the importance of critiquing claims of objectivity of data.

There are issues of the overwhelming focus on quantitative methods. And can data answer questions it was not designed to answer? How can we be sure we are asking the right research questions? We shouldn’t put data before research questions. And there are inherent biases in large, linked, error-prone datasets, a really complex area. And there is a focus on text and numbers that can be mined algorithmically. Natural Language Processing works on stuff that can be mined, but what happens with the data we can’t easily mine? And I will talk a little on data fundamentalism…

Data fundamentalism is about the notion that correlation always indicates causation, that massive data sets and predictive analytics always reflect “objective truth”. The idea of, and belief in, the existence of objective facts. And in that we can fail to situate ourselves in relation to that big data. And where are the critical big data studies? This is an important call to arms I think.

So, how do we ground online data? It’s important to foreground data and what we think the data can tell us. There is a tension in where people want to ground their data. When we talk about social media we need to think about whether we want to ground the participants as citizens, in their offline context as people. Governments do want to understand individuals as people. So, do we ground social media users in the real world, as citizens? Or do we want to ground our online users in that online world, social media users as social media users – so a Facebook user in the context of other Facebook users? This idea of grounding in the online world was pioneered by Richard Rogers in his research methodologies. So, for instance, in the riots one of the big key Twitter users was “Lord Voldemort” and, whilst there is a real person behind that account, it really points to those tensions of how we understand the grounding, whether offline or online.

Important considerations:

1. Asking the right question – research should be question driven rather than data driven. And honestly there is something troubling about the Riots work – it started with the data, and it was donated by a company, which goes against many of my provocations here. But we have to be open to using the data that is made available – Twitter is fairly transparent in its data ecosystem and what is available.

2. Accept poor data quality and users gaming metrics – once online metrics have value, users will try to game them. Approach this data with huge suspicion. Try to ensure that you critically investigate that data, to ensure what you think you have is what you actually have.

3. Limitations of tools – they are often built in disconnected ways… they may be built by people with expertise other than your own research perspective… dealing much more with user requirements in tool building is central, but as researchers we also have to be much better at describing those limitations.

4. Transparency – researchers should be upfront about limitations of research and research design. Can the data answer the questions? Increasingly we struggle to know what the limitations actually are – factors include what companies give us access to, what limitations we have as researchers, as well as others we don’t envisage, even if trying to be transparent.

I wanted to talk about a paper I wrote on Big Data and APIs (Vis 2013), and those aspects we can be unaware of… I am very keen that we have to be clear about how we create this data… it isn’t ready and waiting for us. We co-create that data. We need to be much more aware of APIs, of the tools that we use. So for instance Twitter lets you access three free APIs (Streaming, Search, REST); you have to understand from the outset which you need and what implications that has, and often you may want all three APIs. There are a number of API sampling problems. Now, if you have a lot of money to spend – as commercial companies will do – you can access the “firehose” – all of the tweets. But the Streaming API is a 1% random sample of the firehose… and it’s not totally random. I spoke to them and gave them a grilling on this. Twitter could do a whole lot better to explain how that 1% is being selected, what is and is not included, so that we understand what it is we are dealing with. From the Search or Streaming API, if you are not rate limited in a timeframe, you may actually be collecting all the data. So the implications will all depend on the type of data you are tracking. If you tracked all the tweets from this conference we are unlikely to generate 3 million tweets… collecting all the tweets through Search or Streaming means we might get 100% of the data or very near to it… but for a major event like the Arab Spring or the riots it’s a very different beast.
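To make that sampling arithmetic concrete, a minimal sketch in Python (the 2.6 million figure is from the talk; the small conference volume is a made-up illustration):

```python
def expected_sample(total_tweets, sample_rate=0.01):
    """Expected tweet count from a random sample, e.g. the roughly 1%
    sample of the firehose that Twitter's free Streaming API provides."""
    return total_tweets * sample_rate

# Hypothetical small event: a filtered Search/Streaming collection that
# never hits the rate limit can capture close to 100% of these anyway.
conference_tweets = 3_000

# A major event at the scale of the 2.6 million riots tweets is a very
# different beast: a 1% sample leaves most of the data unseen.
riots_tweets = 2_600_000

print(expected_sample(conference_tweets))
print(expected_sample(riots_tweets))
```

The point stands either way: for small, bounded events the free APIs may approach completeness, while for riot-scale events an undocumented 1% sample is all most researchers ever see.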

But it gets more complicated… this data is absolutely the backbone of monetising these platforms. We are seeing new business models around enriched metadata. We did, until recently, see three big players here: Datasift, GNIP and Topsy. But GNIP has been bought by Twitter, Topsy by Apple… We can see a tweet, for instance, but the metadata will tell us the context – how many followers the person has, what the connections are, etc. And that’s where the value is… so we have seen the emergence of a social data industry. We saw Social Data Week take place last year. Big Boulder, traditionally organised by GNIP but last month ostensibly organised by Facebook, is another big key conference here. So this is some of the wider context in which our research is taking place; we are at the mercy of this industry, and how data is made available.

So… is this new enriched metadata that companies sell/want to sell actually useful? For academia, industry and government we are all interested in location and influence – geolocation and how influential users are and how their networks look, where the key nodes and influencers are for sales but also for spreading policies or curbing negative spread.

So, the difference between social media and social data. Last year Martin Hawksey spotted that when you sent a query to the Twitter Search API you used to get a small amount of data back, but now it gives you about four times more data: much, much more context, to help you understand better what individuals are doing. But I get suspicious when I see this… is this stuff they could give us before? Where is it from? Is some of it made up?

New Profile Geo Enrichment – a GNIP product that came out last year… On Twitter you can click the geolocation pin to switch on an exact lat/long geolocation for all of your tweets. This is the gold standard of geolocation. But only 1% of Twitter users are comfortable giving away their location all the time… and this is a really skewed group of users. So 2-3% of tweets in the firehose have geolocation, and those tend to be early adopters who are comfortable with sharing that data and do not have privacy concerns. So this new GNIP tool uses your biography, and the location you can state there, to infer a proxy location. The crucial thing here is that many, many Twitter users do give a location in their biography… so a company like GNIP can claim you can hear from all Twitter users, that the data is representative… to find the people discussing your brand, and in which location… This tool also parses tweets that mention a location… Now, you don’t have to think too hard to see some issues there. This “enriched” metadata product mashes together gold standard geolocation data with all this other stuff.
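A toy sketch of the kind of bio-location parsing described here (the gazetteer and example strings are entirely my own illustration, not GNIP’s actual method) makes the fragility obvious:

```python
# Tiny hypothetical gazetteer; a real product would use a vastly larger
# one, but the failure modes below remain.
GAZETTEER = {
    "london": (51.507, -0.128),
    "brighton": (50.823, -0.138),
}

def parse_profile_location(free_text):
    """Resolve a free-text profile location to (lat, lon), or None.
    Profile fields are self-reported, jokey, stale, or ambiguous."""
    return GAZETTEER.get(free_text.strip().lower())

print(parse_profile_location("Brighton"))                    # resolves
print(parse_profile_location("somewhere over the rainbow"))  # None
```

Even when a string does resolve, it is a proxy for the person, not for where any individual tweet was sent – exactly the mashing-together of data of different qualities that the talk warns about.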

And there is another issue… people often delete tweets, and they often delete tweets with exact locations. In principle the Twitter API will send deletion notices to the tool, but you have responsibility to act on them. People batch delete locations. They also delete content.

Back to that Geo Enrichment of profiles… they are linking data and talk about “unlocking” demographic data and other information that is not otherwise possible with activity location. But how do we conduct the checks and balances we want and need to do to actually use that data in research?

We are obsessed with influence, ranking, lists… and we are also increasingly concerned with how influential we are as individuals on social media – maybe not everyone in this audience, but a lot of social media users. So you have companies like Klout who rank you on social media influence… but change your scores based on which tools you connect. And they would create dark profiles – harvesting data and creating profiles even if you are not interested. Mining your data and processing and profiling you whether or not you want it. And the results of who influences you, and who you influence, can be bizarre… people you’ve never seen are apparently your top influencers. Direct Messages appear in key moments…

And Klout is a gamified space… they reward users for giving data… more data = more influential?? And of course there is the tension between online or offline influence. Up until recently Justin Bieber had a perfect Klout score… is he really more influential than, say, Obama, offline?! And you can buy your Klout score… the site Fiverr for instance lets you buy a Facebook Girlfriend, or boost your Klout scores… this stuff is out there… these tools exist…

So in April 2013 Mitt Romney decided to buy 100,000 extra followers in one day… a huge spike in one day was suspicious and he was found out. There are as many as 20 million fake follower accounts out of the 200 million active users – that’s from last year – so 10% of the Twittersphere are fake followers. And that doesn’t count spoof accounts. If we think about offline data sets… these should make us incredibly nervous… but we forget to be critical about this stuff, and we should be.

One more word on Klout… GNIP is now partnered with Klout… we can now buy Twitter data with Klout scores… and those could really skew our research.

We really need to be better at describing the limitations of our data. We have to see APIs as data makers; once data is linked it is very hard to untangle how metadata is constructed and where problems might be – including in terms of deleted content, as people delete for many different reasons. And we need to think of ourselves as data makers as well. When creating a dataset it is important to describe how it was made, what the limitations are. You have to be suspicious of your data, to verify it, to describe that process. And how do we do that in a standard journal article – perhaps we have to have a more detailed account elsewhere of how our data was created.
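One lightweight way to keep that “more detailed account” is to ship a small provenance record alongside any collected dataset. The field names here are my own suggestion, not a standard schema:

```python
import json

# Hypothetical provenance record for a tweet collection; adapt the
# fields to whatever your own collection actually did.
provenance = {
    "query": "#ECSM2014",
    "api": "streaming",          # which of the free APIs was used
    "window": ["2014-07-09", "2014-07-11"],
    "rate_limited": False,       # did any collection window hit limits?
    "deletions_honoured": True,  # were deletion notices processed?
    "known_limitations": [
        "Streaming API sample composition is undocumented",
        "deleted tweets removed, so counts are a snapshot in time",
    ],
}

# Serialise next to the dataset so the account travels with the data.
print(json.dumps(provenance, indent=2))
```

A record like this does not solve the sharing problem for proprietary data, but it at least lets other researchers see how the dataset was made and where to be suspicious.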

Tools as data makers… I increasingly see research projects designed around tools that will get them the data. That massively narrows the scope of what we are looking at… if that’s what we do, what kind of research landscape are we building? I essentially see the same Twitter tool being built over and over again. We do have to focus on the questions. So we really need to understand this as a very dynamic field where humans and tools co-create data. And we have to avoid thinking about social media as just lots of data, and that it is only for people who work with data. Instead we have to have a good understanding of the platforms themselves. What kind of domain expertise do we need in this field? To do Twitter research you need to understand the platform; you also need to be a user of that platform.

So, what’s the future? Well, we need to address what gets left out – all the stuff we are not looking at right now. One thing that gets left out is images: there is very little research on images, but 750 million images are shared daily, and that is not reflected in research. Images grab our attention, and are key to engagement for companies. The iPhone is the world’s number one camera – the top cameras on Flickr are all iPhones. A camera used to be for special occasions; smartphones are always on us… we take selfies, everyday snaps, but are also witnesses to events. And smartphone penetration is really quite high – 65% in the US, similar in the UK – and going along with this is mobile web access, and that’s shifting what we could look at… And we see a rise of platforms focusing on visual content… Pinterest, Tumblr, Instagram, Vine, Snapchat. But academia is just getting a handle on Twitter… and we have to move on again. And there are so many issues of ethics. We have issues of ephemerality… how do we research Snapchat? Through interviews with users? Through using it directly ourselves? Snapchat is a really important new player – 400 million images shared every day… we should be researching these areas.

In response to how images are being used, Twitter has changed how we see images… now showing them inline. And they saw a huge boost in RTs for inline pictures – changing practice in the platform and in user behaviour, so important changes.

So, in the future, we need to think about pitfalls, limitations, and think about what are we not researching. I will be working on an image project in the next year. Images are not easy to mine. Maybe we are avoiding things it is hard to draw meaning out from. Images do, however, have huge interest in industry – and may move way ahead of academia, though we can learn a lot from certain developments. We need to switch our focus to understanding what all of this means… why are people doing this? How do we understand this social world?

Social Media for Informal Minority Language Learning: Welsh Learners’ Practices – Ann Jones, Institute of Educational Technology, Open University

This is an educational case study on minority language learning, specifically Welsh. This will be quite a straightforward talk on the challenges, the literature and this case study. We are quite a small number in this room but here is a map to locate Wales and to get a sense of Welsh speakers by local authority… So Cardiff is south. Aberystwyth, with a high number of Welsh speakers, is in mid Wales.

Welsh is a very old language, from about the 6th century. It was the main language until the 1900s. Now about 20% of the population speak Welsh (~560k). The distribution is very uneven. In Cardiff it’s 8% of people, in Aberystwyth it’s 42%, in Caernarfon it’s 88%. So two challenges: a small number of speakers, and uneven distribution. As a learner wanting to practise, that can be tricky.

We thought about how one might be able to overcome this a bit online. Last year Lamy and Zourou (2013), “Social networking for language education”, with its two particular foci of identity and community building, was really helpful, and included a focus on minority and heritage language learning. Zourou (2012) talks about three terms for language learning:

1. social media as a set of tools

2. social network sites

3. language learning communities – more than just tools to learn the language, sometimes including peer assessment.

And I’ve also drawn on Conole and Alevizou (2010) and their typology of social media for language learning. They talk about media sharing; instant messaging, conversation and chat; social networking; and blogging and microblogging. In the study, I found microblogging was quite important, even for beginners. But for me I needed to add another aspect…

In terms of studies of Welsh there is quite a lot on the status of Welsh on social media, not so much on learning – so, what happens if you are a bilingual speaker, do you speak Welsh or do you speak English? Honeycutt and Cunliffe (2010) looked at Facebook and found quite a lot of use… groups that ranged from tiny numbers to those in the thousands… later studies haven’t been quite so positive though. And on the social identity of Welsh learners, Prosser (1986) looked at how Welsh isn’t usually learned in order to communicate; it is about identity and your relationship to Welsh identities.

So, an informal Welsh learning case study. This was a small study with quite lengthy interviews with 12 learners. This gives you an indication of what they were doing – all made some use of social media. Even beginners used Twitter – if only to follow a tweet of the day in Welsh. Some also used email if they felt confident and it was about exchange with another learner.

There is one community that has grown up, with 30,000 participants, called Say Something in Welsh. It is built around Welsh learning podcasts and emphasises communication skills. It was used by and referred to by many of my participants. They have two courses, a forum, a weekly newsletter, and an Online Eisteddfod – encouraging learners to take pictures, write plays, etc. They also run physical bootcamps for intensive speaking practice, and there are local meetings. And it is run by passionate people, so the forum is very actively monitored.

So I want to give you some examples of social media use. Media sharing is an obvious one, and tends to come top in informal language learning. They were watching and sharing TV, often via apps, and they watched kids’ programmes and programmes for learners. But as their learning progressed this changed. So a participant talked about listening to a documentary and understanding a little bit for the first time. Many listened to Radio Cymru – at work it didn’t distract them, but they felt it was training their ears. And there were materials on YouTube, music downloads, BBC resources for learners, etc.

Instant messaging and chat were used even by those who were not that far progressed. Emailing was part of this. Skype was particularly useful – both audio/video and text chat. And texting was also part of the mix. And the forum included hugely detailed and caring discussions of fine points of language use, such as the correct use of “i”.

The social media spaces here were basically only Facebook. A participant here talks about having a Welsh Facebook page – and using the spell checker as part of that process, quite a sophisticated learning use. And learners talked about using Facebook to bring learners together… for instance Welsh learners in England who used Say Something in Welsh to set up and support meet-up groups – see the Welsh Learners in England Facebook page for instance. An online space and advertising that complements in-person activities and meetings.

I mentioned that there was a really active forum on SSIW. There is real encouragement, sharing of experiences, etc. One of my interviewees talked about going to Wales for a week, looking for resources, and downloading resources onto his smartphone. And how he was using that. And he talks about going into a shop and being understood. So access to that online course and community has been key to his understanding of the language.

Conclusions. This was a small scale study. All participants use social media but their use varies, and it changes as you move from beginner to more experienced. Most commonly they shared media, used it to interact, and used SNS – usually Facebook. SNS were successful in connecting learners. Experienced learners were particularly creative in supporting other learners – perhaps because of the identity of Welsh learners. SSIW has been particularly successful.


Q: Have you looked at how Welsh learners adopt new English words… when we have new words related to technology, and whether there are common words?

A: People do ask each other. As with other minority languages there is a language board, and when new words emerge they discuss what they should call things… some are quite amusing. “Microdon” was the word for microwave, but it is popularly known as “popty ping” because of the noise it makes.

Q: Was there a spread beyond the group, that people were drawn in?

A: I didn’t look at that – people at Glamorgan did – but I’m not sure that it did. They say online communities often mirror offline groups. For the Welsh-speaking community there is some mirroring. It’s different for learners though.

Q: Social media communities around politics are often the most active – do you think that the political aspect of learning and speaking Welsh is important? Would the community work similarly for other minority languages without that political aspect, or is that political baggage important?

A: There was a lovely quote I had about technology as a boon but also a real issue – because the community is so big online. The Welsh government funded the rugby union to provide a bilingual website and they hadn’t done it… they are located down south. There has to be a real push. And meanwhile remote communities still don’t have broadband – people driving somewhere with a dongle to do their homework… so there is definitely a significant political element there.

Social media initial public offerings (IPOs): Failure and Success Factors – Piotr Wisniewski, Warsaw School of Economics, Poland

I will be talking about social media commercialisation, the learning curve and some of the investment challenges. The Global Social Media Index. And some takeaways from key IPOs.

Social media organisations have increasingly tapped public stock markets yet, despite their appeal and improving economics, the success of several high profile IPOs has been rather lacklustre. Social media have been very popular with the younger generation but this is changing. We see them setting trends in the economy. We see projected demand rising as the role of social networking grows. Their primary focus fuels expected growth – the young will become more affluent over time. They are seen as democratic resources because of their ease of access.

From an investment point of view social media can be seen as facilitators of existing offline operations. But you can also look at social media as an asset for investment per se, and that’s my concern.

We have seen growing awareness of social media by industry, and adoption of them. There are critical challenges though: business metrics and KPIs are difficult for social media. Social media stocks represent very different business models, so it is hard to benchmark them against each other, and that makes it hard to put a safe valuation on them. Further, many business models have been hard to monetise: they have been popular with users but it is hard to monetise that popularity. Most social media companies are “hit driven”, so they have to innovate to remain relevant and interesting to stockholders.

Global Social Media Index: the companies primarily looked at to see the trends for investment stories tend to be those with public status and global outreach – not only a local presence but a global dimension, which usually, because of languages, means sites in global languages.

In terms of the SOCL Key Components we see a real focus on US and Chinese companies, Facebook and LinkedIn significant here.

Some social media stocks got off to an inauspicious start; they are seen as highly volatile stocks. We see most indices outpacing social media stocks initially, at their flotation, but then they recover losses over time. We see quite a bit of volatility but also a more favourable Sharpe ratio… so they have gained ground in terms of risk-adjusted returns over time. Looking at SOCL financials we see LinkedIn as one of the most highly valued stocks, partly down to the variance in business models.
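For readers unfamiliar with it, the Sharpe ratio is simply excess return per unit of volatility. A minimal sketch with made-up monthly returns (not real SOCL data):

```python
from statistics import mean, stdev

def sharpe_ratio(returns, risk_free_rate=0.0):
    """(mean excess return) / (standard deviation of excess returns)."""
    excess = [r - risk_free_rate for r in returns]
    return mean(excess) / stdev(excess)

# Hypothetical monthly returns for a volatile social media stock:
volatile = [0.10, -0.08, 0.12, -0.05, 0.09]
# Hypothetical steadier index returns with the same mean:
steady = [0.04, 0.03, 0.04, 0.03, 0.04]

print(sharpe_ratio(volatile))  # lower: same mean return, far more risk
print(sharpe_ratio(steady))    # higher: better risk-adjusted return
```

So “gaining ground in Sharpe ratio terms” means the stocks’ returns have improved relative to their volatility, not that the volatility itself has gone away.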

As we look at the IPOs, many flotations were made when no clear path to commercialisation and monetisation could be seen by investors. The timing of some IPOs was not good. And there were issues of IPO management – aggressive pricing made it difficult to list them successfully on the stock market.

I would say the conclusions that can be drawn from the information on IPOs are that whoever brings them to market has to pay attention to timing; timing is critical. Pre-IPO integration is important, to make the route to commercialisation clearer. And IPO management has to be done better in order to limit the mishaps that occurred in, say, the Facebook IPO. It has to be a more coherent process, perhaps with more conservatism on the pricing side.


Q: Why are social media attractive on the markets?

A: We see a broadening and widening of the customer base. Public markets are susceptible to trends, to public interest. The stories behind the IPOs are attractive. We have a young customer base, a loyal customer base.

Q: How are they valued? What is the product?

A: Earnings, cashflow, and projections, for instance. The service is networking among people; the product, on the whole, is advertising. Some applications are paid for… some models are more commercially viable than others. Investors have those doubts too… looking for a clear path to commercial success. LinkedIn is valued highly for that reason… maybe too highly…

Q: Is LinkedIn valued so highly because it has a more traditional model, almost a recognisable recruitment model?

A: It has high quality users, graduates, professionals, and high quality networks that are particularly of interest to investors.

Q: Has the perception of Facebook and transparency changed since the IPO?

A: Arguably it is more transparent now, since the offering. But there are still questions about commercialising and monetising. They have come a long way, though.

Pro-Am Writing: Towards a framework for new media influence on Old Journalism – Andrew Duffy, Nanyang Technological University, Singapore 

I started here by looking at travel writing: the professional travel writers – often armed with trusty notebooks – and the amateur travel bloggers, usually armed with laptops. And you might ask whether this is a serious area of study… but media frameworks influence public perception and reflect public opinion (Curran 2002), media shapes world views and provides shared symbols and language (Keller 2002), and the media can change perceptions and behavioural intentions (Hsu and Song 2013).

But let’s turn that around… tourists are now an important media source for the public (Duffy 2014). The ambulant traveller can tell the travel writer where to go and what to do when they get there. So I came up with three research questions on user generated sites. So far we have looked at 18 travel journalism students from the UK, Finland, Singapore, China and Taiwan. They planned their articles before a travel-writing practicum in Istanbul. I did a survey and one-hour interviews on their experiences.

The first thing they did was to look at background information. And I was surprised at how very vague they were – “about Istanbul”, “Turkish culture”, etc. They looked up sites they had heard about: “Blue Mosque”, “Hagia Sofia”. They also did specific travel article searches, for “Traditional Turkish Hamam” or “Istanbul moustache transplant”. Everything coming back was mainstream, and they wanted to be different… so finally they searched for off-the-beaten-track information.

Now, they mostly started with Wikipedia/Wikitravel. They were a bit embarrassed and nervous about them, but as a basic starting point it was worth doing. None mentioned the collaborative nature of those sites. Then they went to the Lonely Planet forum and TripAdvisor – seen as trusted, but often showing only the obvious stuff. And a smaller percentage of students went to blogs by travellers and residents – seen as the insider’s viewpoint, authentic… but also seen as rather boring because they were about the everyday. A dichotomy there.

Motivations for using UGC… for Wikipedia, students noted “anybody could write it, hell even I could write it”, as if that were the last word in dubious authorship. TripAdvisor was seen as a method of verification. Often people start with the top 10… or, if they want to be obscure, they look down at number 53 or something more obscure.

For blogs it was important for the reader to decide whether or not the author was on the same wavelength, had the same personality as them. They make a quick assessment. But blogs were seen as giving you new information you don’t find anywhere else. And at the end I had to prod them about whether they used Facebook, Tumblr, Twitter… we are told they are digital natives, told to use Twitter or Instagram… but they go there as a last resort. One said that Twitter might be up to date but wasn’t sure how to search it… another used the Facebook page for a club to find out about it… but it surprised me how grudgingly they used these spaces.

UGC is sought for alternative travel ideas – off the beaten track, real life as it is lived, an authentic traveller experience. Instead it delivers mainstream attractions (no social reporting), reconfirms existing knowledge first, and offers an authentic tourist experience. This desire really narrowed their trips down to really mainstream activities.

So I’m trying to put together a framework for future studies: good-practice professional journalism values combined with their UGC equivalents. So for news, “impact on many” would equate to “must see, must do”. All of these students researched using Google; no one questioned the results on the front page. Four went to a page sponsored by a hotel, and none of them noticed. They were not aware of SEO. These are communications students; they should know better. So what is the influence of UGC on travel journalism? Well, many of these factors add up to the popular, the mainstream, recentness, and a focus on personal experience… and that limits how people see the outside world. Self trumps destination – we are producing a generation of travellers that place themselves above their destination. The classic news value in journalism is objectivity, but subjective experience is the outcome of this “authenticity as a gold standard” factor in UGC. Quite an interesting aspect.

It pushes towards mainstream activities and replicates mainstream media conventions – research on NYT travel pictures sent in by users found both those replicating conventions and the jokey tropes, for instance. A real focus on tourist activities, on personal experience and the self, on the mainstream rather than the independent. In theory the internet should be freeing us from monolithic media makers, but the opposite seems to be happening. And, as I mentioned, they didn’t really discuss the effect of SEO and how that pointed them towards the mainstream. I came across a great tool that forces you to page 11 of Google – to see the soft white underbelly of the internet.

So they want to blaze a trail when they actually follow in others’ footsteps.

Q: Are those journalistic frameworks still relevant? Is that idea of objectivity still relevant when mainstream media is moving to subjective forms, columnists etc.? Objectivity isn’t seen in the same way now; that may be influenced by social media but it is much bigger than that.

A: I did think I’d be asked about that. These values are from textbooks, long-standing values. These students may want to end up being columnists, but they have to do that objective stuff, that socialisation, to enter the media, to reach that point.

Comment: Reminds me of Ira Glass’s concept of “The Gap” – the idea that you ape a style you like but have great difficulty creating at that level until you have had a lot of practice.

Q: Could the lack of use of Twitter be about students seeing Twitter as a messaging service? My students certainly see it that way.

A: I don’t think so; as journalism students they see Twitter as an information source, but they didn’t search it.

Q: Why did students trust TripAdvisor, and use it in preference to Booking.com or similar?

A: Partly because it is so well known; it also appeared very high up in the search results. But they were embarrassed about using it, like Wikipedia, as it is created by amateurs. They were much more comfortable looking at journalistic sources and newspapers – especially British newspapers, which appear high in search results.

Q: Let’s flip this round a bit… what would you do as a travel site to be used more?

A: If I were doing well-paid consultancy for travel websites, I would tell them to use the first person. The students saw the third person as promotional in tone. They much prefer the first person: “If they did it, then I could do it too”. Why can’t you write in the first person, in a blog style, on the Istanbul website? They need to break away from the third person.

Q: Doesn’t that link back to the earlier point about the objective versus the subjective voice? They prefer the subjective account.

A: This was the revelation to me… that subjectivity is what they look for, that being the internet way… the impact on journalism is likely to be significant.

David Gurteen – Towards Smarter Socially Mediated Conversations

Let me take you back 12 years… I used to go to talks in London on knowledge management, and afterwards we would go to the pub to chat. Some talks were good but many were not so good… and on those nights the pub was the best bit; that was where the real connections and learning took place. And so I decided to set up Gurteen Knowledge Cafes, and that’s what I do now – I travel the world arranging these sorts of discussion events. People started to ask me about having those conversations online, but I was focused on face-to-face engagement. When I was asked to speak here, though, I thought about what I would really see as being important to creating the right sort of online environment for good conversations.

For those of you familiar with the cafe, it’s a really simple process… a way of getting people together for conversation on a topic of mutual interest. It’s a very open format. Typically a speaker makes a short presentation and poses a short question. People gather in groups for conversation. And ideally we come together at the end to share those conversations, what we have learned from them.

More by accident than anything, I have ended up running these cafes across the world – in the UK, Spain, Norway, Russia, the USA, etc. I could share many, many stories. I ran a Knowledge Sharing Workshop in Jakarta in 2007, but I’d run one the day before at the Dutch Embassy. I realised that English language skills were not great, and that meant people dried up; the conversations were not going to work. So I realised that I didn’t need to talk – I let the group engage in their own language, and my host indicated how it was going. I learned the importance of allowing people to converse in their own tongue. Even when you know a foreign language well, it can be hard to have a fluid chat.

And a year later in Malaysia, in 2008, I ran a cafe as part of an IBM workshop. What I find is that at the end of the first conversation it’s good to move people to other groups… I did that here and nobody moved at all… my immediate reaction was curiosity… my host, who was Chinese, said “don’t worry, I know the culture! I’ll make them move for you!”. So I said to go ahead. He told them to stand up, and then asked a few to change tables. And no one moved. And someone there said they didn’t want to move, and that I had said that I wouldn’t make them do anything. They had all arrived in their own groups, and they didn’t want to leave their comfort zones. People are not always relaxed about talking to strangers. In future I’ll try asking everyone to move…

In Thailand a week later (2008) I had a big sign-up, but a small group arrived; the rest wanted to watch and were doing so via a webcam. And when it came to conversations, the Americans, Brits, Aussies and Indians joined in big conversations. Thai people engaged in small groups but not in that big group. A real lesson there for me about the comfort of speaking outside your own group versus inside it.

And the most moving one for me: in Abu Dhabi in 2011 I ran a session with Arab men and Arab women. They weren’t really mixing, but I asked them to mix a bit. At the end one of the men came up to me, quite agitated, quite upset. He said that, until that day, he had only ever spoken to four women: his mum, his wife, his nieces. And I realised how much we don’t know about each others’ histories and backgrounds.

There are so many stories, I’ve boiled it down to key barriers:

  • Poor English – the quality of, and confidence in, English
  • Fear of loss of face, of looking foolish, of other dominant people
  • Fear of causing someone else to lose face, particularly people in authority
  • Deference to authority – I saw two people at a workshop in Singapore not engaging, but the next day they were hugely involved; the difference was that the CEO wasn’t there the second day, which meant no risk of looking foolish or making him look foolish
  • Humility – fear that the individual doesn’t have anything to add, to say of worth
  • Culture – a Chinese woman I met in Norway talked about education as being about sitting quietly, sitting on hands; the teacher talked at them and they could never ask questions, and they were taught never, ever to question superiors. She knew that that wasn’t what she wanted to do, but it was ingrained.

These traits are dominant in South East Asian cultures but also exist in our Western cultures.

These last few years, as I’ve become more interested in conversation, I’ve started to investigate the research on conversation, and I just want to draw out some highlights. In “Why is conversation so easy?” (Garrod and Pickering) the researchers find that humans have evolved for conversation, rather than monologue. On the influence of group size: above about five people it is no longer a conversation but a series of mini presentations or monologues (Fay, Garrod and Carletta) – small groups engage, larger groups tend not to. “Friends (and sometimes enemies) with cognitive benefits” (Ybarra, Winkielman, Yeh, Burnstein, Kavanagh) – I’d never thought of that before, but I have found that having some friendly ice-breaking chat at the beginning of a session really changes the energy. And on social sensitivity, Williams Woolley, Chabris, Pentland, Hashmi and Malone find that groups where one person dominates are less collectively intelligent than groups where the conversational turns are more equally distributed.

So this and other research I have read has made the cafe evolve… and I have established principles that underlie any good conversation:

  • Relaxed, non-threatening, open conversation (close to a pub or cafe conversation)
  • Everyone equal; no table leaders or report back
  • No one forced to do anything  – it’s ok to just listen
  • Trust people to talk about what is important – it’s ok to go off-topic; for a conversation to be engaging it has to have a flow of its own.
  • No capture of outcomes – outcomes are what people take away in their heads.

So, the question I have for myself, that I’d like to share with you, is: what does this mean for online discussion forums and a potential virtual knowledge cafe? How would I do it, given all of those issues? Now, I may not be so bright here, but there are many issues…

English tends to be the dominant language. Large numbers of people. Open to anyone. No idea who is in the forum. You do not know the people. No idea of the authority figures. No idea of the trolls. Everything is recorded. Maybe it is not surprising that we have the 90:9:1 law (90% lurk/read; 9% occasionally engage; 1% are really active users) – perhaps not surprising given the experiences from the conversation sessions I talked about before.

And then we think about the nature of many forum conversations: posts tend to be monologues; posts often very lengthy; grandstanding; responses carefully thought through; more debate/argument than dialogue; trolls and “intellectual trolls” thrive; easy to misunderstand someone; not easy to correct misunderstandings.

So, what’s the solution?

I don’t have the answers but I have some ideas. I think we need safe spaces where people can speak in their own language. I think you do need to have some conversations that are peer-only. I think you need to know who is in the room – make it clear who is in the forum. The ability to edit or delete posts, to get rid of something that goes too far. Do not store threads for long. Small groups – of 3 or 4 people – and I don’t see anything on the web that does that. Permission to join conversations. Limit the size of posts – we use Twitter to some extent, though it is not a conversational tool. Perhaps limiting a forum to 500 characters would work. Real-time discussions may make things more useful.

So… Randomised Coffee Trials…

In large organisations it is not easy for people to connect and build relationships. RCTs pair people at random for coffee once a week. The Bank of England connects 4 people and calls it “Coffee Fours”. SABMiller have pub chats! Lots of companies and organisations, like NESTA, are trying this. There is also telepresence as an option – I would like to try that out at my cafe some time.

Before I finish I want to ask you a question… How do you think we could improve engagement in online forums and how do we improve the quality of those conversations?


Q: A comment and a feeling of camaraderie: working in India I have faced the same issues of hierarchy and fear of loss of face. At first I tried to impose my way of doing things. But when I let go and let them do it their way, that was a huge change.

A: The real issue here is how we do this online… but we don’t see the dark side of who is online.

Q: We had a quick conversation and what we came up with is that the visual cue is so important. Online you need some sort of visual cue to connect to the other person.

A: Yes, and these telepresence machines seem the best option thus far.

Q: I have a solution. We teach online, we have students all over the world. We use WebEx and Blackboard. We share a question early in the week and students can then post on forums, or can use that real time chat online, students then roll with it, people do chime in. Small groups of no more than 10.

A: I’ll try and chat with you later.

Q: I’m glad that Pat has mentioned live web conferencing – I was going to mention tools like Skype or Google+ Hangouts. But I also wanted to raise the issue of text and the permanence of text. If your main format for conversation is textual then it carries more permanence; it is less ephemeral. So I recognise that barrier, but I think that barrier may be shifting. It seems odd for text to be deemed more permanent than the chat in the pub – which you certainly can’t go back and delete or correct.

A: I do also do a lot of conversing via text but it is a major barrier for many people, the idea that what they say could be quoted back verbatim to them or held against them.

Twitter based Analysis of Public, Fine-grained emotional reactions to Significant Events – Dr Martin Sykora

So I’m talking today about some research funded by the EPSRC. I will be talking about the background, including the software we developed in-house for this work; I will say a bit about the data analysis we have done; and I will also talk about some of our future work.

In terms of significance, I wanted to talk about the significance of social media, which has been really interesting over the last two years as it has been taken up. In Meier 2011 we see an Egyptian activist talking about the use of social media to change the world. We also see Twitter as a way to poll public opinion (O’Connor et al 2010; Tumasjan et al 2010), and that can be a real issue as well. And we do see social media breaking the news – not always, but it is genuinely disruptive. There is also big commercial interest in social media – companies like Attensity, Crimson Hexagon, Sysomos, SocialRadar, Radian6, etc. All that attention is appealing to commercial companies. We also see the crisis mapping communities interested in social media, and the security services monitoring it (Sykora 2013).

Social media streams allow us to observe a large number of spontaneous real-time interactions and varied expression of opinion, often fleeting and private (Miller 2011). And unprecedented opportunity to study human communication. And we wanted to study a range of emotions and a range of heterogeneous emotional measures.

So we have created software called EMOTIVE. The emotions we used were Ekman’s six basic emotions (anger, disgust, fear, etc.) as well as shame. And we decided not to use lexicons but instead to build an ontology – a map of words, so richer than a list of words. Basically, we specified what emotional terms and expressions people could use with the basic emotions. We allowed for intensifiers, for negation, etc. We have over 800 words and phrases, with substring matching as well. The system analyses around 2000 tweets per second.

We built the ontology with an English Language and Literature PhD-level research associate, with training in linguistics and discourse analysis, during a three-month window. They looked at 600MB of cleaned tweets across 63 different UK-specific topic/search-term datasets. We focused on explicit declarations of emotion, and we tested and reviewed the ontology. And we built a Natural Language Processing pipeline. This starts with data pulled in from the Twitter API: we had terms we wanted to monitor live, so we collected new tweets repeatedly, polling regularly. For most events we caught most tweets; for some, the rate limiting will have meant missed tweets.

So, the pipeline included checking whether a word is a verb or a noun – helpful for understanding the meaning of expressions. We used a tree structure often used in spell checkers to quickly match words and phrases… which means it is very fast! And you can use this to spit out the appropriate basic emotions. We checked this tool against manual and other techniques; it performed to good or excellent accuracy. So, we had a system, and we decided to run it across some events. We used the Twitter Search REST API 1.1 and continuously retrieved during an event. Often the search term or hashtag was chosen to find a good data set, often trending. This was about being on top of the news and initiating the process – e.g. Nelson Mandela’s death – asking the system to gather tweets and being careful to do that in the right way (e.g. putting names in quotes).
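The matching step described above – emotion terms plus intensifiers and negation – can be sketched roughly as follows. This is a toy illustration with invented terms and weights, not EMOTIVE’s actual ontology or code (which holds over 800 words and phrases and uses a spell-checker-style tree):

```python
# Minimal sketch of lexicon-style emotion matching with intensifiers and
# negation. Terms and weights here are invented for illustration only.
EMOTION_TERMS = {
    "furious": ("anger", 0.9),
    "angry": ("anger", 0.6),
    "scared": ("fear", 0.7),
    "ashamed": ("shame", 0.8),
    "happy": ("happiness", 0.6),
}
INTENSIFIERS = {"very": 1.3, "so": 1.2, "really": 1.25}
NEGATIONS = {"not", "never", "no"}

def score_tweet(text):
    """Return {emotion: intensity} for explicit emotion terms in a tweet."""
    tokens = text.lower().split()
    scores = {}
    for i, tok in enumerate(tokens):
        if tok in EMOTION_TERMS:
            emotion, base = EMOTION_TERMS[tok]
            window = tokens[max(0, i - 2):i]      # look back two tokens
            if any(w in NEGATIONS for w in window):
                continue                          # negated: drop the emotion
            for w in window:
                base *= INTENSIFIERS.get(w, 1.0)  # boost for intensifiers
            scores[emotion] = max(scores.get(emotion, 0.0), min(base, 1.0))
    return scores

print(score_tweet("I am so angry about this"))  # anger, boosted by "so"
print(score_tweet("I am not scared at all"))    # → {}
```

A production system, as the speaker notes, would use a prefix tree rather than token-by-token dictionary lookups so that multi-word phrases and substrings can be matched at thousands of tweets per second.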

We did this for 25 distinct events, over 1.5 million tweets, and there are 28 separate datasets from this (http://emotive.lboro.ac.uk/resources/ECSM2014/). Not all tweets have an emotion though; about 12% do, with a standard deviation of 9%. The five most emotional datasets related to particular news stories – notably the nurse who committed suicide after a hoax call from an Australian radio station, mainly shame. The death of Daniel Pelka also drew very emotional tweets. But something more positive – Chinese New Year – did trigger lots of emotions. And some hashtags are more emotional than others, even around the same event (#september11 anniversary tweets were less emotional than those tagged #twintowers).

Around the Woolwich incident we see really interesting ranges of emotions – anger at Anjem Choudary after his appearance on Newsnight; sadness, disgust and surprise around the incident itself.

Looking at the September 11th anniversary in 2013 we had a range of sadness and shock. But there were a few odd blips of happiness – some casually mocking, some claiming to be from terrorists. And then you have some odd tweets – more quirky mixes of surprise or disgust.

And then we have a graph of emotions across a number of events – #JamesGandolfini, Ariel Sharon, Daniel Pelka, Nelson Mandela. Mandela was ill for some time, yet surprise was a strong emotion around his death. And there was a reasonably high level of happiness around Ariel Sharon, for instance.

But I want to go back to the death of that nurse. We have a lot of sadness, of shame, of disgust. The tweets associated with her personally were high for sadness and shame. For the radio station you see happiness highish – use of sarcasm there, but not for her personally, because that didn’t seem appropriate.

So there were some basic correlations… we saw happiness-sadness negatively correlated (-.614). Anger-confusion are correlated (.444), anger-disgust (.370), etc. It is also interesting to see how these emotions correlate with mentions in tweets (-.402) – interesting, but based on a small data set. So we want to analyse a much bigger data set.
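The figures quoted are Pearson correlations between per-event emotion scores. As a rough illustration of the calculation – with invented numbers, not the study’s data – it could be computed like this:

```python
import numpy as np

# Toy per-event happiness and sadness proportions (invented, not the
# EMOTIVE datasets). Each position is one event.
happiness = np.array([0.30, 0.05, 0.02, 0.25])
sadness   = np.array([0.05, 0.40, 0.45, 0.10])

# Pearson correlation coefficient, the statistic behind the reported
# happiness-sadness figure of -.614.
r = np.corrcoef(happiness, sadness)[0, 1]
print(round(r, 3))  # strongly negative for this toy data
```

With only 25–28 events, as the speaker notes, such correlations are fragile, which is why a larger data set is the stated next step.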

The other thing we did was clustering, looking for similarities between events based purely on emotional responses. We saw the bank holiday and Chinese New Year cluster together… and some less obvious connections – Daniel Pelka, Woolwich, horse meat and the G8 summit. Quite interesting emotional clusters here.
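Clustering events on emotional response alone amounts to comparing per-event emotion vectors. A minimal sketch, with invented profiles rather than the study’s data, might look like:

```python
import numpy as np

# Toy emotion profiles per event (invented numbers).
# Columns: happiness, sadness, anger.
events = ["bank_holiday", "chinese_new_year", "tragedy_a", "tragedy_b"]
profiles = np.array([
    [0.35, 0.05, 0.05],
    [0.30, 0.08, 0.04],
    [0.03, 0.45, 0.25],
    [0.05, 0.40, 0.30],
])

# Pairwise Euclidean distances between emotion signatures; events with
# similar emotional responses end up nearest each other.
dist = np.linalg.norm(profiles[:, None, :] - profiles[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)   # ignore self-distance
nearest = dist.argmin(axis=1)
for ev, nb in zip(events, nearest):
    print(ev, "->", events[nb])
```

A full analysis would feed such a distance matrix into a hierarchical or k-means clusterer; the point of the sketch is just that “celebration” events and “tragedy” events group by their emotion vectors, as reported.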

So, we have this tool. We want to look at racism, for instance. Our future work will need more data although, as far as we know, this is the biggest study looking at emotions. And we want to look at emotions over time and how they change.


Q: To follow up on the question on timings of events and picking up trends… different times of day seem to change engagement online… people may not engage when they are at work.

A: A good point. We did look at volume of tweets over time, so for instance for the September 11th anniversary you see activity all day, but during daytime in the US you see peaks. But that was a day and a bit only. Prism and the NSA ran over a month, and Mandela tweets five days after his death were still quite active. When we do time series analysis we will focus more on that.

Q: The reason I mention it is because you want the best data you can for when engagement is high.

A: It could affect the outcome, but we had the issue of not that much data in some cases, so it was less of an issue. For us it was just data collection. It could be important in other studies.

Q: What did you do with tweets with more than one emotion in them?

A: We took it case by case, so we assumed you are expressing both…

Q: Is there a range for the emotion?

A: Like a score? Yes, the literature is there. There is a range for each expression, intensifiers etc., and we used that to work out the scoring of intensity. And we have stronger and less strong words.

Q: But if you have one word showing both fear and disgust together?

A: Independent scores, yes, for both.

Using Twitter for What? – Lemi Baruh, Koc University, Turkey

This is a very small study on how people used Twitter – or reported using Twitter – during the Gezi protests. It was part of the Cosmic project, which looks at social media in crisis situations.

A bit of background: Turkey is ranked 154th out of 179 countries in terms of press freedom according to 2013 figures – and it has gotten worse in the last year. Critics argue that the Turkish media companies have mainly changed hands in the last 7 years under the influence of the ruling party. At the end of May 2013 a relatively small sit-in protest against the removal of trees for a redevelopment project in Taksim Square was violently evicted. Protests spread around Turkey. The agenda evolved to take in the media and media bias (e.g. Turkish CNN ran a penguin documentary during the protest), often expressed via social media.

So we did a quick study with an online survey administered via Qualtrics, conducted between June 10th and June 29th – 10 days after the protest started, as it took 9 days for ethics approval. We sent email invites and shared it via social media. It took 15 minutes to complete, and of the 890 who started the survey, 230 completed it: 64% female, mean age 28, 54% indicating they were students at a higher education institution, internet use of 4 hours per day, politically active. In many ways this group did not represent Turkey, or even the protestors, but it gives us some indications and insights.

We asked the sample how they got news. Before the protests they mainly used the websites of newspapers, social media and some TV. But after the protests began there was a huge drop in use of newspaper websites and a big rise in social media usage. They didn’t necessarily trust it… but they needed up-to-date information, and had a desire for first-hand information. They would reply to an email or a tweet, wanting to verify what was really happening. About 20% of respondents said that mass media did not cover the protests; another 16% said the mass media were biased. These individuals talked about filtering and finding information themselves. For some, social media was about getting the feeling of participation…

And when we asked about activities performed on Twitter during the protests, they reported that they frequently read tweets from accounts they follow; reading tweets from accounts they do not follow, retweeting and tweeting were done less often. And we saw a lot of people undertaking information verification: they verify with friends on location, they check with multiple sources online, and they check with mass media/news sites – despite saying that they did not trust the mass media. Some people did searches for information; some did direct background checks.

So, in terms of the results: we had respondents indicate the extent to which they would place their use of Twitter during the Gezi protests on a continuum from “voicing your opinions” to “sharing news/updates”.

In analysing the data we identified four types of Twitter users. Close to half were “Update Hubs” – getting information in and sharing it onwards, with minimal opinion added. Then we had about 22% “Update Seekers” – using Twitter to read news/updates and to learn about what others have shared. Then “Opinion Seekers” (19%), seeking opinions. The remaining “Voice Makers” group (around 17%) were the actual opinion makers.
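The segmentation in the paper came from a two-step cluster analysis. Purely as a toy illustration of how segments can fall out of continuum scores – invented data, and a simple k-means rather than the paper’s actual method – one could do something like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy respondent scores on two self-reported axes (invented data):
# column 0 = "share news/updates" orientation, column 1 = "voice opinions".
scores = np.vstack([
    rng.normal([0.8, 0.2], 0.05, (40, 2)),   # update-hub-like respondents
    rng.normal([0.2, 0.8], 0.05, (20, 2)),   # voice-maker-like respondents
])

def kmeans_two(data, iters=20):
    """Tiny two-cluster k-means, seeded with the first and last points."""
    centroids = data[[0, -1]].astype(float)
    for _ in range(iters):
        # assign each respondent to the nearer centroid...
        labels = np.linalg.norm(data[:, None] - centroids[None],
                                axis=-1).argmin(axis=1)
        # ...then move each centroid to the mean of its segment
        centroids = np.array([data[labels == j].mean(axis=0)
                              for j in range(2)])
    return labels

labels = kmeans_two(scores)
print(np.bincount(labels))  # two segments, sized like the toy groups above
```

The real study distinguished four segments on more than two dimensions, but the mechanic is the same: respondents cluster by where they sit on the opinion/update continuum.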

We compared these segments around uses and gratifications, focusing on surveillance, self-expression, relationship maintenance, connectivity. The opinion makers didn’t just use Twitter to share their opinions, but also to build their networks. And in terms of types of activities we saw a few significant differences. We saw most retweets from Update Hubs. Replying to tweets much higher in Voice Makers group.

The Opinion Seekers had significantly lower trust in information from Twitter than members of the other segments, interested in information verification, consciously checking information through multiple sources before resharing information. Voice Makers are less likely to cross check.

Conclusion: well, the main drivers of Twitter use here were mistrust in mainstream media, the desire for access to direct information, and willingness to spread information and voice opinions. A preference for Twitter did not necessarily mean that users trusted social media as a source of information; cross-checking across different social media was commonplace.

And the four segments, whilst all motivated to get information, had quite different preferences and characters.

And finally I would like to acknowledge my co-author Hayley Watson at Trilateral Research and Consulting in the UK, and the European Union for funding this research.


Q: I know your survey sample was skewed but how representative do you generally think that those who were tweeting about these protests were, compared to the wider Turkish population, or those interested in those protests?

A: The people actively tweeting during the event were like this skewed sample… but after the event the pro-government side started tweeting much more actively. It’s reported that the current ruling party has actually recruited thousands of people to tweet on their behalf… we reportedly have professional trolls for the party… It has shifted post-event, and now both sides are likely to be tweeting.

Q: Did you include data from those who did not complete your survey?

A: No, many of our respondents stopped when we asked about political views… they were happy to talk about social media but not about their politics.

Q: How did you do segmentation?

A: We used two-step cluster analysis rather than hierarchical clustering – the latter I tried first, but it didn’t work well for this data. We also tried a random forest decision tree with the data – and decided not to predict anything!
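As a rough illustration of this kind of segmentation step – not the authors’ actual pipeline, which used SPSS-style two-step clustering; the synthetic data, scores and choice of k=4 here are my own assumptions – clustering numerically coded survey responses might look like this:

```python
# Sketch of segmenting survey respondents into clusters.
# Synthetic data and k=4 are illustrative assumptions only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pretend each row is a respondent: [surveillance, self-expression,
# relationship maintenance, connectivity] scores on a 1-5 scale.
responses = rng.integers(1, 6, size=(200, 4)).astype(float)

scaled = StandardScaler().fit_transform(responses)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)

segments = kmeans.labels_  # one segment label per respondent
print(sorted(set(segments)))
```

Each respondent then carries a segment label that can be cross-tabulated against activities like retweeting or replying, much as the four segments above were compared.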

Q: Looking at your title… as a marketer I am much more interested in the segmentation, so why are you focusing on particular controversial events?

A: The reason this is happening is that this is a project funded by the European Union; we saw an opportunity to gather data for our research on crisis. But on the other hand we have just completed collection of data on American audiences on more general Twitter use, using segmentation analysis. We did some work on privacy preferences which was quite revealing.

A Case Study of the Impact of Instructional Design on Blogging and Term Networks in a Teacher-Training Course – Minoru Nakayama, Tokyo Institute of Technology, Japan

Social media can be useful in university courses – online discussion using blogs, wikis, discussion boards etc. can allow discussion and sharing of knowledge about a given concept with classmates, and promote critical thinking and interactive learning (Leh et al 2012, 2013). They are good for fostering class discussion, with the attractive features of social media technology, sharing and collaborative filtering (Educause 2005). But the effectiveness may depend on the type of activity, and that’s where instructional design comes in. And in terms of learning topics it can sometimes be useful to use a concept map.

So, how do you use the concept mapping idea in online discussion? Well, by mapping discussion content (postings) to the concept map. Lexical analysis can illustrate relationships in discussion texts and individual term networks (Rabbany et al 2012). So we undertook a small case study in an online course, looking at whether the online discussion could be illustrated using lexical graph visualisation techniques, and what the features of this were.

The course is fully online, at graduate level, on “Instructional Technology”, which looks at how to design an online course. There is a series of assignments for the final projects which include discussion boards and blogs. They have specific blogging tasks which include a content task – a lesson plan for an online course (to be posted to their blog); critique – every participant wrote a critique of two peers’ content and was required to address good/strong points; and the third task was suggestions – every participant made suggestions for peers.

So in the case of critiques, the participants were required to address only good/strong points and suggestions. There were options for more controlled (critique) and open-ended (suggestions) entries here.

In terms of participants, only five students gave consent for us to use their postings, so a fairly small sample covering several different blog types. So, with that data, we undertook lexical analysis and mapping. We used TreeTagger to extract nouns, and extracted consecutive nouns as 2-grams. The co-occurrence relationships can be summarised in an adjacency matrix, and that can then be illustrated as a directed graph. So you can see a score for each noun indicating its points of connection, and that can be graphed… most nouns have some form of connection. We also gathered this type of map in order to analyse the texts, and we can then look for points of connection, density of connections etc.
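To make the 2-gram and adjacency-matrix step concrete, here is a minimal sketch – my own illustration, not the authors’ code; the tagged sample sentence is invented, standing in for TreeTagger output. Adjacent noun pairs become directed edges, accumulated in an adjacency matrix held as a nested dict:

```python
from collections import defaultdict

# POS-tagged tokens (word, tag) - an invented sample standing in
# for TreeTagger output; tags starting with "NN" mark nouns.
tagged = [("the", "DT"), ("lesson", "NN"), ("plan", "NN"),
          ("uses", "VBZ"), ("discussion", "NN"), ("board", "NN"),
          ("posts", "NN")]

# Consecutive nouns (adjacent tokens that are both nouns) as 2-grams.
bigrams = [(tagged[i][0], tagged[i + 1][0])
           for i in range(len(tagged) - 1)
           if tagged[i][1].startswith("NN")
           and tagged[i + 1][1].startswith("NN")]

# Adjacency matrix as a nested dict: adj[a][b] = co-occurrence count.
adj = defaultdict(lambda: defaultdict(int))
for a, b in bigrams:
    adj[a][b] += 1

# Out-degree per noun = its points of connection in the directed graph.
out_degree = {n: sum(adj[n].values()) for n in adj}
print(bigrams)
print(out_degree)
```

The resulting adjacency structure is exactly what gets drawn as the directed term graph, and the degree scores are the per-noun “points of connection” described above.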

So, looking at the graphs we can compare the number of words and number of 2-grams for both the critiques and suggestions, looking for similarities, complexities, etc., and differences between critiques and suggestions. When you look at closeness you see a real scattering across the critiques and suggestions. Most terms were centralised in the critiques; by contrast, in the suggestions there is a much less central pattern to the use of words.
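One standard way to quantify this “centralised versus scattered” contrast is Freeman degree centralization: how much the best-connected term dominates the rest of the graph. A minimal sketch – the two toy degree sequences are my own invention, not data from the study:

```python
def degree_centralization(degrees):
    """Freeman degree centralization for an undirected graph,
    given its degree sequence: 1.0 for a perfect star (one hub
    connected to everything), lower for evenly spread graphs."""
    n = len(degrees)
    d_max = max(degrees)
    return sum(d_max - d for d in degrees) / ((n - 1) * (n - 2))

# Star on 5 nodes: one hub term linked to all others
# (like terms centralised around a core concept in critiques)...
star = [4, 1, 1, 1, 1]
# ...versus a path on 5 nodes: terms chained with no dominant hub
# (like the more scattered word use in suggestions).
path = [1, 2, 2, 2, 1]

print(degree_centralization(star))  # 1.0
print(degree_centralization(path))
```

A higher score for the critique graphs than the suggestion graphs would match the pattern reported here, with critiques centralised and suggestions more dispersed.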

So in terms of understanding the blog communication, that allows us to build rubrics with specific criteria for particular activities. The lexical analysis can be used to directly evaluate posts – the concept map can represent term networks, and some features of the postings are thus measured. The analysis can be applied to the course design – so we can compare the appropriateness of the online discussion design, the controlled versus open-ended tasks. And some differences in centralisation of term mapping were observed.


Q: That’s a nice objective measure of different learning activities. Were all posts analysed for lexical analysis. Did you differentiate between the key posts and the social conversations. That conclusion that the closed tasks led to more focused discussion is good, it’s plausible, but may be missing sociable stuff.

A: This is a fully online course… students will discuss things beyond the course. We could analyse all of those posts but we didn’t in this work. The blog posts can also be evaluated in other ways and the text analysis compared.

Q: These were graduate students, in what kind of class?

A: This is part of a teacher training course.

Q: I wonder if that makes a difference in terms of the types of posting being done, would it make a difference in your results?

A: They are students in instructional design. They may use social media that we cannot measure. But using the blog posts for the Instructional Design course gives us a point of focus to analyse.

A Massive Open Online Courses Odyssey: A confessional account – Alejandro Ramirez, Carleton University, Ottawa, Canada

Firstly thank you for being here, because a confessional account requires an audience! And my title has both aspects of learning… Odyssey – the original way to transmit knowledge – but also MOOCs! The most modern of learning.

I came to think about technology in education when we were redesigning the curriculum in my school, and we decided to start using social media tools as students needed competence in these areas. And I was about to go on sabbatical when MOOCs exploded! I thought there was a lot of hype taking place, but there was always the worry that people might be forced to use MOOCs. So I decided to spend my sabbatical researching MOOCs, and I thought that I should start by learning more about distance learning, and thinking about the context of MOOCs.

MOOCs are not a revolution, they’re more of an evolution. We now have students very savvy with technology – they engage all day long… or they could be engaging with technology without even thinking of it as technology. So it’s not a revolution… and it takes a long period of time for things to change. Technology is more reliable today and that gives us competence to use it properly. If we have MOOCs today it’s because we had Wedemeyer, who came up with the concept of distance learning. And that concept is about transposing what we do in these rooms today into a distance setting – engaging in conversations with each other, learning things, asking questions; how can technology enable that to occur? That’s the promise of distance education.

I teach a course at first year and a course at fourth year. You see a real change in the students. In first year you see them think the university will be the answer to all the questions that they have, and at the end they realise that it is up to them to make the change, to learn, to take those skills into the future with them. Note-taking in the 21st century is taking a picture on an iPhone. It’s about remembering the content; things are changing, and they expect me to change too.

So I looked at various aspects of the research and decided I could use ethnography. van Maanen (2011) advocates immersion, so I registered for a MOOC as a student. That was the best way to understand what the experience is like. And I needed to keep track of the observer in me so that I could track the process; I wanted to be more aware of the process of learning using technology.

So, these big MOOCs were offered by major universities to reach out to wider audiences. You can view a list of courses, you have to create an account, and that’s it, you have registered. At first I was a bit skeptical that the Massive part might be an issue. At the end of the day I knew I would sit alone in my room doing the work. The Online part isn’t different from the Open University or distance learning, so I wanted to focus on the Massive. What were my assumptions about what would happen in this course? I did the process, I did the homework, I viewed the lectures… and I recorded what had changed, how my expectations had changed. So, the first course was offered in fall of 2012, running September to November. It was offered by UC Berkeley via edX. That course was a foundation in Artificial Intelligence, with hands-on experience implementing AI algorithms in a video-game themed context. It included coding in Python, which I hadn’t done before; I learned that online in order to do the course.

On day one I met the impact of the Massive factor. I had a question, and there are lots of names and TAs, but there was no email address for them. There is a forum. I had no answer to my question. I still haven’t had an answer. That could have caused me to abandon the course; many did. 100k were registered. Less than 10% would finish the course. And of those only 5% got credit for it and passed. But that is the model. And we need to understand why, and what the expectations are for that…

So, to see what happens elsewhere I registered in a course on data science, offered by Washington University via Coursera. I started to engage but the same thing happened. Again you cannot ask a direct question of anyone, you have to use the forum.

Whilst I waited for that course to take place I was invited to take an ICT in Education course in Spring 2013, offered by UNAM via Coursera, in Spanish. But it was more or less the same thing, and there was more or less the same issue. And I needed to create two email accounts in order to be able to take part. I spent most of my day going through videos… these are really very good. Universities have learned from the YouTube generation and from TED. There are subtitles, you can pause videos. They are spiced up with some tests to make sure you are listening. Most questions and assignments can be done only by watching the videos. And one of the things they have learned is that only the students who already have a degree actually watch the videos; others skip them. Not great. BUT it is computer-mediated teaching where the facility is within the tools that we use. We have forgotten that we can use technology to really engage the students. If we were able to capture the engagement of the students and reflect that back, see patterns, maybe they could actually learn from that. Right now that is not available.

So in terms of my conclusions, I see that computer-mediated learning has had some missed opportunities. The computer is a means to an end… when it works… you want a conversation; the computer is just the means. It is the adaptable tool that helps you achieve your goals and needs. And so there is opportunity there.

The second thing is that the computer is the other observer in an ethnography, one that we do not currently use. It tracks what you are doing. And that could be used as feedback to the students, for them to understand their habits and patterns.

And, since they are free, MOOC providers are not upset about students abandoning their courses. Hopefully we in universities can use them more effectively, because they are great ways to engage and spread information. We can use the technology to learn to do things better, since our students are eager to use this technology.

I think that the future of MOOCs will be when we take out more than we pay for…


Q: Sort of a question and observation. The original MOOCs were about collaboration and sharing and were totally based on social media… and we are all a bit upset with this style of MOOC. But Harvard did a super one using a lot of social tools built in… it was about 2010… it was damn good, and before the MOOC surge. I don’t think this style of MOOC is a dead end but I think we should be researching other ways of teaching crowds.

A: I think universities are presenting this option as a way to do research. But they miss the opportunity to empower the students who have signed up to the course. Maybe if they told me that they wanted me to be a research subject in that course, it would be different. If that issue is up front we can learn from it… a bit of that history of massive opportunity, maybe it will change. We have to recognise that things have changed.

Q: I didn’t quite understand to which extent students acted as mentors to other students… a logical way to do this stuff at scale, based on pre-exercises perhaps. And secondly our business school we have a different approach where our staff can select MOOCs and report on what is learned, have workshops on how to adopt and use this knowledge. And also that idea of credit bearing courses, the paying for credits. MOOC seen as input to knowledge sharing in classroom.

A: I learned in this research that we have technologies to empower students, not to allow me to suddenly teach 10,000 students. But on your comment before… yes, there were groups that started to emerge from these MOOCs. I had an invitation on Facebook for people taking this course, at this time, in a given language, and lots of communities popped up like that. But when posting questions etc. there were so many threads… overwhelming… the scale was so huge. There are opportunities but they need more management. But I like the idea of having it blended, bringing the MOOC back into the classroom.

Learning from others mistakes: how social media etiquette distorts informal learning online – Me!

A link to the Prezi will appear here shortly once all happily synced from my machine to the web – currently the web is a version behind!
