For various reasons, aspects of my PhD topic, analysing marketing performance on social media, have been popping up in the news lately. Even better, the coverage seems to support a lot of the arguments I will be making, and touches on a core Web Science methodological issue. Super.
Just over a week ago, the BBC published a story titled "Facebook 'likes' and adverts' value doubted", covering exactly what my Master's dissertation and subsequent PhD research have focused on: identifying appropriate ways to analyse marketing performance, which is definitely not through the number of Likes (or followers on Twitter) a campaign receives. I was happy to see this article on the BBC, as it came less than a month after I presented this argument at my first conference, WebSci'12, where I used the analogy of assessing schoolchildren's performance: you wouldn't rank a child with 100 friends as better performing than a child with 10 friends, because popularity is not a measure of success or ability. The more popular child might have more avenues for spreading something (although, carrying on with the analogy, that would most likely be a rumour… or head lice), but the child with 10 friends is likely to have stronger, deeper and more trusting relationships, which are more valuable. A marketing campaign is no different: there is no point in having 100 people "like" you if none of them actually care about your brand and the only reason they clicked their support was to enter a competition or something similar. Awareness of your existence will only get you so far.

I, along with many marketers before me (but far from all; some really do seem to love their Likes), argue that engagement is far more valuable, so we need metrics that measure it, along with metrics that can measure action, or the results of a campaign. Determining how to go about this is the focus of my research. With the attention this story received, some people began to suggest that all forms of social-media-based marketing are worthless. I disagree completely, and commented on a follow-up post to the story to say that I think the real problem is knowing what to measure: traditional measures of success can't keep being applied in the same old way.
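To make the contrast concrete, here is a minimal Python sketch of why a raw popularity count and an engagement-style metric can tell opposite stories. The field names, the data, and the formula are my own hypothetical choices for illustration, not an established standard or anything from my actual research:

```python
# Hypothetical post data: why raw Likes/followers can mislead.
# The engagement formula here is an illustrative assumption.

def engagement_rate(posts, audience_size):
    """Average share of the audience that actively interacted per post."""
    if not posts or audience_size == 0:
        return 0.0
    interactions = sum(p["comments"] + p["shares"] for p in posts)
    return interactions / (len(posts) * audience_size)

# Brand A: huge audience (e.g. competition entrants), almost no interaction.
brand_a_posts = [{"comments": 2, "shares": 1}, {"comments": 0, "shares": 0}]
# Brand B: small audience that actually responds.
brand_b_posts = [{"comments": 40, "shares": 25}, {"comments": 30, "shares": 15}]

print(engagement_rate(brand_a_posts, audience_size=100_000))  # 0.000015
print(engagement_rate(brand_b_posts, audience_size=1_000))    # 0.055
```

Ranked by audience size, Brand A wins by two orders of magnitude; ranked by this (admittedly crude) engagement rate, Brand B does. Which number you report completely changes the story.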
This brings me on to the second story, which could almost be a case study for my research. Wired reported on what it called a "masterclass" in using social media during a recent O2 network outage, as studied by Wunderman (a marketing agency). As I read the article I got a bit nervous because, like many reports of this sort, it began by talking about how O2's Twitter account had seen a massive increase in followers (a follower, to me, is comparable to a Like on Facebook: it shouldn't mean much in terms of value to an organisation). More appropriately, the article then discussed the increase in people talking about O2, which represents something closer to engagement and is therefore much more useful. Of course, this was engagement during a negative event, but that didn't matter: it was still a positive marketing outcome, which Wunderman demonstrated by doing something that is often missing from studies claiming to be 'Web Science'.

They combined their quantitative statistics with qualitative analysis of the content of the tweets, from both O2's account and its audience. This let them understand the conversation, categorise O2's responses into groups (such as being direct, firefighting, and rising above immaturity), and conclude that despite the 'negative' event, the results of O2's handling of it were generally positive. A nice graph shows the reach (how many people would potentially have seen elements of the conversation) broken down by the sentiment of the messages, with love coming out higher than anger and sadness. Only by looking at the data, to see examples of what was actually being said, can the statistics (be they tweets or, going back to the previous story, Facebook Likes) ever mean anything and explain the trends the numbers imply. As I read the article I was delighted to see this approach being taken, especially as it simultaneously reinforced the point of my PhD. It is only a start, and a lot more needs to be done to determine the actual value of these results, but the further we get from flaunting statistics about alleged popularity, the better; this is the first step.
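As a rough sketch of what that quantitative-plus-qualitative combination might look like in practice (Wunderman's actual method isn't described in enough detail to reproduce, so the tweets, reach figures, and sentiment labels below are all invented), aggregating reach by sentiment category could be as simple as:

```python
from collections import defaultdict

# Hypothetical tweets: each has a potential reach (e.g. the author's
# follower count) and a sentiment label produced by some qualitative
# coding step, whether manual or automated. All values are invented.
tweets = [
    {"text": "Love how @O2 is handling this!", "reach": 12_000, "sentiment": "love"},
    {"text": "Still no signal. Furious.",       "reach": 800,    "sentiment": "anger"},
    {"text": "Day two without service :(",      "reach": 1_500,  "sentiment": "sadness"},
    {"text": "These @O2 replies are brilliant", "reach": 9_000,  "sentiment": "love"},
]

# Sum potential reach per sentiment: the quantitative statistic only
# becomes meaningful once paired with the qualitative label.
reach_by_sentiment = defaultdict(int)
for tweet in tweets:
    reach_by_sentiment[tweet["sentiment"]] += tweet["reach"]

for sentiment, reach in sorted(reach_by_sentiment.items(), key=lambda kv: -kv[1]):
    print(f"{sentiment}: {reach:,} potential impressions")
# love: 21,000 / sadness: 1,500 / anger: 800
```

The count alone ("four tweets about O2") says nothing; paired with the content-derived labels, it can support a claim like "positive messages reached far more people than negative ones", which is the kind of conclusion the Wired piece reports.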