I apologise for not ranting, Fredwise, for a long while-ee-oh. I have been blogging on some of Freddy's other interests.
And also still engaging in social media, and monitoring developments and the sociology of being on, and hip in, SM.
One thing I noticed is that I am myself a killer for stubbing threads. I wondered, a bit paranoid, about this, but then did some spot-sample analysis and came up with two rationales behind my lack of social acceptability in stubbing threads, which can be divided up a bit more as follows:
1) Be ON EARLY. Probably because you engage with the thread owner while they are hot to post replies, and your entries will have a recency effect, both in what people read and in how they read down through the thread, and because you will engage the owner and others who found it worth pouncing on.
People who reply to the original post, and then to the arguments / contentions / questions / qualifiers which come out of it, are looking for replies themselves and want to own the thread by being there early. Opinion "leaders" and trollers/flamers often hesitate before answering a thread, to see if it is gaining momentum and is thus important enough for them.
There seems to be a thread-post read fatigue, in that after the first few posts, new drop-ins will, by and large, be less engaged. This is simply explained, perhaps, by people reading the original post and the first seeds of reply and counter-argument, then drifting off.
On forums with internal post titles within threads, the reply titles can of course take on a life of their own, being more attractive than the original thread.
I've blogged earlier on "sticky/clicky/magnetic" threads, and refer you there, but threads often become co-owned by either protagonists or antagonists. If the topic is either insufficiently novel or insufficiently current then it will quickly run to a stub, only to be bounced back in a "bump" from a search reference which some clown picks up on.
Although I don't often bump, the arguments are often burnt out by the time I get there. Nothing to see here folks, we've all got homes to go to now.
Some threads are going to stub out early because they are not current or relevant enough for most users. Others will stub out despite this, because other threads are started by an opinion leader or maybe even a "flamer", and people who "know" them want to chip in. Some forums are corralled by moderators to stick a topic together, albeit artificially, with competing threads actually being deleted.
Some threads will bubble on a while, with some short bumps up the listings. This IMHO is because there is a group of users nurturing the thread who have some affinity for each other and/or the topic. Which takes us to:
2) ENGAGE with USERS YOU ARE ALREADY ENGAGED with. To get involved and replied to on a thread, i.e. a conversation, you actually need to have some kind of social rapport with people on the forum. This can be antagonistic, of course.
"Lurkers" are not actually very welcome on forums: the voyeur thing and all, y'know. New users are encouraged for a while, however, but if you are crass like me, you soon get the smart alecs trying hard to pick your argument to bits, in a very alpha-male way. Other users are a little wary. You start to notice that you are not part of the boys' club, and techie forums are very male-oriented. You then notice how small the boys' club is and how many trolling idiots come on.
There is the whole "new boy" syndrome (something I have been exposed to, and even been a victim of, too many times in my career and social life actually!). On about one in three new forums I get active on in a personal or work capacity, I get roundly ignored. On one of the other two, I get patronised and even attacked, while on the last I just stub threads out like Arnie used to terminate people.
I am sure that some users have multiple profiles, even on an amateur basis, to propagate their own threads... talking to yourself again, again... or even to bully by numbers!
3) Pepper opinion with actual facts or relative advantages, and while you should be on early, try to create a fresh take.
4) Short can be sweet in gaining immediate engagement from others. Posing intelligent questions is a very good way of getting engaged. Quite a few "trolling" or "Popcorn" threads are started by a short question and / or contention, and the originator never bothers to reply, despite them running to a large number of hits.
Perspective for SMM
Why are these important in social media monitoring, and not just for your personal social standing in SM?
Thread behaviour is a kind of meta-analysis which can help detect propagation of, and involvement with, issues (brands, politics, you name it) in retrospect, in tracking over time, or in actually handling a crisis or product launch in as near real time as possible.
What is important is that a number of qualitative issues can be identified quantitatively; in other words, the importance of threads can be identified by some statistics. Total views is the obvious one, though not always available on forums, while unique/repeat users per thread and posts per hour or day are some fairly top-line ones you can think of immediately. Other meta-metrics could relate to lead influencers on threads, identifying potential engagement and the influence of others.
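As a sketch of how cheap these top-line meta-metrics are to compute, here is a minimal Python example, assuming you have already scraped a thread down to (author, timestamp) pairs. The record shape and field names are my own assumptions for illustration, not any monitoring tool's API:

```python
from collections import Counter
from datetime import datetime

def thread_metrics(posts):
    """Top-line engagement metrics for one thread.

    `posts` is a list of (author, timestamp) tuples -- an assumed,
    minimal record shape; real forum scrapes carry far more fields.
    """
    by_author = Counter(author for author, _ in posts)
    span = max(t for _, t in posts) - min(t for _, t in posts)
    hours = max(span.total_seconds() / 3600, 1.0)  # avoid divide-by-zero on tiny threads
    return {
        "posts": len(posts),
        "unique_users": len(by_author),
        "repeat_users": sum(1 for n in by_author.values() if n > 1),
        "posts_per_hour": len(posts) / hours,
        "lead_poster": by_author.most_common(1)[0][0],
    }

# Invented example thread: four posts over four hours
posts = [
    ("alice", datetime(2010, 9, 23, 9, 0)),
    ("bob",   datetime(2010, 9, 23, 10, 0)),
    ("alice", datetime(2010, 9, 23, 11, 0)),
    ("carol", datetime(2010, 9, 23, 13, 0)),
]
m = thread_metrics(posts)
```

Even this crude pass surfaces the "lead poster" idea from the thread-ownership discussion above.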
If social media does not hit the buffers in terms of the relative quality of involvement for users and brands, then it will grow to a level of hits and noise which means that both meta-metrics and directed sampling (rather than attempts at a census, a no-hoper with Twitter) will become more efficient means of gauging consumer involvement. When using ever more complex sentiment algorithms, given you have a good sample or choose to stratify into the most promising-looking threads from your meta-analysis, you save processing time and reduce the noise from off-topic hits in a census.
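To illustrate the stratification idea: rank threads by a crude engagement score and keep only the top few for the expensive sentiment pass. The score weighting and the field names here are assumptions for the sketch, not a recommendation:

```python
def stratify_threads(threads, top_n=2):
    """Rank threads by a crude engagement score and keep the most
    promising ones for (costly) sentiment processing.

    `threads` maps thread title -> dict with 'views' and 'posts';
    both the fields and the weighting are illustrative assumptions.
    """
    def score(t):
        return t["views"] + 10 * t["posts"]  # weight active posts over passive views
    ranked = sorted(threads.items(), key=lambda kv: score(kv[1]), reverse=True)
    return [title for title, _ in ranked[:top_n]]

# Invented thread stats
threads = {
    "lens roadmap rumour": {"views": 5000, "posts": 240},
    "firmware bug":        {"views": 1200, "posts": 95},
    "strap colour poll":   {"views": 300,  "posts": 12},
}
shortlist = stratify_threads(threads)
```

Only the shortlist then goes through sentiment scoring, which is where the time saving comes from.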
Post Script
If you are a consumer electronics or sports brand, and you want to actively engage in SM forums, then use a female profile as your beach-head, while building a smart-alec lead influencer on her back. She can post "damsel in distress" new threads or commentative posts. She will gain both a large male following and many females on the forums.
Some women are, of course, terrible attention seekers, and are on forums primarily to get social attention. Men like me, meanwhile, want the intellectual, debate-led, problem-solution-outcome based engagement with other users, building up the sociality and peer recognition along the way. There are usually far more would-be alpha males than gammas, and women seem to be very poorly represented as lead influencers: at least openly as females. Such is life outside SM also!
Showing posts with label social-media-monitor. Show all posts
Saturday, January 08, 2011
Thursday, September 23, 2010
Social Media Watchers
I was reminded of the danger of over-analytics when I visited a forum myself recently.
My favourite camera manufacturer is going to focus, punnily enough, on smaller cameras and put a hold on new R&D in their older, larger lens system. This has resulted in a whole shit-storm of overreaction in forums. They are just shifting focus, according to their management.
For consumer electronics, forums are still the place to gain detailed insight into consumer opinion. Facebook is often left wanting for involvement with groups, as they just never seem to reach critical mass, even for the large consumer-gadget pages. Twitter and the other web SM services can help with gaining a barometer view of consumer opinion and help firefight crises. Perhaps the anonymity of forums (most users post under a handle with a similarly obscure Hotmail or Gmail address, apparently) leads consumers to speak more candidly.
With the former, one forum alone, dpreview, has over 29 million posts, but once you get granular on brands or sub-forums and look at recent posts, you realise that analytic tools can be misleading. Fluxes in consumer hits on brand keywords can mask a strong undercurrent of discontent, or the potential for NPI ears to listen.
It is, in other words, often better to identify the key forums and just have a junior marketeer keep an eye out for trouble and summarise threads, rather than employ a company to show you some keyword hit counts and what happened long before you needed to know it happened!
With Twitter too, so far the tools are pretty useless, and manual labour (identifying keywords, tweet structure, retweet rate and then sentiment) is a better bet. Very soon there will be good sentiment analytics, but they will need continual manual tweaking to catch sentiment in the abbreviated tweetspeak world.
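As an example of the manual-labour route, here is a rough sketch of a retweet-rate count over a batch of tweets. It assumes old-style "RT @" manual retweets and simple substring keyword matching, both of which are simplifications of what real tweet analysis needs:

```python
def retweet_rate(tweets, keyword):
    """Share of keyword-matching tweets that are retweets -- a rough
    propagation signal. Treats a leading 'RT @' as the retweet marker
    (old-style manual retweets; an assumption of this sketch).
    """
    hits = [t for t in tweets if keyword.lower() in t.lower()]
    if not hits:
        return 0.0
    rts = sum(1 for t in hits if t.startswith("RT @"))
    return rts / len(hits)

# Invented sample of tweets
tweets = [
    "RT @gadgetfan: the new compact body looks great",
    "the new compact body looks great",
    "RT @gadgetfan: the new compact body looks great",
    "unrelated lunch tweet",
]
rate = retweet_rate(tweets, "compact body")
```

A high rate suggests the keyword is propagating rather than being independently discussed, which is exactly the kind of judgement call the paragraph above says still needs a human eye.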
The value of analytics has to be balanced with the value of just reading the stuff and tracking the lead influencers manually, especially in "issue management" as fire-fighting is often called in the ePR world now. Over time, brand tracking and sentiment rating is of value, but must be seen in context of the development of the media itself and always related in relative terms to competing brands and some other benchmark brand arena.
Monday, July 26, 2010
Social Media Monitoring Goes Bust
A Brief History
To start with a brief history of social media: from cloudy beginnings in nerd talk and security services' personal-info boards, social media came to the public internet in the late eighties and early nineties as the newsgroup: a simple text-based repository for messages and early blogs, with circulation by e-mail subscription or internet boards... and these were indeed monitored then, mainly by police in connection with paedo rings and other unsavoury comms.
Newsgroups quickly went HTML on the World Wide Web and evolved into the forum, which is still the best repository for accessible, high-value and trackable information on the web. This is where most consumer involvement with products and brands takes place. Although the micro-blogging / glue services like Twitter are rapidly becoming a larger, faster resource, they are still less useful for examining consumer attitudes in detail or over months.
Little Brother is Watching You
Now there are a host of agencies offering "social media monitoring", and these currently run the risk of creating a bubble whose implosion soft and venture capital will feel. Currently the most advanced indexing tools are not much better than stringing together several specialist hot-shop functions, a core data repository, and the crawler, monitor and "deck" resources which are free on the internet. The larger agencies, like Meltwater, are able to attract the larger brands, while the major players of trad' market research are still testing the water and holding off on acquisitions.
Why is Social Media Monitoring Going Bust?
The whole market for social media analytics is in peril of undervaluing its core pricing. Put simply, the barriers to entry are rather low, given that comp-sci students are often given crawler-indexer or deck metadata projects in their undergrad years. Most of the newly emerging agencies are student shops: comp-sci grads and MBAers slung together to play at business. The problem being that they want to set experience before value, and are cutting each other's throats by going into brand names on "loss leaders".
Another key issue is the propagation of free services: "Amsterdam hooker windows", as one software engineer called them in another area. The problem here is that you can begin to string together enough free services to make a picture of your brand's position in SM, and then just go into the key forums and twitter yourself with your long-suffering marketing interns. The initial "Brand X status in CGM" report is devalued, as are the later tracker reports: the key value for agencies is in presenting the statistics and sentiment mapping, but this is so relative, and so subject to the media surface itself changing, that you really have to question its value for brands with fewer than 2,000 hits per month in SM.
Scalability... or the Lack of It
Although a few of the new starts may have strong crawler-indexer technology, one major problem is that the technology has to monitor a diverse range of sources out there on the internet, and those sources are not always too keen on being crawled. Programmer time is used not just in attaining sources, but in retaining them. Indexing for relevance and speed also takes programmer time, and unfortunately a new arena for a new brand may not be indexed as fast, or as inclusively and exhaustively, as the last arena which had been optimised.
Put on top of this the awful costs of account management and new-business development in acquiring and maintaining brand-name clients, and the issue of scalability, in this one area alone, is the one which will kill off most of the new starts.
Lumpy Custard
The economics of SM monitoring agencies is actually nothing new: most small 1970s-80s advertising agencies failed because they tried to scale and could not make the leap from the core owner-manager team to an expanding, system-driven agency.
Most of all, marketing services is a horribly risky business, with huge over-reliance on a small number of customers (clients, in agency land): "lumpy custard", as I used to describe it. In the 1980s the expression "lose a client, lose your job" was the mantra of account directors, while today it is more likely to be "lose a client, lose your VC funding".
To give some detail on this: when a new project is taken on, it inevitably comes with new sources or indexing demands. It also comes with a new set of expectations, and because market research is still Cinderella to the communications side of online marketing, the new demands of clients are usually out of line with charging the baseline €85 per man-hour needed to even break even in business services.
Winners and Losers
The largest, most comprehensive indexers will probably win out and then get third-level funding, or even IPO / alternative-exchange flotations. I'd expect these technology and brand leaders to become acquisition-hungry and buy up the hot shops with key expertise in delivering more from fewer man-hours. They in turn will want to exit, from VC or the stock exchange, by being eaten by the MB/TNC or Frosts of the world.
This would be the current exit model for the numerous university spawned start ups in SMM.
However, another huge issue they have is in defending their IPR: outside the US very little will be patentable, and a copyright can either be worked around or be somewhat irrelevant in a David-and-Goliath situation. So rounds of acquisition will come down to "can we get there cheaper ourselves? Do we headhunt out the key techies? What value do they really add? How will they integrate, technically and culturally, with us?" for the potential bigger fish.
Acquisition rounds will become window shopping with all the above questions firmly in mind once they get a look behind the scenes. Small companies would be wise to limit their exposure in terms of their "black box" code, and indeed the identities of their staff.
These days you are open to headhunting through Facebook and LinkedIn, so social media monitors may be eaten up by their own poison.
Friday, April 23, 2010
Monitoring Consumer Generated Media: The New King
The other day there was a programme on the idiot box about measuring IQ: Kalahari bushmen have been measured as having the lowest IQ of any Homo sapiens, 61 or the like. The point of contention being whether IQ tests measure actual ability to learn, perfect crafts, hunt and so on. This was driven by a fairly racist-appearing "researcher" who had done some "convenience samples" and some meta-analyses to draw up these intelligence quotients for different ethnic groups.
Another researcher pointed out the validity of IQ tests as they stand: to get by in the modern world, and use your brain to forward your well-being and reproduce, you need to be able to relate to the westernised IQ test! There is little point in being a hunter-gatherer in the 21st century; you are on to a loser.
Now this may seem a bit removed from doing social media monitoring: but in future, will the opinions of consumers and groups who DO NOT make CGM about brands not become largely irrelevant?
I think that market research will become completely web- and IP-telephony based. And we will use anonymised IP MPEG streaming to sample into street-level consumer behaviour, and into those social strata who don't do CGM yet are important enough to some brands of sports clothing... yeah, the underclass buying their bling rags...
Friday, March 19, 2010
DIY Consumer Generated Media Survey
DIY Market Research in Consumer Generated Media
At this instant, many universities around the world are spawning small start-ups, and VCs are raising eyebrows as angel capital invests in a new type of market research and intelligence firm.
The new enterprise opportunities are based on the sheer volume of CGM and the vogue for the big brands on the web in this area: Twitter, Facebook, Digg, and the latest brave new entry, Google Buzz. Statistics, as I discussed below, are a little hard to use in reality, and the cold world of market movements (the quantitative, conclusive, inferential and numerically indicative) is somewhat removed from CGM at the moment. The meat of the dinner, in fact, remains qualitative research, with some usable statistical methods which help describe the parameters and prominent qualities within voice-of-public, or "buzz", in social media.
Now, from the arena I have seen, there is quite a smidgen of the "emperor's new clothes" around in social media monitoring. Take for example what is all dressed up, being used and no doubt abused: areas like sentiment, with some pretty flimsy algorithms out there, and little if any statistical significance to confirm the relative changes over time or differences between brands.
Also hit-count statistics: for reasons of the prevalence and magnetism of the big sticky threads I discussed earlier, these can in fact populate a large amount of your hits in a topic, and if a topic has become google-rooted (search engine rankings are high for that forum on the given free search in the topic area) then you get a lot of noise about nothing other than one place to look. You see where I am going? If you want to open a kosher sandwich deli, then you will soon realise that most of the current world market is in New York.
It is actually pretty easy to follow the path of "he who hath shall have a cup which overfloweth, and he who hath not shall go without for ever": the sticky sites and the sticky threads suck in a lot of the numbers, and within this lies some of the really good qualitative insight. You don't need large indexing or meta-crawling tools to get the same qualitative result: but you do need sound judgement, and the "corner pieces" of your social media space and range of consumer expression.
The opposite is also true: very small postings or postings which are very similar over a range of web forums and other social media, can point to a lead indicator or early problem alert after NPI. New users posting in a period after product launch are worth picking up: they are often the tip of the iceberg of customer dissatisfaction!
Until a few years ago, search engines did not want to index "live content", for various reasons best known to themselves! So any php, asp or cfm pages were ignored as perishable and not to be indexed. This had me stuck on a few forums we ran for clients a decade and more ago. It became a bit tedious back then because of the big-thread magnet phenomenon, and etiquette (discussed two blogs ago). However, the corporate bosses were hanging on every word written down, in awe and fear of libel suits or some tumultuous disclosure (which did actually happen).
So your start point should be to follow the well-trodden path, like a wolf amongst the sheep who go Google and then like the idea of a CGM forum, rather than reading the corporatised blurb or sanitised PR blogs. The doors to the crime scene are all open and there are hundreds of footprints.
So your tools are the search engines. Beware being all Google-centric: some engines may be more prominent nationally or within a specialist niche of global or national citizens (academics always used AltaVista and then moved over to FAST's AlltheWeb, for example). Now add Google Buzz, Google Blog Search, Twitter search, YouTube, TweetDeck etc. and you start to have a powerful set of doorways from which to set out and build a report like "attitudes to the bumble bee brand amongst international English-speaking consumers in social media".
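A tiny sketch of the doorways idea: fan one brand term out across several sites using the standard site: operator and a quoted phrase. The doorway list here is illustrative, not exhaustive:

```python
def build_queries(brand, doorways):
    """Fan one brand term out across several search 'doorways'.

    Uses the standard site: operator plus a quoted phrase; the
    doorway list is an illustrative assumption, not a recommendation.
    """
    return [f'site:{site} "{brand}"' for site in doorways]

queries = build_queries("bumble bee", ["dpreview.com", "twitter.com", "youtube.com"])
```

Each string can then be pasted into (or scripted against) your chosen engine, giving one consistent probe per doorway.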
A few weeks ago, Google announced they would be indexing public content on Facebook, which creates both an opportunity and a big stick to beat yourself with. As with analysing tweets, it can be a tortuous route of reading conversations or following links to actually make sense of hit results.
It is a little difficult to get meaningful statistics in DIY SMMing, but some clever use of search-string arithmetic will help. More on this, and on making your Google (etc.) advanced or multiple searches efficient and exhaustive, in a later blog.
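One flavour of search-string arithmetic is simple inclusion-exclusion on raw hit counts: search "A", "B" and "A B" (both terms) separately, then derive exclusive and combined totals. A sketch, with made-up hit counts, since real engine totals are themselves only estimates:

```python
def exclusive_hits(hits_a, hits_b, hits_a_and_b):
    """Given raw hit counts for "A", "B" and "A B" (both terms),
    estimate the counts unique to each term and their union --
    plain inclusion-exclusion on search-engine totals.
    """
    return {
        "only_a": hits_a - hits_a_and_b,
        "only_b": hits_b - hits_a_and_b,
        "a_or_b": hits_a + hits_b - hits_a_and_b,
    }

# Hypothetical counts from three searches run on the same engine
est = exclusive_hits(hits_a=12000, hits_b=8000, hits_a_and_b=3000)
```

Bear in mind the inputs are rough engine estimates, so treat the outputs as directional rather than exact.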
You can meta-track launches, from rumour mill to unboxing and consumer adoption. You can track political issues, viral news-story discussion... anything that affects several hundred thousand people in a western country, and you can bet it will be posted on, blogged, tweeted, or have its own fan or hate club on FB.
On some topics you will find a fairly concise set of mega-threads, a smattering of blogs and a pitter-patter of small threads and comments around the various social media nodes. Other topics you choose to research will be huge, sprawling and broad in both their appeal and the spectrum of opinion which is expressed.
Larger topics are usually worth sub-categorising by sub-topic, geography or forum colour. Alternatively you can try out searches which find a type of segmentation based around a more qualitative factor: consumer intention to purchase; the poles of sentiment; brand- or feature-comparative posts and pages.
When you find page hits (times 10 for post hits, on average!) which run into the hundreds, it is worth using a very simple, well-validated sampling methodology. First ensure that the page listings are exhaustive and you know the total number. Then sample every tenth page for counts up to 100 or 500, every 25th page for counts over 500, and so on. This will mean opening everything in a "new tab". Most pages on forums will have 10 posts, but some may list hundreds, or the entire thread. Then you can apply the same rule of thumb, every nth post: every 5th of 100 would give a higher-quality result. The point of this discipline is that you sample from the whole distribution (the population of posts as species, if you like) and you don't follow "interesting routes". In other words, you are forced to take a wide-angled shot, so you understand the landscape before you can decide which features are actually representative, prominent or meaningful in light of the whole spectrum.
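The every-nth-page discipline above is just systematic sampling, and it is easy to sketch in code; the page numbering here is hypothetical:

```python
def systematic_sample(items, total_wanted):
    """Every-nth systematic sample across the whole listing, so the
    sample spans the full distribution rather than the 'interesting
    routes' a reader would naturally follow.
    """
    if not items or total_wanted <= 0:
        return []
    step = max(len(items) // total_wanted, 1)
    return items[::step][:total_wanted]

pages = list(range(1, 501))            # 500 result pages, hypothetically
sample = systematic_sample(pages, 20)  # i.e. every 25th page
```

The same function works a level down, on the post numbers within a sampled page or thread.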
From this approach you can do some surprisingly quick five-bar-gate counts of keywords, brands or even sentiment. Many forums have sentiment ratings, and if you include comments and reviews on places like Amazon as CGM, then you can start to do sample-based sentiment ratings - which can in fact be pretty much as accurate as the latest AI-driven ratings, if you have enough time.
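A five-bar-gate tally is trivially mechanised once you have your sample. Here is a minimal sketch: the posts and brand names are invented for illustration, and real matching would want word boundaries rather than plain substring tests.

```python
# A "five-bar gate" tally done in code: count brand mentions across sampled
# posts. Posts and brand names are hypothetical.
from collections import Counter

sampled_posts = [
    "Loving my new AcmePhone, battery is great",
    "AcmePhone screen cracked in a week",
    "Switched from BetaFone to AcmePhone, no regrets",
    "BetaFone camera still beats everything",
]

brands = ["AcmePhone", "BetaFone"]

tally = Counter()
for post in sampled_posts:
    for brand in brands:
        if brand.lower() in post.lower():
            tally[brand] += 1

print(tally)  # AcmePhone mentioned in 3 posts, BetaFone in 2
```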
All is not equal, as discussed in the sticky-threads blog below. Some threads which are large or have topical subject lines receive many more hits than others. Also, some media are more prominent and perhaps carry more status, like the BBC web forums and comment boxes. Forums with high search-engine rankings tend to have the most traffic. Retweet rate/total is another meta-metric.
From a knowledge of the prominent forums for a product type, brand, band, author, lifestyle or political viewpoint, you can then consider the subset of consumers who are most interesting to follow up: the innovators, the early adopters, the opinion shapers, the self-appointed authorities, brand champions (fan boys / fanbois)... brand terrorists... and follow their postings to gain a high-level view of the discussion: see if indeed they are influencing people, or if generally people make their own minds up and buy that pink laptop anyway!
So you start to get a feel for how a report may be structured: simple hit counts as a top-level introduction, then results from your measurements within the samples. Finally you get into the qualitative observation, with the prominence of the media, the activity of the opinion leaders, and the sentiment tallies from the different samples giving some kind of summative opinion poll for the topic. The conclusions you draw should therefore be based upon prominent information, a knowledge of why it is prominent and what else lies in the spectrum, and a handle on the polarity of sentiment expressed and the average point for consumers, be it neutral or not! When you reach a conclusion which points to a useful management insight, go back and check the prominence: check the hit counts relative to other topics or shades of opinion, check your sample is exhaustive and re-check your search strings (a little more on this below, and then another blog, coming soon to a soggy-spot near you!)
There are plenty of kid-on numbers you can put around these things. For computer science graduates, metacrawling or re-indexing can be a way forward to producing statistics based around the single post as the "unit of selection". Different sampling strategies based on random and temporal dips can be useful when confronted with 50 million tweets per day!
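For a stream you cannot store in full - those 50 million tweets a day - the classic random-dip technique is reservoir sampling (Algorithm R), which keeps a fixed-size uniform sample however long the stream runs. A sketch, with a fake stream standing in for the real firehose:

```python
# Reservoir sampling: keep a fixed-size uniform random sample from a stream
# too large to hold in memory. Standard Algorithm R; the stream is fabricated.
import random

def reservoir_sample(stream, k, rng=None):
    rng = rng or random.Random()
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)        # item i survives with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

tweets = (f"tweet {n}" for n in range(1_000_000))
print(reservoir_sample(tweets, 5, random.Random(42)))
```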
For the very numerate amongst you, as marketers, sociologists or computer scientists, you should be aware that CGM comes in such large numbers that a topic such as a fairly common brand name or product will have a "normal distribution" of opinion, if you like, and this can be captured in a correspondingly 2-SD-centric list of keywords: the first six search strings capture the first two or even three standard deviations.
There is a bell curve: x-axis rating versus y-axis volume. The majority of opinion, keywords, etc. will be within the first two standard deviations. When you do an nth sampling you really get to check that the bell curve is covered. If you do manage to plot data, sentiment or keywords, and you find there are more peaks and troughs than one bell curve, then you have either too small a sample size, a poorly defined opinion/keyword scale, or you are in fact measuring two different things: two distinct populations with some degree of polarity to each other on your scale (OOPS! you sampled Tory and Labour forums (Republican / Democrat) and not general political discussion!).
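That one-hump-or-two check can be done roughly in code: bin the sampled ratings and look for more than one local maximum. A sketch with invented ratings on a 1-5 scale; a real analysis would use a proper test for multimodality, this just flags the obvious cases.

```python
# Rough check that sampled sentiment ratings form one bell curve rather than
# two polarised humps (e.g. two partisan forums mixed together).
from collections import Counter

def histogram_peaks(ratings, bins=range(1, 6)):
    """Count ratings per bin; return (histogram, indices of local maxima)."""
    counts = Counter(ratings)
    hist = [counts.get(b, 0) for b in bins]
    peaks = [i for i in range(len(hist))
             if hist[i] > 0
             and (i == 0 or hist[i] >= hist[i - 1])
             and (i == len(hist) - 1 or hist[i] >= hist[i + 1])]
    return hist, peaks

# Unimodal: opinion centred on 3 -> one peak
print(histogram_peaks([2, 3, 3, 3, 4, 3, 2, 4, 3]))
# Bimodal: two polarised camps -> two peaks, time to re-check your sources
print(histogram_peaks([1, 1, 1, 2, 4, 5, 5, 5]))
```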
When you know you have a nice bell curve, then you can be very safe in using nth sampling or random statistical sampling, and your comparisons can be shown to be statistically significant: FOR THIS DESCRIPTIVE DATA SET. You cannot use this as inferential statistics, primarily because you cannot accurately capture social demographics in CGM, and therefore you cannot make any extrapolations to the population as a whole.
If you combine this with an offline survey which identifies people's demographics in relation to their interaction with CGM, it can be possible to make some tentative inferences, based on the knowledge that your large sampling base is composed of a cross-section of society identified in this CGM-interaction survey. Even then you have to tread very carefully, statistically speaking, because your "hits" are by a decided number of authors, some using several handles over forums, some using multiple identities to stimulate discussion on the same forums! In other words, your actual "n" for the study group is too small. Is the post more important than the author? Hmmm, well, people tend to be consistent and only change opinion after some degree of cognitive dissonance, so really your "n" is authors and not posts.
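The authors-not-posts point is easy to make concrete: collapse known alias handles before counting your "n". The handles and alias map below are entirely hypothetical; identifying sock-puppets in practice is manual detective work.

```python
# Counting unique authors rather than posts, collapsing known alias handles.
posts = [
    ("gadget_guy", "love it"),
    ("gadget_guy", "still love it"),
    ("gg_2", "me too"),           # suspected alias of gadget_guy
    ("sceptic99", "meh"),
]

aliases = {"gg_2": "gadget_guy"}  # manually identified sock-puppets

authors = {aliases.get(handle, handle) for handle, _ in posts}
print(len(posts), "posts, but only", len(authors), "authors")
```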
Inference to the general population soon evaporates as you get into small numbers of authors per post, and some of my "sticky, syrupy threads" are very much dominated by a gang of fewer than 10 key proponents. But then again, a knowledge of what cross-section are reading those forums, and of the thread rankings on the SEs, means you can start to make a judgemental call on the importance of an issue or the opinions around a topic.
Sociologists and psychologists are very taken up with not interfering with the subjects: not introducing experimental-method source errors, researcher interference or interpreter bias. If the study is purely observational, then a simple permission disclaimer is all that interferes; in focus groups, skilled moderators stimulate debate and keep it on topic while being (allegedly) careful not to introduce biases (observers are usually in other rooms and should not confer on their notes!)
But in the area of CGM you can be a little more anarchic. Having identified and qualified your CGM sources as "prominent", you can set out to interact a little by starting threads, or tweets, yourself. This is a purely qualitative approach, but it can help you gain insight in an area where you previously found many tangential conversations, unclear opinion, or forum-leader or fanboy bullying (shutting out opinions, topics, alternative products/solutions etc.) skewing the area you are researching. Tread a little carefully and pick those forums or social networks where you have established that "noobs" (newbies... first-time or low-count posters) receive a positive welcome and a range of replies, and are not shut out when they post sensibly. This means you can pose a question within a subject which is tenable: this could indeed include concept-building around latent demand and unmet needs.
I hope this has stimulated some ideas for just going out and doing some DIY research from your desktop. This approach deals not only in qualitative observations; you may also pick up some ideas on what would work with a crawling-indexing system, or a new type of social media platform!
=================
Perspective
=================
To show how long in the tooth I am, and just how jaded I am by the industry: market research is a be-whored Cinderella within marketing. Of course it should be the lead violin, the first on the dancefloor, but instead it is the working girl who turns up in her best frock only to have a hand put up her skirt! They want her knickers off, just to get as quickly as they can to what makes them happy. To drop the analogy: product managers have often made their own minds up about what makes a good campaign and where they are going, and only want market research which will support that, or their plan B. They have sales and national account managers to keep happy, and they need to steal a bit of limelight by doing something unique.
This is true of research in social media, and there is a danger of observer bias in generating keywords and search strings, and in choosing themes or summarising the spectrum of opinion. Conversely, any-road-will-take-you-there-if-you-don't-know-where-you-are-going: it is easy to follow seemingly prominent themes and paths of argument which take you down blind alleys. Avoid the critical-path approach, and keep it broad and objective.
In a later blog I will discuss how to create an objective set of search strings which are exhaustive enough while being efficient in "containing" a topic, and, as mentioned, how to make sure the majority of your efforts stay within the first couple of standard deviations of a given distribution.
Wednesday, March 17, 2010
Why Monitor Social Media ?
Perhaps you are new to the world of social media monitoring at work, or are embarking on your studies at university in this area, and you are wondering just exactly "what is in it for me ?"
Maybe your background is in traditional marketing, or perhaps you work in a technical or customer services function and have heard that this new area is indeed something worth looking into further. How can monitoring social media help you make decisions and deal with problems and opportunities?
Well, of course e-marketers and webmasters are the most interested, sitting glued to the analytics and consumer opinions. But it is not just for the geeks: communications and customer services are also getting engaged with SM. The benefits reach wider and deeper into the organisation, though. Everything from new-product introduction and tracking, or competitor pricing, to issue containment is amenable in almost real time: the feedback loop from action to consumer reaction is drastically shorter.
Statistically Speaking
One burning question is: how can information in SM be used to draw conclusions about the wider consumer population? Well, speaking mathematically, usually we cannot make inferences to the general population or produce actual statistics - yet.
The day may come in the near future, though, when the numbers of consumers engaging in discussion will be so high that we can draw inferences to the wider population, such as intent-to-purchase or brand awareness, and put some hard numbers behind this, with SM as a quantitative source for extrapolation to consumer behaviour in the market as a whole.
Even then it will most likely be from a definable cross-section of different geographical or network societies: age- and education-related. It would be dangerous to draw inferences on anything but the first three standard deviations from one of those sub-populations who are engaged with the internet and SM. However, we will be able to utilise statistical, probability-based sampling methods within any accessible or stored data set to produce smaller data sets which make analysis more efficient within given parameters of accuracy and inferential significance.
Social Media Monitoring Should Be Qualitative
For the moment, though, reporting focuses rather on the valuable qualitative insights to be found, "straight from the horse's mouth". These are the root causes of issues, the actual verbatim opinions, the dissatisfactions, the real point-of-touch customer experiences: I could go on! These help illustrate findings from a company's quantitative reports and data sources, as well as pointing to new insights which uncover consumer opinion hidden or distorted by the very interactive nature of surveys, depth interviews or focus groups. They can also uncover uncomfortable truths which are hidden by line managers, front-line operators or re-sellers.
Everything is Relative
Despite the qualitative output of reporting, descriptive statistics can be used within the domain of social media to illustrate the relative prominence of these qualitative observations. This includes the relative prevalence (or you could say "share of voice") of brand names, consumer opinions and, for instance, problems with newly launched products.
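Share of voice is just each brand's mentions as a fraction of all brand mentions in your sample. A minimal sketch, with invented counts:

```python
# Share-of-voice from mention counts in a sample. Counts are hypothetical.
mentions = {"AcmePhone": 120, "BetaFone": 60, "GammaCell": 20}

total = sum(mentions.values())
share_of_voice = {brand: count / total for brand, count in mentions.items()}

for brand, share in share_of_voice.items():
    print(f"{brand}: {share:.0%}")
```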
In combination with ever-more-accurate sentiment algorithms running on AI (artificial intelligence) systems, this area of descriptive statistics will be used more and more to give a picture of the cross-section of society using SM to discuss your brands and customer support. The value will increase if it can be shown that the "listen-learn-decide-react" loop, on the web connecting to SM, is functioning.
Then our little world of SM becomes a market in its own right, and this is already happening, with companies engaging in different campaigns and communications built around information from social-media reportage. Some companies in future may only interact with consumers through this interface and the connections from SM to their web services.
Numbers and graphs are all very fine and nice to present and talk about, even with the proviso that this is a small and twisted version of the world at large. But even a very low number of "hits" can, for example, reveal invaluable insight into potential challenges in production lines, or further back in the supply chain, which are creating problems not detected earlier in the testing and launch programme.
Tracking New Product Introduction
This qualitative approach has been of particular value in tracking new devices launched on the market, which have a plethora of features and most likely diverse internal software. But it is equally valuable in tracking a new service or immaterial product from a financial institution or a mobile network operator, or in defining an unmet need or latent demand out there in the marketplace.
Within the world of gadgets - consumer electronics like mobile phones, PDAs, laptops or digital cameras - it is in fact often the lead consumers who are the real experts: they can be tracked individually, from maybe a sample of 10, as they try and buy diverse gadgets and report their experiences on the web. Often they seem very well informed on how the technical features, like processor, touch screen or GUI, actually deliver benefits in use, and how much better this performance is than earlier products or competitors' offerings.
In fact these lead consumers seem to have a more holistic view of the product's performance than the head of R&D, and most likely the CEO, at the manufacturer! Worth listening to SM?
Tuesday, March 16, 2010
Sticky, Syrupy Threads
Discussion forums continue to be the most useful social media space to gain consumer insight from, for now at least.
Within these forums you can follow consumers' problems, desires, experiences etc. and see how discussions evolve around our clients' products, services and brand values. One species of thread which has always been very prominent on forums is the mega-thread. Why are they so "sticky"?
These threads are way longer than average in number of posts; some run to many thousands of entries. They seem to break down into serious, often heated discussion; diverse postings around a topic, like a new mobile telephone; or pure trivia like jokes or chain stories.
One reason that they are sticky (i.e. they attract users and keep them coming back) is that they often have prominent placings when forum threads are ranked by date on topic-listing pages, or on the more recent "hot topics" sidebars on home pages and top-level forum category indexes. It would be interesting to see statistically when such threads attract a critical mass, and how they develop from there. This could be a useful metric and watch alarm-trigger. Rate of growth, number of lead influencers, ranking: some algorithm could be made to work and fish them up to a dashboard panel of "hot stuff".
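A toy version of that "hot stuff" algorithm might weight exactly those three signals. Everything here is invented for illustration: the weights, the thread names and the input numbers; a real dashboard would tune these against observed thread behaviour.

```python
# A toy "hot stuff" score combining growth rate, lead-influencer count and
# listing rank. Weights and thread data are hypothetical.
def hot_score(posts_per_day, lead_influencers, rank, w=(1.0, 5.0, 2.0)):
    """Higher is hotter; rank 1 is the top slot, so we invert it."""
    return w[0] * posts_per_day + w[1] * lead_influencers + w[2] / rank

threads = {
    "new-phone-megathread": hot_score(40, 3, 1),
    "charger-question": hot_score(2, 0, 17),
}
print(max(threads, key=threads.get))  # the thread to fish up to the dashboard
```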
Consumers coming to a forum see these mega-threads as the "tall oaks" amongst the grasslands of granular postings. By their stature they demand respect, and people will often come on board a product topic and post their "2 cents" before they would consider creating a new thread, even if their "2 cents" is a little off-topic or outside the current line of argument. In this way they appeal to the conformist consumer.
These threads, then, are not purely a result of some natural magnetic force, or social diffusion through the ethers of the internet. Most are nurtured, and some are not only fertilised with fresh content but have their space cut clear of weeds, these being small competing threads. Users who can be identified as opinion leaders, early adopters or expert insiders - collectively, lead influencers - will steer a thread they like, and by carefully timed postings and replies to users' comments they will keep the thread up there in the top 5.
Lead influencers vary in how often they initiate threads, but they turn up like clockwork on the hot news threads or major theme threads relating to the forum's raison d'etre. They are in fact instrumental in coaxing the threads to gain critical mass.
Nurture also happens, unfortunately perhaps, from forum owners and appointed "moderators". Some forum owners will post on new, related threads, stating rather rudely that the users should refer to the long-running topic and "this is closed". Apparently they even delete competing threads. Scornful lead influencers will also pounce on unsuspecting thread-starting newbies and stamp their forum authority by referring the user to an old, worn argument: they should join the gang on the proper big thread, or should just have used the "search" function to find the info on old threads.
Some lead influencers like to demonstrate their boundless knowledge and articulate debating skills, while others can appear very helpful and down-to-earth, though sometimes outright patronising to those with lower post counts seeking advice. In fact some lead influencers post almost exclusively in big threads and never start their own.
As a rule of thumb, one can consider those users with over 1000 posts as a starting point for identifying lead influencers within a topic of interest.
From this starting point, you can delve further into their posts and behaviour to assess whether they are leading discussion and influencing others to change opinion, become informed or, of course, buy something.
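That first-pass filter is a one-liner once you have user records. The handles and post counts below are made up; the 1000-post threshold is the rule of thumb from the text, not a universal constant.

```python
# First-pass filter for possible lead influencers: post count over 1000,
# per the rule of thumb above. User records are hypothetical.
users = [
    {"handle": "oracle_of_oled", "posts": 4200},
    {"handle": "drive_by_dave", "posts": 12},
    {"handle": "megathread_mike", "posts": 1500},
]

candidates = [u["handle"] for u in users if u["posts"] > 1000]
print(candidates)  # shortlist for the manual follow-up described above
```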
Mega-threads have another type of gravity: they often attract a disproportionate number of "reads" relative to their actual post number, compared with smaller threads on the same forums. In effect they become the headline pages, or news channels, within the forums. People go there first; they grab the attention of read-only "lurkers".
This read-count makes them even more important for companies seeking insight and a summary of which direction the group of big threads on, say, a product launch or a service problem is going.
One explanation for this disproportionate read-count is their prominence on the forum, as mentioned. But these are also the threads which the Google/Yahoo-type spiders actually come upon and index. The threads live longer and appear earlier on the index crawl. They have bigger clusters of keywords and, by pure virtue of size, more links out and eventually in. Hence they are search-engine friendly and score high on relevance and hits. In conjunction with good website SEO, the forums get quite high index listings on the SEs, depending on the search terms. Also, consumers set a bigger price on their own generated opinion! They would rather read 100 different user opinions than one PR story regurgitated - nay, re-tweeted - 100 times.
SE listings then help the threads gain extra critical mass and keep them "bumping" back with new posts, even some time after they seem to have burned out.
As we know, though, Yahoo and Google only index an estimated 15% of the web, so it is pretty much hit or miss whether you actually find these threads. Choose a good SM monitoring company with either full indexing of the main sector forums and general consumer forums, or one which can sample effectively from these for the big issues.
Labels: analysis, CGM, forum, lead-influencer, opinion-leader, post, posting, social media, social-media-monitor, thread, wave metrix