CASE STUDIES WIND ME UP…
Or to be more precise, case studies of B2B PR campaigns in the trade press wind me up.
Creativity should of course be congratulated, as should a job well done. After all, in some quarters, getting a single piece of trade coverage can be considered a result against the odds!
But my case study bugbear comes from the repeated use of nebulous statistics to “prove” that the campaign was a success.
For instance, the avant-garde citing of “increased engagement by Twitter followers, evidenced by follower numbers rising by 34%”. So what?! This does not address why the follower numbers increased – it could have been down simply to the hashtags used, the witty phrasing of the Tweets, a non-PR-related marketing exercise, or even because the Tweets were so outrageous in their claims that they became comical and so gathered a following.
OK, so the last one might be a stretch, but the point still stands: without further digging, how do we, the case study readers, know that the increase in Twitter attention was a) a good thing and b) sparked by the PR campaign?
But what about leaning on web analytics to prove a result? Surely increased web traffic, with peaks that map onto the PR activity, counts as evidence of a successful campaign? Again, not necessarily. Who are these additional visitors? Where did they come from? Why did they visit the page, and how long did they stay?
It could be that the extra visitors thought the site was actually a source of quite different information, and so promptly left. Mixed messages in the overall PR campaign, whether in traditional, online or social media, could easily drive additional but valueless visitors to the website or specially designed landing page. A poor choice of key phrases on the SEO side of the campaign could also result in misdirected and unwanted traffic.
Conversion rates – these cannot be argued with, can they? If a PR campaign directly affects the client’s website conversion rates, it must be a success. Sorry… a conversion rate is simply the percentage of total visitors who performed a given action. What if overall website traffic drops? The conversion rate can rise because the ‘converters’ make up a larger proportion of a smaller total.
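To make that concrete, here is a minimal sketch (the visitor and enquiry numbers are purely illustrative, not from any real campaign) showing how a conversion rate can rise even while the actual number of conversions falls:

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversion rate as a percentage of total visitors."""
    return 100 * conversions / visitors

# Before: 10,000 visitors, 200 enquiries.
before = conversion_rate(200, 10_000)  # 2.0%

# After: traffic halves and enquiries fall too, yet the rate *rises*.
after = conversion_rate(150, 5_000)    # 3.0%

print(f"before: {before}%  after: {after}%")
```

Fewer people enquired overall, yet the headline “conversion rate” improved by half – which is exactly why the percentage on its own proves nothing about campaign success.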
So what should be used to measure success? Well, it’s simple really… at least in theory. Sales leads. Of all those extra Twitter followers or additional website visitors that the PR campaign apparently generated, how many actually made a valid enquiry? You could be the most talked-about organisation on the planet, with unfettered positive sentiment surrounding every statement, but if no-one is buying your product, what’s the point?
The metrics above are undeniably of use – they show a change in audience behaviour that then prompts us to ask questions and analyse what is working and what is not. But they are not the be-all and end-all, and certainly not the golden ticket to measuring PR success. For one thing, such metrics can be affected by so many marketing activities beyond PR.
So don’t rely on them, and them alone, to prove your PR is effective. Instead, take a wider view of what your chosen audience is up to. Gear up your PR campaign, your broader marketing activity and your own internal sales-lead tracking and measurement so that you know exactly where leads are and are not coming from, why those leads are coming to you and no-one else and, most importantly, which PR messages are working best.