Fascinating study or junk science? This one smells funny

When it comes to pitching stories to bloggers, PR folks still don’t get it.

Are you surprised to hear this? I didn’t think so. But now we have a real live study to support the claim.

Or do we?

A study released last week by APCO Worldwide and the Council of Public Relations Firms (CPRF) looks promising on the surface. Hell, it even has a colon in the title! And as everyone knows, important research always has a colon in the title.

But did you know that “Expectations for Bloggers: Public Relations Executives’ and Bloggers’ Points of View” draws its conclusions from a survey of just 102 people? Just 55 PR executives and 47 bloggers make up the sample. And that, friends, is a survey that — no matter how interesting — has NO statistical validity.

I probably shouldn’t admit this, but I usually skip the “methodology” section. Boring stuff. But Robert French, professor and Web 2.0 guru at Auburn University, took a close look at the APCO Worldwide/CPRF study:

I am so tired of seeing these “survey as marketing tool” lame examples. To see it coming from an industry group, well … it is just simply sad.

With such a small, insignificant sample, don’t say “59 percent spend more than 20 hours per week (blogging)” [Source for survey sample quote]… rather, say 27 people. Sheesh! I’d fail a student for something like this. Use of percentages to try and make the results seem more credible is simply wrong.

Of course, if you’re a PR firm pitching expertise in “blogger relations,” you’ll want to download this study and make copies for your next client meeting. Just hope that no one across the table knows anything about research methods. The report’s use of percentages based on such a tiny sample is out of line. It’s fine as an exploratory study, but it’s nothing more.

In fairness to the survey sponsors, there is no attempt to hide the sample size. In fact, it was reported in PRWeek. Unfortunately, the survey cites percentages in a way one should do only with generalizable data. As such, it misleads.

Update 3/27/08: The PRNewswire release by APCO/CPRF doesn’t mention sample size or any limitations of the research, so PRWeek must have gotten the numbers in a follow-up.

Thank you, Professor French, for keeping an eye on how our profession uses, or misuses, the numbers. It’s a reminder to us all to read the fine print.

Maybe there’s still time to stop this story before it becomes Internet legend. That takes precisely 3.6 weeks. I read a survey about it somewhere.

Update 3/26: Folks, I badly underestimated the power of bloggers to pick up and run with a story before giving it a critical look. Use Google Blog Search and type in: APCO blogger study. Most folks have taken the bait on this one — the “media snack” of the week. Let’s be careful out there!

10 Responses to Fascinating study or junk science? This one smells funny

  1. 100% of the people typing this comment agreed with Bill.

  2. Bill Sledzik says:

    Thank you, Steve. It’s seldom that I get such broad consensus.

Hi Bill – thanks for picking this up. I was involved in this effort. We have never tried to promote this effort as a scientifically valid survey – as you noted, we’re more than happy to share the numbers with everyone. We are trying to start a discussion.

    We developed a series of questions based on a number of discussions and meetings between PR professionals and bloggers. We’re trying to use social media tools such as the wiki at bloggersandpr.com to facilitate that discussion between those of us who reach out to bloggers and those who get “pitched” quite a bit. That’s why we’re encouraging everyone to help us refine and develop new questions to ask for our next go at this. That’s also why we’re asking everyone to help us shape the “best practices” statements you can find at the wiki. We’re trying to get more people to participate – not because we’d cross some threshold of statistical validity to satisfy academics, but because we think letting everyone participate is the right thing to do.

    Social media outreach is evolving every bit as fast as the technology tools are, and we’re trying to facilitate a discussion that moves at the same pace. We haven’t hidden anything about the process, as you can plainly see. I really hope you’ll join the discussion we’re having at bloggersandpr.com and help us do this right.

    – David Wescott

  4. Hey, Bill. I guess I’m just a stickler when it comes to surveys.

    I’m not convinced “there is no attempt to hide the sample size.” Yes, it was revealed in an interview, but do you see it on their site promoting the survey and results? I don’t.

    APCO VP David Wescott kindly came by to comment on my post. I responded to his comment with further questions and issues I’d like to see addressed.

    I’ve seen way too many of these vague survey result reports. I’ve seen publications do this and, when pressed for the survey respondent pool, they reveal that they only surveyed their own readers. Yet, the resulting promotion touted their publication as the perceived best source by “all” in the profession they target. Gee, I imagine that subscribers (especially the paid ones) likely do have positive opinions about a publication. The examples go on.

Too often there is no methodology and even less public disclosure. I see this particular effort as an attempt to position the firm and CPRF as thought leaders. IMO, it was a poor attempt that runs contrary to some of the transparency and open dialog principles they profess.

  5. Bill Sledzik says:

Thanks, David. I will put that site in my feed reader and join the conversation as appropriate.

The organizations (APCO and CPRF) may not have promoted the study as “scientifically valid” in those words. But the news as released through PRNewswire makes no mention of the sample size, and it presents percentages in a way that implies validity. As Robert said in his post, we fail students for this sort of thing, since it misrepresents the data. It’s less a matter, as you say, of “satisfying academics” than of presenting reality.

    So while I applaud the organizations’ attempts to understand this important intersection between public relations and bloggers, I have to agree with Robert: This is more an example of “survey as marketing tool” than valid research.

    Update: Robert’s comment came in while I was writing and posting this one. No Robert, there isn’t much of an effort in the APCO/CPRF report to reveal the survey sample size, though the number “55 senior-level PR executives” is mentioned in the fine print on Page 4. I can send that pdf to anyone who’s interested. Just click the email link in the right column and let me know.

    And as I’ve said, the sample size is NOT included in the PRNewswire release, so I can only assume that PRWeek asked about it. From where I sit, I see a good bit of “spin” built into this story. We all should worry about the impact that careless treatment of data can have on our reputations. It’s not an academic thing at all.

One issue here is how the research report differs from the article about it. We PR people don’t like numbers; we like words. But we like to (over)simplify even more. So, instead of dealing with all that awful methodology stuff, we just cut to the chase: what are the findings? Qualitative assessments inherently can’t support extrapolation of results. Even with a questionnaire, if we’re asking for opinions and self-reported data, we’re not going to be able to predict anything. All that science makes our heads hurt, and we have SO many other things to do than understand the topic…

    Bah!
    S.

  7. Sally Hodge says:

Interesting issues raised in your post, which ponders statistically valid research versus surveys as a marketing tool. I find it especially relevant because I’m just in the process of creating a PR program around a client’s best practices study. To the client’s credit and my firm’s, we did cite the sample base of 150 CXOs up high in the executive summary and in the fourth paragraph of the news release. But what is a statistically valid sample? Would it be 200? Or 500? Or 10,000? Is it better, when slicing and dicing such numbers, to issue a disclaimer saying (in case anyone who knows anything didn’t already know this), “This is not a statistically valid sample; readers beware!”? (Got a feeling that the clients wouldn’t like that one…) Or to emphasize discreetly that the survey was intended to reveal trends? We in the field are kind of stuck between a rock and a hard place in trying to guide clients while also building their images in a positive and, ideally, meaningful way.

    Okay, okay…I, too, think 100 is too small a number, and thought burying the sample size was obfuscation at its finest!

  8. Bill Sledzik says:

    As I began to write this response, my wife walked in the door after another long day in the tax mill. As a CPA and tax adviser, she is sometimes asked to break the rules in order to help her clients save money. If she were to do this, she’d be violating the legal and ethical tenets of her profession, and thus endangering her licensure while also placing her clients in legal peril. So she doesn’t do it.

    In PR we’re too often expected to fabricate reality on behalf of our clients by embellishing the numbers — spinning them to paint a more favorable picture. Some of us do it. Some don’t.

    If we were real professionals, like the licensed accountant I live with, we would simply refuse, since a professional’s first duty is to society and the public interest, not to the client of the moment. In my wife’s profession, liars go to jail. In ours — all too often — they get promoted.

    So the answer to your question of how to handle shaky survey data lies in transparency. Report the research data, but explain its limitations in straightforward terms. The APCO/CPRF survey report simply doesn’t do that. And while it makes no false statements, it misleads with its omissions.

    What is statistically valid? I’m not a statistician, but let me see if I can do this in a paragraph and not sound like an idiot. To produce valid and generalizable results, you begin by drawing your sample from a population that includes ALL those you want to survey. Example: All left-handed pediatricians in the U.S. The size of the survey sample you draw will determine your margin of error. I won’t try to explain that calculation, as I don’t understand it myself. But we all know the lower the margin of error, the more precise and generalizable the data will be.
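That said, the formula itself is easy enough to plug numbers into, even if the theory behind it isn’t. Here’s a rough sketch in Python; it assumes a simple random sample at the usual 95 percent confidence level, and the sample sizes are purely illustrative:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion estimated from a
    simple random sample of size n, at roughly 95% confidence
    (z = 1.96). p = 0.5 is the worst case, i.e., the widest interval."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes, including the 102 respondents and the
# 47 bloggers from the APCO/CPRF study -- treated here, generously,
# as if they were random samples.
for n in (47, 102, 384, 1000):
    print(f"n = {n:4d}  ->  +/- {margin_of_error(n):.1%}")
```

Run that and you get roughly plus-or-minus 14 points for 47 respondents and 10 points for 102. Even granting random selection, which this survey can’t claim, quoting “59 percent” off a sample of 47 implies a precision the data simply don’t have.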

    But you must do one more thing. To generalize based on the data, you must do probability sampling (aka, the “random” sample). What this means is that every member of the population has an equal chance of being selected. It may sound complex, but to a research statistician it’s a day at the beach.
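If it helps, “equal chance of selection” is easier to show than to say. A toy sketch, with a population list that’s entirely made up just to show the mechanics:

```python
import random

# Hypothetical sampling frame: a complete list of everyone in the
# population you want to generalize to -- say, every PR blogger you
# could enumerate. Probability sampling starts from this full list.
population = [f"blogger_{i}" for i in range(1, 2001)]

# random.sample() draws without replacement, so every member of the
# frame has the same chance of being picked: a simple random sample.
respondents = random.sample(population, k=102)
print(respondents[:5])
```

The hard part isn’t the draw; it’s assembling that complete list in the first place. “All bloggers” isn’t a population anyone can enumerate, which is one more reason a study like this can only be exploratory.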

    There’s no shame in doing an exploratory survey to test the waters, so long as you warn readers that the results are not generalizable, that is, cannot be projected onto the larger population with any confidence at all. But you owe it to your readers to state explicitly the limitations of your data.

    Of course, if the purpose of the survey is just to support a marketing campaign, then fire away. But don’t expect a knowledgeable person to trust your data — ever.

  9. Greg Smith says:

Unfortunately, this may highlight a deeper problem for PR people: the growing reluctance of people to complete surveys. It really is a worrying trend, with massive implications for researchers.

[…] did it hold discussions with? Well, the sample size for this study was a grand 102 people (55 PR professionals and 47 bloggers). While I don’t think the conclusions themselves are anything terrifically new or […]
