You’ve probably heard something like this before: A junior manager walks into a meeting and, trying to impress, announces, “If the United States would only reduce oil imports from Norway, we could literally reduce deaths of drivers colliding with railway trains … per capita. It’s true, I got the data from the ‘X’ U.S. Department website and the ‘X’ monthly journal of research.”
This junior manager was met with blank stares, blinking eyes, and banishment to re-read “Statistics 101.” The poor manager fell prey to “lies, damned lies, and statistics.” While the story is an exaggeration, being misled by statistics and graphs, whether intentionally or through incompetence, is all too common.
Bad Polling / Surveys
Academics spend an inordinate amount of time and resources testing questions, question order, and question effects in surveys. For example, one of the largest political science studies, housed at a major mid-western university, regularly welcomes and publishes pilot studies to explore “new methods and new substantive instrumentation” against its own survey questions, and it has been conducting political surveys continuously since 1948. Survey questions can go astray for any number of reasons: leading or biased questions, poorly worded questions, forced-choice questions, and questions that run headlong into social desirability bias, to name a few. Survey questions should strive for neutrality and always offer an “I don’t know” option. When approaching controversial topics, great care should be taken to avoid respondents offering the socially desirable answer.
Not Communicating Polling Uncertainty
All research, and especially survey research, contains uncertainty, and this uncertainty is almost never communicated correctly. A poll is released with a margin of error and a confidence level, for example: this poll has a margin of error of ±5% at a 95% level of confidence. The confidence level means that if this exact poll were repeated 100 times, the reported interval would be expected to capture the true population figure about 95 times; the other five times, the sample result would miss it. The margin of error describes how far the sample result may plausibly sit from the true figure in the population, normally expressed as plus or minus a percentage. So if the survey has a margin of error of plus or minus 5% and the question result is that 42% of people like “the product,” the true figure could plausibly be anywhere from 37% to 47%. Put both together: we would expect the 37% to 47% range to contain the true value 95 times out of 100. This uncertainty is the inherent danger of relying on a single poll result.
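To make the arithmetic concrete, here is a minimal sketch of where a roughly ±5% margin comes from, assuming a simple random sample, the standard normal approximation for a proportion, and a hypothetical sample size of 385:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    # z = 1.96 is the critical value for a 95% confidence level
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

p_hat = 0.42   # 42% of respondents like "the product"
n = 385        # hypothetical sample size
moe = margin_of_error(p_hat, n)
print(f"{p_hat:.0%} ± {moe:.1%} -> {p_hat - moe:.1%} to {p_hat + moe:.1%}")
# 42% ± 4.9% -> 37.1% to 46.9%
```

Because the margin shrinks with the square root of the sample size, halving it requires roughly quadrupling the number of respondents.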
Bad Actors
While they are rare, bad actors exist, some for hire, who will intentionally commission a survey with a predetermined result in mind. In addition, there are bad actors who hack away at data without a hypothesis, slicing and re-slicing it until something looks statistically significant.
Company Culture
Most importantly, your company’s culture may be driving poor research. Ask yourself: how open is your senior executive team to hearing, “I know we spent significant time, energy and money on this research, and we found the results to be inconclusive. The results indicate additional research needs to be done”?
Not Telling the Entire Story
“More than 80% of veterinarians recommend brand x dog food …” What they aren’t reporting is that the survey allowed veterinarians to pick multiple brands, so several competing brands could truthfully make the same claim, as the sketch below shows. While this may be acceptable in advertising, it is not acceptable to manipulate or wordsmith results when conducting research.
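A hypothetical tally, invented purely for illustration, shows why: when each veterinarian may pick several brands, the per-brand percentages no longer sum to 100%, and more than one brand can clear the 80% bar.

```python
# Hypothetical survey: 10 vets each asked "Which brands would you
# recommend?" with no limit on how many they may choose.
recommendations = [
    {"A", "B", "C"}, {"A", "B"}, {"A", "C", "D"}, {"A", "B", "C"},
    {"B", "C"}, {"A", "C"}, {"A", "B", "D"}, {"A", "C"},
    {"B", "C", "D"}, {"A", "B", "C"},
]

for brand in "ABCD":
    share = sum(brand in picks for picks in recommendations) / len(recommendations)
    print(f"Brand {brand}: {share:.0%} of vets recommend it")
# Brand A: 80% ... Brand C: 80% -- two brands can honestly run the same ad,
# and the four percentages sum to 260%.
```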
Faulty Correlations
Repeat to yourself: “correlation is not causation; correlation is not causation.” It is easy to laugh at the junior manager in the intro, but this mistake is all too easy to make. Human brains are hardwired to take shortcuts, group information cues, and seek out patterns. This “skill” is what lets you understand that a rustling in the jungle brush could be a hungry tiger, so you should run. The same tendency to use heuristics and jump to conclusions is what mixes up Norway oil imports and train collisions. Two measures that both trend in the same direction over time will correlate strongly even when they have nothing to do with each other. Human behavior is messy, and proving which variable comes first and causes the other is extremely difficult.
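A minimal sketch of how easily this happens: the two series below are fabricated and share nothing except an upward drift over time, yet their correlation coefficient comes out very high.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unrelated, invented series that both happen to drift upward over 20 years
oil_imports = 50 + 2.0 * np.arange(20) + rng.normal(0, 3, 20)   # arbitrary units
train_deaths = 10 + 0.5 * np.arange(20) + rng.normal(0, 1, 20)  # arbitrary units

r = np.corrcoef(oil_imports, train_deaths)[0, 1]
print(f"Pearson r = {r:.2f}")  # typically well above 0.9
```

The lurking third variable here is time itself: any two quantities that grow (or shrink) over the same period will “correlate” this way.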
Improper Presentation of Results
(Graphs: the same sales data plotted on two different y-axis scales)
One of the most common ways to purposely or unwittingly mislead an audience is to manipulate the visual presentation of results. The vast majority of people are visual learners, and one easy way to manipulate a report or a boardroom is to manipulate a chart’s axes or scales. The most common x-axis manipulation is changing the time period. Need to downplay a downward trend in this quarter’s sales? Simply display 40 quarters of sales data. The most common y-axis manipulation is changing the scale. As seen above, the exact same data plotted on different scales tells two different tales. Are sales increasing or flat?
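To reproduce the trick, here is a minimal matplotlib sketch with invented quarterly numbers: the same four data points plotted twice, once on a truncated y-axis and once on a zero-based one.

```python
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
sales = [100, 101, 102, 103]  # invented quarterly sales, in $k

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Truncated y-axis: a ~3% rise looks like explosive growth
ax1.plot(quarters, sales, marker="o")
ax1.set_ylim(99, 104)
ax1.set_title("Zoomed scale: sales surging?")

# Zero-based y-axis: the same data looks essentially flat
ax2.plot(quarters, sales, marker="o")
ax2.set_ylim(0, 120)
ax2.set_title("Zero-based scale: sales flat?")

fig.tight_layout()
plt.show()
```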
Conclusion
The prevalence of online tools has made surveying and the presentation of results so “easy” that anyone can sign up and conduct “research.” While most research is conducted ethically, healthy skepticism is part of the scientific and research process. A good manager will make any marketing or data team explicitly state the underlying assumptions behind their research. A great manager understands there are few “conclusive results” and continuously asks their team, “Where may your research be incorrect or incomplete?”