JP Observer: Election Warning #2: Watch out for media hype of poll ‘results’

“Don’t rely on misconceptions about aging when thinking about President Joe Biden’s—or any other person’s or candidate’s—competency.”

That was Presidential Election Warning #1 in the JP Observer published in September. Voters should look at his specific job performance and actual facts about aging and competence in general before letting a simple birth year, a cautious gait, or a slip of the tongue by one candidate sway them. Instead, we should compare Biden’s competence and credentials with those of fellow senior citizen and serially indicted Donald Trump, his presumed opponent in the general election.

Election Warning #2 focuses on all those attention-getting polls we keep hearing about. “It’s relentless,” JP resident Georgia Mattison said about reporting that mentions polls.

Voters need to make sure any poll they see these days is reliable and uses scientific methods to gather and analyze data before giving it any credence.

More important—and a bit more difficult, actually—we need to be very aware of how media outlets portray and “spin” raw poll results for their own benefit these days.

Reporting on the number of respondents who chose Joe Biden versus Donald Trump in a New York Times/Siena College poll of six swing states just last month provides a prime example: drama got ahead of the reporting, and media spin went beyond what the complete numbers actually indicated.

Difficulties with polling start with what can be a wide gulf between the data themselves and pollsters’ verbal descriptions of the results. The gulf often widens further as the results reach the newsrooms and editorial offices of media outlets, print and electronic, where they are crafted to be attention-getting.

Reporting on poll results—often hyped as breaking news when first unveiled—is closely related to media businesses being just that: money-making enterprises. And the money to be made by media on polls doesn’t come from politicians or parties. Media outlets basically sell their numbers of readers, listeners, viewers, and participants (i.e., audiences) to prospective advertisers.

And what better way for media (and freelance journalists) to attract eyes and ears than with the promised revelation of where the horses are on the track to the next presidential election, and why? In short, presidential race poll results tend to get big headlines from media types ready to make a dollar from over-simplified, quick-to-present numerical findings.

Several independent pollsters and journalism nonprofits offer good advice to journalists about how to deal with and present their polls and results. Experts suggest that journalists should inform their audiences of the time, place, people, content and methodology of the survey they are reporting on.

Most important of all, along with the results, journalists should give the poll’s “margin of error,” which sound mathematical principles say applies even to what are basically good polls.

“Many in the news media compound the [interpreting of polls] situation by misleading reporting. Too many reports, for example, ignore that each poll carries a margin of error—nor explain what that means. Adding fine print at the bottom of a graphic doesn’t cut it,” wrote Frank O. Sotomayor for Poynter in 2020. The Poynter Institute for Media Studies is a respected non-profit journalism school and research organization in St. Petersburg, Florida.

If an article or opinion piece doesn’t give the margin of error prominently along with the results it is writing about, we should automatically be skeptical of what the journalist says about them. The outlet is probably going for sensation over accuracy to save time or space and, even more important, to attract an audience for advertisers.

CNN has its own polling company. In an interview, CNN polling director Jennifer Agiesta warns against horse-race polls: “Because of sampling, any race that’s closer than something like a 5-point margin will mostly just look like a close race in polls.”

Agiesta has lots of advice for interpreting polls: https://www.cnn.com/2023/07/15/politics/polls-elections-what-matters/index.html

Other context that might need to be provided in order to give the correct impression includes results from similar polls at other times, the amount of time until the election, and other polling trends.

“Since 2003, the national mood has grown unbelievably sour, and since 2005, sitting presidents have had underwater approval ratings during about 77 percent of their terms,” columnist David Brooks wrote in the Times on Nov. 9, offering important historical context for the Times/Siena and all presidential poll results these days.

Brooks quoted progressive political strategist Michael Podhorzer saying “a lot of this negativity is not a reflection on particular politicians” but is “indicative of broad and intense dissatisfaction with our governing institutions and political parties.” Brooks and Podhorzer said that doesn’t mean respondents will actually vote the same negative way, especially when the election is far away.

Two Sunday morning network TV news programs seem to love poll results as a jazzy way to talk about lots of issues regularly and quickly. They often show a few topic words and a big number on cards, a wallboard, or a screen, say something like “Polls show xx percent of people say xxx about xxx,” and sometimes add a sentence or two. They cover 5-10 issues that way, and that’s a segment, with little or no further context describing the who, what, when, why, etc. Last Sunday CBS showed a tiny date range and margin of error in one corner of each issue card.

If audience members come across poll reporting or opinion that doesn’t contain a complete description of the poll, they can search for the relevant information themselves on pollster websites, where it is sometimes posted. Or they can simply take in the poll coverage without the needed context, reminding themselves that it is therefore to be read more for entertainment than edification. Republican candidate Chris Christie actually complained about the way poll results were presented on CBS, as well as the sample size.

Polls that ask everyday people about an area of specific expertise are never good or credible. One this past year asked something like: “Do you think Joe Biden and/or Donald Trump have dementia?” (The poll gave individual choices.) Only a specially trained medical professional can make such a diagnosis, after administering tests and gathering information. This is an example of a junk poll, even if a known company did it.

Next, those sensationalist pollsters will be asking people if they think NASA has the right fuel formula, if we’re not careful!

The Roper Center for Public Opinion Research offers a PDF of “20 Questions a Journalist Should Ask About Poll Results” from the National Council on Public Polls, for journalists to consult before they report or comment on polls. Audiences for journalism might well ask the same questions of articles and columns, too.

Pew Research Center features a short online course called “Public Opinion Polling Basics.”

Wild coverage of the Times/Siena poll

Media coverage of the results of the New York Times/Siena College poll of Biden v. Trump provides a great example of several things that can go wrong for those of us in the poll-coverage audience. The poll was taken Oct. 22 to Nov. 3, 2023, in six of what pollsters called “battleground states” or “swing states” (my preferred term for states that are key to winning the electoral college vote).

Media coverage of these poll results had a huge content gap, favoring the horse-race aspect. What respondents said in answer to nearly a dozen issue questions in the survey—including about the economy and Ukraine—did not seem to get much, if any, media coverage, even as background. The vast majority of coverage consisted of Biden v. Trump talk.

The so-called “bottom line” (actually located near the top of the long results document at https://www.nytimes.com/interactive/2023/11/07/us/elections/times-siena-battlegrounds-likely-electorate.html) showed Trump 3 percentage points ahead of Biden among definite and likely voters in those states. This was shouted from headlines all over the place. (Some media reported the difference as 4 points, but that counted only people who said they were definitely going to vote, with “likely to vote” respondents not included.)

Democrats wrung their hands. Republicans gloated. Pundits puzzled, because Democrats had won quite a few local/state election battles the Tuesday just before the results were released.

As news pages continued to report on the poll, opinion writers took off, spinning toward whether Democrats should be discouraged and, if so, what Democrats and Biden supporters should think and do. To their credit, a few journalists pointed out that the election is not for another year, and polls this far out aren’t usually good predictors.

“Trump Leads in 5 Critical States as Voters Blast Biden, Times/Siena Poll Finds,” was the headline of the article about the poll results in the New York Times itself on Nov. 5.

New York Times chief political analyst Nate Cohn referred to “…the newest New York Times/Siena College poll, which seemed to spell doom for the Democrats,” in a column on Nov. 8.

A week later, media outlets were still spinning: “Fretting but not yet fearful in the Biden camp” was a page 1 headline in the Boston Globe about reaction to the results.

Worst of all, buried in the data on the New York Times/Siena College poll website were two key characteristics of the likely-voter respondents—unremarked upon in dramatic media coverage—that were no doubt major factors in the outcome showing 48 percent of “definite and likely” voters for Trump and 45 percent for Biden. Why?

a) Poll respondents identified themselves as registered in or leaning toward the two major parties in almost exactly the same proportions as the vote spread: 49 percent Republican, 45 percent Democrat. (Party affiliation is recorded near the bottom of the results document from Times/Siena.) What a noteworthy surprise those horse-race results turned out to be. Not. The incredibly close presidential choices fell almost exactly along respondents’ stated, preferred party lines!

b) The same poll that showed a difference of 3 points between candidates’ “vote” results had a margin of error of plus or minus 2 percentage points. (This margin was stated in small-print footnotes at the distant bottom of the pollster’s results documents.) So, scientifically speaking, Biden and Trump may have been as little as 1 point apart—a total mismatch with all the shouting headlines.
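For readers who want to see the arithmetic behind a margin of error, here is a minimal sketch in Python. The sample size below is made up purely for illustration (the real Times/Siena sample was different), and the formula is the standard textbook approximation for a simple random sample, which real polls only approximate.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error, in percentage points,
    for a reported share p from a simple random sample of n people."""
    return z * math.sqrt(p * (1 - p) / n) * 100

n = 2400                       # hypothetical sample size, for illustration only
moe = margin_of_error(0.5, n)  # p = 0.5 gives the worst-case margin

print(f"Margin of error: +/- {moe:.1f} points")
# Each candidate's true share plausibly lies anywhere in these ranges:
print(f"Trump at 48 could really be {48 - moe:.1f} to {48 + moe:.1f}")
print(f"Biden at 45 could really be {45 - moe:.1f} to {45 + moe:.1f}")
```

Because the two ranges overlap, a headline “3-point lead” is consistent with anything from a dead heat to a somewhat wider gap, which is exactly the caution the margin of error is supposed to convey.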

BTW, very few articles and columns that reported on the poll gave the margin of error—information described by experts as absolutely necessary to credible poll reporting.

Random Sampling

Fortunately, we usually don’t have to get advanced math degrees to analyze polling methods and data for ourselves. The highly respected organization FiveThirtyEight pretty much does those calculations and studies and supplies us with pollster evaluations at https://projects.fivethirtyeight.com/pollster-ratings. The website rates more than 100 pollsters with grades and reports about accuracy, etc. on a handy grid. The New York Times/Siena College (Times/Siena) polling group had an A+ then.

A Harvard Data Science Review article in September said that, because so few people respond, “for all practical purposes all random sampling is dead.” The article argues that the polling field needs to move to a more general paradigm built around what’s called the Meng (2018) equation, which characterizes survey error for any sampling approach, including nonrandom samples. Most poll surveying relies on various kinds of outreach besides telephone these days, including email.
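To make the Meng equation a bit more concrete, here is a small numerical check (my illustration with made-up data, not the article’s). Meng’s identity says the gap between a poll’s sample average and the true population average equals the correlation between responding and the answer, times sqrt((N − n)/n), times the population’s standard deviation, no matter how nonrandom the sample is:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                                # made-up population size
y = rng.normal(50.0, 10.0, N)              # each person's true opinion score

# Nonrandom response: higher-y people are slightly likelier to answer the poll.
p_respond = 0.02 / (1 + np.exp(-(y - 50.0) / 20.0))
r = rng.random(N) < p_respond              # True for people who responded
n = r.sum()

observed_gap = y[r].mean() - y.mean()      # sample average minus true average

rho = np.corrcoef(r, y)[0, 1]              # the "data defect correlation"
meng_gap = rho * np.sqrt((N - n) / n) * y.std()  # Meng's decomposition

print(observed_gap, meng_gap)              # the two numbers agree
```

The takeaway in the article’s terms: whenever responding is correlated with opinion (rho is not zero), even a very large nonrandom sample can be badly off, and the equation quantifies exactly how far.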

Last words

I asked a person with a statistics minor in college and experience analyzing statistics in professional situations to analyze the Times/Siena poll results. She looked them over for a while, finally saying she concluded from them that the race is very close and that, with the election so far away, a lot could change in the interim. A far cry from the major media interpretations and messages!

She suggested I remind readers of what Mark Twain used to say: “There are three kinds of lies: lies, damned lies, and statistics.”

I would add, “…all depending a lot on how numbers are analyzed and reported.” And it’s on us readers, listeners and viewers to be skeptical of everything we are told about polls until we do some checking.

According to Forbes magazine on Nov. 9, when a reporter asked President Biden, in response to reporting on the Times/Siena poll, if he believes he’s trailing in battleground states, he said simply, “No, I don’t.”

The headline on that Forbes article was, “Biden Says He Doesn’t Believe Polls Showing Him Trailing Trump — After Daunting New York Times Survey.”

Talk about spin—this headline has several twists to it! Seems like the media coverage was typically more “daunting” than the actual raw survey results.
