
Dirty Little Secret of Polling ~ #14

By Alan F. Kay, PhD
2002 (fair use with attribution and copy to authors)
Dec. 11, 2002

My friend, commercial pollster Fred Steeper, called it "The Dirty Little Secret of Polling". I had given the same questions to two different high-quality field houses to do the interviewing. The findings were great, useful, and consistent with preceding findings, but there were some big, unexpected differences. The dirty little secret was that those differences could have been rectified but were not, unless the pollster insisted on it. But pollsters prefer living with the dirty little secret. Why? Well, the differences revolve around obscure, hard-to-explain technicalities and can easily go unnoticed by clients. Many sponsors, pundits, officials, reporters, editors, and anchors are totally unaware of discrepancies that depend on the extent to which a field house "probes the Don't Knows".

I know. Your eyes are glazing over already. But listen.

Anyone trying to follow polls needs to know how to tell a good poll from a bad poll and how to take these underlying discrepancies into account. Not being up-front about the problem is increasingly giving all polling a black eye. It makes the findings of different field houses, and thus poll findings in general, vary more than necessary. It caters to sloppy quickie polls. Most important of all, when we apply what we learn from exploring the innards of the dirty little secret, it turns out we can extract new findings from polls that most pollsters will never find. And you'll see all that happen right here.

It won't be hard. Going into the technicalities would be unsuitable for a short column like this. Instead we will learn everything from just one question, a question that many different field houses have asked frequently. We'll use, in fact, the most frequently asked question in polling:

"Do you approve or disapprove of the way George W. Bush is handling his job as president?"

Table 1 shows the field dates of the most recent asking of this single question, by eleven different highly regarded and prominent polling organizations in the United States, followed by the number of times they have each asked this question in the past 22 months (311 times altogether). The organizations are often a collaboration of three: pollster/print news/TV news. The field house used may or may not be independent of the collaboration.

Table 1. In order of most recent asking.

 #    Pollster/Print news/TV news    Most recent      Times asked this same question
                                     field dates      in past 22 months
 1.   Gallup/USA Today/CNN           11/8-10          68
 2.   Princeton SRA/Newsweek         11/7-8           32
 3.   NYT/CBS News                   11/2-4           33
 4.   Wash. Post/ABC News            10/31-11/2       26
 5.   Ipsos-Reid/Cook                10/28-31         17
 6.   Zogby/Reuters                  10/26-29         27
 7.   Harris/Time/CNN                10/23-24         17
 8.   OpinionDyn/FOX News            10/22-23         36
 9.   Hart-Teeter/WSJ/NBC News       10/18-21         13
10.   Princeton SRA/Pew              10/17-27         23
11.   Harris                         10/15-21         19
                                     Total            311

Table 2 shows the responses to the most recent asking by each organization (listed in the same order as in Tables 1 and 3). The percentage of the public who give no answer, say they are not sure, or, as in (5), have "mixed feelings" is recorded under the catch-all "Don't Know" (DK).

 
Table 2. Responses to most recent asking (percentages). Organizations in same order as in Table 1.

 #    Approve   Disapprove   DK   Population sampled    Wording variations
 1.   68        27            5   Adults
 2.   60        30           10   Adults
 3.   61        30            9   Likely Voters
 4.   67        32            1   Likely Voters
 5.   64        34            2   Adults                "mixed feelings"
 6.   64        35            1   Likely Voters
 7.   61        33            6   Adults                "In general"
 8.   60        30           10   Likely Voters
 9.   63        31            6   Registered Voters     "In general"
10.   59        29           12   Adults
11.   64        35            1   Adults                *

 
In all 11 surveys, the populations sampled are as indicated: either all "adults" (over 18), people who say they are "likely to vote", or "registered voters".

In surveys (7) and (9) the phrase "In general" was included as the first words of the question. In the Harris poll (11), marked with an asterisk in Table 2, the responses allowed were "excellent", "pretty good", "only fair", or "poor". As is frequently done, "excellent" and "pretty good" were combined under "approve", while "only fair" and "poor" were combined under "disapprove".
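To make that bookkeeping concrete, here is a minimal sketch in Python of how raw answers collapse into the three categories of Table 2. The response labels and the tiny sample batch are purely illustrative, not any pollster's actual coding scheme.

# Minimal sketch: collapsing raw answers into Approve / Disapprove / DK.
# The labels below are illustrative, not a pollster's actual codes.

APPROVE = {"approve", "excellent", "pretty good"}       # Harris top two
DISAPPROVE = {"disapprove", "only fair", "poor"}        # Harris bottom two

def categorize(answer: str) -> str:
    """Map one raw answer to the three categories used in Table 2."""
    a = answer.strip().lower()
    if a in APPROVE:
        return "Approve"
    if a in DISAPPROVE:
        return "Disapprove"
    # "not sure", "no answer", "mixed feelings", refusals, etc.
    return "DK"

# Tally a small batch of hypothetical responses.
tally = {"Approve": 0, "Disapprove": 0, "DK": 0}
for r in ["Excellent", "only fair", "not sure", "pretty good", "poor"]:
    tally[categorize(r)] += 1
print(tally)   # {'Approve': 2, 'Disapprove': 2, 'DK': 1}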

A clue to unlocking the secret comes from noting that organizations (4), (6), and (11), which have 1% DK in Table 2, generally obtained very low DKs when they asked the question repeatedly in earlier surveys (not shown in Table 2, but available at www.PollingReport.com). The four organizations whose DKs range from 9% to 12%, numbers (2), (3), (8), and (10), similarly had large DKs when they asked the same question earlier.

A polling organization that wants small DKs asks its field house to "probe the DKs", which requires the interviewers to use a number of techniques to increase either "approve" or "disapprove" responses and reduce DKs. Techniques include: (1) waiting patiently for an answer, (2) after a long wait, encouraging a substantive answer rather than a DK with a neutral colloquial phrase like "Well, whad'ya think?", (3) being willing to call back later if the respondent feels rushed, and (4) reminding respondents that the survey results are important in determining what kind of governance we'll all get in the future.

Those pollsters who need to meet short deadlines, or put their interviewers on hourly quotas, or want their costs as low as possible (the shorter the survey, the less it costs the sponsor) want an interviewer to give the respondent almost no thinking time. After a second or two the interviewer says, "Don't know? That's OK." Most respondents assent, and it's on to the next question. Another reason for encouraging a quick DK response is ideology. Many organizations sincerely believe that the general public knows little about anything as important as politics. For some or all of these reasons, certain organizations prefer large DKs.

In each of the 11 rows of Table 2, the ratio of "approve" to "disapprove" is close to two to one. Table 3 results are derived from Table 2 values by splitting the percentage points of each DK in that two-to-one ratio, adding those points to the "approve" and "disapprove" percentages in the same row, and rounding off fractional points for best fit. For example, the third-row DK is 9; split 2 to 1, that is 6 and 3. The 6 is added to "approve" and the 3 to "disapprove", making them 67 and 33, with DK reduced to zero. If the DK is not divisible by 3 without fractions, as in row 1, where it is 5, then 5 divided 2 to 1 is 3.33 and 1.67, which rounds off to 3 and 2; added to the "approve" and "disapprove" percentages, these give exactly what is shown in Table 3. This procedure, called allocating the DKs, is based on the idea that those who do not give a substantive answer lean "approve" or "disapprove" in roughly the same ratio as those who do. Allocating the DKs develops empirical evidence that substantiates the validity of the idea.
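Here is a minimal sketch of that allocation in Python, under the assumptions just described: each row's DK is split two to one and the pieces are rounded so the row still sums to 100. Because the column rounds "for best fit" by hand, a few rows here may land a point away from the published Table 3.

# Table 2 values as (approve, disapprove, dk), in the order of the 11 surveys.
table2 = [
    (68, 27, 5), (60, 30, 10), (61, 30, 9), (67, 32, 1),
    (64, 34, 2), (64, 35, 1), (61, 33, 6), (60, 30, 10),
    (63, 31, 6), (59, 29, 12), (64, 35, 1),
]

def allocate_dk(approve, disapprove, dk, ratio=2/3):
    """Allocate the DK points: 'ratio' of them to approve, the rest to
    disapprove, rounded so the row still totals 100."""
    add_approve = round(dk * ratio)
    add_disapprove = dk - add_approve
    return approve + add_approve, disapprove + add_disapprove

for i, row in enumerate(table2, start=1):
    a, d = allocate_dk(*row)
    print(f"{i:2d}. approve {a}%  disapprove {d}%")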

 
Table 3. Responses, most recent asking, "Don't Knows" allocated to 0. Organizations in same order as in Table 1.

 #    Approve   Disapprove
 1.   71        29
 2.   66        34
 3.   67        33
 4.   67        33
 5.   65        35
 6.   65        35
 7.   65        35
 8.   66        34
 9.   67        33
10.   67        33
11.   65        35

 
Look at Table 3. At the top, (1) is anomalous: 4 points (or more) higher for "approve", and 4 points (or more) lower for "disapprove", than any of the other 10 cases. The 10 are each within ±1 point of 66% "approve" and 34% "disapprove". Was there something momentous that happened before (1) was in the field and after (2)-(11) had been completed? Yes, there was. The UN Security Council unanimously approved the Iraq resolution Bush wanted, making it clear that the "whole world was behind the United States", a very important forward step for Bush's evolving preemptive policy. Bush's approval rating went up significantly after his big United Nations win. If we had not allocated the DKs, the distinction of the Gallup result would have been almost unnoticeable in Table 2.
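One way to make that comparison concrete is sketched below; the median-based consensus is my choice of summary, not the column's. It takes the allocated results of Table 3 and flags any survey more than a point from the 66/34 consensus.

from statistics import median

# Allocated results from Table 3, surveys 1-11, as (approve, disapprove).
table3 = [(71, 29), (66, 34), (67, 33), (67, 33), (65, 35), (65, 35),
          (65, 35), (66, 34), (67, 33), (67, 33), (65, 35)]

consensus = median(a for a, _ in table3)     # 66 approve, hence 34 disapprove
for i, (approve, disapprove) in enumerate(table3, start=1):
    off = approve - consensus
    flag = "  <-- anomalous" if abs(off) > 1 else ""
    print(f"{i:2d}. {approve}/{disapprove}  ({off:+d} vs. {consensus}/{100 - consensus}){flag}")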

We have shown empirically that the variation in responses in Table 2, which in theory might be due to several differences among the 11 surveys, can be ascribed to a sampling error of only ±1%, not the usual ±3%. Within that small error, the facts that (A) the field dates of the 11 surveys were not exactly the same but stretched over a short 24-day period, (B) the populations sampled varied, and (C) the exact wording varied somewhat are all immaterial.

Allocating DKs is useful, makes polling results more consistent, and sometimes tells us something important that would otherwise be totally lost. I'm for it.

My very first column, "Telling a Good Poll From a Bad Poll - 5/21/02", looked at the big differences between findings of "good" public-interest polls and "bad" private interest polls. Now we have seen that a choice in how DKs are probed also divides pollsters between "good" and "bad".

 

 
